The PETRAS consortium of nine leading universities, led by UCL with Imperial College London, University of Oxford, University of Warwick, Lancaster University, University of Southampton, University of Surrey, University of Edinburgh and Cardiff University, will work together over the next three years to explore critical issues in privacy, ethics, trust, reliability, acceptability, and security for the Internet of Things.
Oxford’s participation in the consortium is led by the e-Research Centre and the Oxford Internet Institute, and also involves the Department of Computer Science, the Department of Engineering Science and the Saïd Business School, with further collaborations planned. The OII will lead work on the socio-ethical aspects of IoT research, technologies, and applications.
Cas Cremers, Ivan Flechais, Andrew Martin, Ivan Martinovic, Kasper Rasmussen, Andrew Simpson
The Software and Systems Security Research Group brings together those researchers who are interested in solving security problems associated with the design, development, deployment and maintenance of large-scale software-based systems. The faculty members associated with this group are also responsible for the delivery of the department's MSc in Software and Systems Security. This synergy means that the department's research and teaching activities in this exciting area are extremely closely related.
The Alan Turing Institute is the UK's national institute for the data sciences.
The mission of the institute is to: undertake research in the data sciences at the intersection of computer science, mathematics, statistics and systems engineering; provide technically informed advice to policy makers on the wider implications of algorithms; enable researchers from industry and academia to work together to undertake research with practical application; act as a magnet for world leaders in academia and industry to engage with the UK; attract senior business and public service leaders to engage with and be trained in the business of data and analytics; and promote the transfer of skills and insight as well as of technologies. The institute will also bring to bear insights from the social sciences and the study of ethics on the data sciences.
Oxford’s involvement in the Institute is led by five departments: The Mathematical Institute, Department of Computer Science, Department of Statistics, Department of Engineering Science, and the Oxford Internet Institute. The new Institute will tap into world-leading strengths and achievements across these scientific disciplines.
Hub for Networked Quantum Information Technologies (NQIT)
December 2014 - November 2019
The NQIT Hub, part of the UK National Quantum Technology Programme, is led by the University of Oxford and involves 29 globally leading quantum centres and major companies, all working together to realise an entirely new technology sector. The flagship goal of NQIT is to build the Q20:20 machine, a fully-functional small quantum computer. This device, targeted to be operational within five years, would far exceed the size of any previous quantum information processor.
Applications of the technology include 'machine learning' – the challenge of making a machine that can understand patterns and meaning within data without having to be 'taught' by a human.
The work in this group targets both the hiding (steganography) and detection (steganalysis) of hidden data, typically in digital media. The research group, with close collaborators in France and the Czech Republic, has three strands:
• Practically-deployable steganalysis, attacking problems from unusual angles (for example by classifying network users, rather than individual media objects, as suspicious) and aiming for ultra-low false alarm rates.
• Theoretical aspects of steganalysis, from a signal-detection and hypothesis testing point of view; the "square root law" of capacity.
• Practical aspects of steganography, including hiding in Twitter.
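To make the embedding side concrete, here is a minimal sketch of naive least-significant-bit (LSB) replacement, the textbook baseline that much of this steganalysis work targets. It is illustrative only: the cover is a plain byte array rather than a real image format, and research-grade embedding is adaptive and statistically aware.

```python
# Minimal illustrative sketch of least-significant-bit (LSB) steganography.
# Toy example only: real embedding schemes are adaptive and statistically
# aware, and detectors exploit exactly the artefacts this naive method leaves.

def embed(cover: bytearray, message_bits: list[int]) -> bytearray:
    """Hide one bit per cover byte in the least-significant bit."""
    assert len(message_bits) <= len(cover)
    stego = bytearray(cover)
    for i, bit in enumerate(message_bits):
        stego[i] = (stego[i] & 0xFE) | bit
    return stego

def extract(stego: bytearray, n_bits: int) -> list[int]:
    """Recover the first n_bits hidden bits."""
    return [stego[i] & 1 for i in range(n_bits)]

cover = bytearray(range(16))          # stand-in for pixel bytes
bits = [1, 0, 1, 1, 0, 0, 1, 0]
assert extract(embed(cover, bits), len(bits)) == bits
```

In line with the "square root law" studied in the second strand, the payload that can be embedded without becoming statistically detectable grows only in proportion to the square root of the cover size.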
Cryptography is the science and art of ensuring private and authenticated communications, for example over the internet, in our bank card transactions and with our mobile phones. Unfortunately, most cryptographic protocols used today will become totally insecure once large-scale quantum computers are built. Anticipating this, we must already develop the next generation of cryptographic protocols and organise the transition in our security infrastructures.
Since its creation in 2015 (with GCHQ financial support), the Cryptography Group in Oxford has been connecting the strong mathematical expertise in Oxford to current security challenges, in particular by developing post-quantum cryptography. Our research results in this first year of existence have included new digital signature and zero-knowledge protocols, security analyses of existing protocols, and number theory results connected to cryptography. We have also built research links with other world-leading groups in the Computer Science and Physics Departments at Oxford, and we are currently investigating new security threats posed by quantum computers.
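To give a concrete flavour of post-quantum design (and not one of the group's own protocols), the sketch below implements Lamport's one-time signature scheme, whose security rests only on a hash function rather than on the number-theoretic problems that quantum computers break:

```python
# Minimal sketch of a Lamport one-time signature, one of the simplest
# hash-based (and hence plausibly post-quantum) signature schemes.
# Illustrative only: a key pair must never sign more than one message.
import hashlib, secrets

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen():
    # Two random secrets per message bit; the public key is their hashes.
    sk = [[secrets.token_bytes(32) for _ in range(2)] for _ in range(256)]
    pk = [[H(x) for x in pair] for pair in sk]
    return sk, pk

def msg_bits(msg: bytes) -> list[int]:
    digest = H(msg)
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(msg: bytes, sk):
    # Reveal one secret per bit of the message digest.
    return [sk[i][b] for i, b in enumerate(msg_bits(msg))]

def verify(msg: bytes, sig, pk) -> bool:
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(msg_bits(msg)))

sk, pk = keygen()
sig = sign(b"hello", sk)
assert verify(b"hello", sig, pk)
assert not verify(b"tampered", sig, pk)
```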
The Group is also contributing to long-term education on (post-quantum) cryptography in the UK: we have created two new courses on cryptography, offered to MFoCS (Mathematics and Foundations of Computer Science) master's students in Oxford; we have launched a new seminar series (attracting world-renowned speakers such as Adi Shamir and Antoine Joux); and we have supervised dissertation projects in this area.
November 2015 - October 2017
The 5G-ENSURE project brings to the 5G PPP a consortium of telcos, network operators, IT providers and cyber security experts addressing priorities for security and resilience in 5G networks. The project has received funding of just over €7.5m out of a total of €3.5bn for the 5G PPP initiative. It will:
- Deliver strategic impact across technology, business enablement & standardisation.
- Develop a set of non-intrusive security enablers (AAA, Privacy, Trust, Monitoring, Network Management and Virtualization Isolation) for the core of the 5G Reference Architecture.
- Define a 5G Security Architecture needed to expand the mobile ecosystem giving operators a platform for entirely new business opportunities.
- Initiate a 5G Security testbed vision and initial set-up in which the security enablers will be made available and demonstrated.
5G-ENSURE will define a shared and agreed 5G Security Roadmap with various 5G stakeholders. The outcome will be a trustworthy 5G system offering reliable security services to customers with a “zero perceived” downtime for service provision.
The EPSRC MyTrustedCloud project (2011) was highly successful, and aspects of this work are being taken forward directly in an InnovateUK-funded Knowledge Transfer Partnership. The original project investigated how this integration of trusted and cloud computing could be used in a practical scenario. The use case supported trusted data exchange and application attestation to manage communication between the distribution and transmission networks, using cloud computing as the data exchange vehicle. The project created a detailed threat analysis of using IaaS cloud systems and of the specific countermeasures that trusted platforms allow within the system, together with an exemplar software framework in which energy researchers can start making use of commercially sensitive information while making full use of cloud computing.
The Trusted Cloud Knowledge Transfer Partnership (2015-17) will develop this substantially further, bringing verifiable data privacy and security to production public cloud computing. This project, working with corporate partner 100PercentIT, will build a production trusted cloud which will be certified by NCSC and include digital key management technologies to ensure isolation of user certificates from the cloud provider. Through the Porridge remote attestation service developed in the project, it will enable multiple business models rooted in cryptographically verifiable trust. A cloud user will be able to verify the identity and configuration of any remote system in a scalable and resilient manner, building from the cloud storage and physical infrastructure through to a full chain-of-trust capability for any virtual instance started within the cloud. This will support software whitelisting within all of the computational systems, and trusted application or software signing through the trusted storage. The KTP has the stated aim of doubling the profitability of the commercial partner, which already has over 20 current and potential customers interested in this new capability who have stated they would pay for it as soon as it is available and certified.
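As a simplified illustration of the chain-of-trust idea, the sketch below shows the "measure and extend" pattern that TPM-based attestation builds on. The component names are made up, and a real deployment such as the one described above additionally involves signed TPM quotes, nonces and certified keys:

```python
# Highly simplified sketch of the TPM "measure and extend" pattern behind
# remote attestation: each boot/software component is hashed into a running
# register, and a verifier compares the final value against the expected
# measurements (a whitelist).
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # PCR_new = H(PCR_old || H(measurement))
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def attest(components: list[bytes]) -> bytes:
    pcr = b"\x00" * 32                 # register starts at a known value
    for c in components:
        pcr = extend(pcr, c)
    return pcr

expected = attest([b"firmware-v2", b"kernel-4.4", b"hypervisor-1.9"])
reported = attest([b"firmware-v2", b"kernel-4.4", b"hypervisor-1.9"])
assert reported == expected            # verifier accepts this platform
tampered = attest([b"firmware-v2", b"rootkit", b"hypervisor-1.9"])
assert tampered != expected            # any modified component is detected
```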
Global Cyber Security Capacity Centre
April 2013 - March 2017
Sadie Creese, Ian Brown, Michael Goldsmith, David Upton
The Global Cyber Security Capacity Centre (GCSCC) is a leading international centre for research on efficient and effective cybersecurity capacity building. It has created the National Cybersecurity Capacity Maturity Model (CMM), the first-of-its-kind model to review a country’s cybersecurity capacity maturity. Together with key strategic international partners, such as the World Bank, the Organization of American States (OAS), the Commonwealth Telecommunications Organisation, and the International Telecommunication Union, the Capacity Centre has since 2015 successfully deployed the CMM in over 40 countries around the world, and significantly underpinned a regional study in Latin America and the Caribbean through collaboration with the OAS. The review processes and the resulting reports, drafted by the GCSCC, enabled the governments to benchmark national cybersecurity policy and strategies, cybersecurity culture, knowledge development, legal and regulatory frameworks, and risk controls. The results and recommendations enabled nations to better plan national strategies, facilitate international and regional collaboration and cooperation, and set priorities for strategic investment and capacity development. To foster global knowledge exchange and transfer of expertise gained in the global community, the GCSCC also runs the publicly-available Cybersecurity Capacity Portal, a global online resource for good practice and knowledge in cybersecurity capacity building, which also includes a mapping of international and regional capacity building efforts by the various actors in the field. [www.sbs.ox.ac.uk/cybersecurity-capacity/]
The deployment of the model has been in itself an effective capacity-building exercise and has been informing the thinking of the global community. The deployment of the CMM has also become part of two global and regional initiatives by the Global Forum on Cyber Expertise (GFCE). The GCSCC encourages the further uptake of the model by other countries and international community actors, and is in ongoing conversation with regional organisations, governments, private companies and other research institutions working on this issue. It has also recently established its first regional partnership, with the Oceania Cybersecurity Centre, which will be the focal point for cybersecurity capacity building in that region.
Rather than evaluating a country's policies alone, the reviews assess its maturity in addressing a wide range of questions, including: how well do the various stakeholders work together to create and revise policies, make decisions, and assess whether strategies are working? The resulting review allows countries to understand their strengths and weaknesses, and to target their resources to develop cybersecurity capacity according to their national priorities.
This methodology has been endorsed by the Organization of American States, the World Bank, and the Commonwealth Telecommunications Organisation, and has been used to assess over 40 countries, including Bhutan, Jamaica, Uganda, the UK, and 32 members of the Organization of American States. The model is a living document which continues to be revised and refined.
The Capacity Centre is also developing a model for Understanding Cyber Harm, moving beyond simple measures of financial harm to address complex issues of reputational, psychological and physical harm, among others. Together, the Capacity Maturity Model and the future HARM Model will enable nation states and organisations to make better-informed decisions about financial investments in cybersecurity capacity building.
The Capacity Centre also hosts the Cybersecurity Capacity Portal, a global resource for expertise and knowledge on cybersecurity capacity building. This publicly-available online platform provides access to all of the tools, models and best cases, includes an inventory of international, regional and national cyber capacity building initiatives underway, and aggregates a number of other resources in the field.
How do you communicate with someone you don't trust? How do you share data with an untrusted system without compromising your privacy? How do you do this using today's technology? These questions are becoming increasingly important as networked computer systems become an integral part of our daily lives through developments like the smart energy grid and the Internet of Things.
The archetypal solution to these privacy challenges is to introduce some type of intermediary into the communication path to perform privacy-enhancing information processing, such as data aggregation or filtering. However, this approach is usually dismissed because we lack guarantees of the intermediary's trustworthiness – we don't want to introduce a so-called trusted third party.
But what if we had a system we could trust – a system that is trustworthy? In this research endeavour we are designing, building and testing the Trustworthy Remote Entity (TRE), a highly-specialized networked system that can process data whilst providing a very high level of assurance. Unlike a trusted third party, users do not blindly trust the TRE. Instead, the TRE uses tools and techniques from the field of Trusted Computing, such as remote attestation and the Trusted Platform Module (TPM), to provide technical guarantees of its trustworthiness. This architecture allows us to use today's well-established cryptographic techniques and widely deployed hardware, such as the TPM, to perform Secure Multi-party Computation (SMC) cheaply and efficiently and use this to enhance privacy in the smart energy grid and other emerging application domains.
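A minimal sketch of the kind of privacy-enhancing processing such an entity could perform is given below, using smart-meter aggregation as the example. The class and method names are illustrative inventions, not the TRE's actual interface, and the attestation step is only indicated in comments:

```python
# Sketch of privacy-preserving aggregation by a trusted intermediary:
# individual readings go to the (attested) aggregator, and only the
# neighbourhood total reaches the supplier, so no single household's
# consumption profile is exposed.
from dataclasses import dataclass, field

@dataclass
class Aggregator:
    """Stands in for an attested Trustworthy Remote Entity."""
    _readings: list[float] = field(default_factory=list)

    def submit(self, meter_id: str, kwh: float) -> None:
        # In the real design the meter would first verify the TRE's
        # remote attestation before sending anything.
        self._readings.append(kwh)

    def report_total(self) -> float:
        # Only the aggregate ever leaves the trusted boundary.
        return sum(self._readings)

agg = Aggregator()
for meter, kwh in [("m1", 1.2), ("m2", 0.7), ("m3", 2.1)]:
    agg.submit(meter, kwh)
print(f"grid operator sees only the total: {agg.report_total():.1f} kWh")
```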
The Network of Excellence in Internet Science aims to strengthen scientific and technological excellence by developing an integrated and interdisciplinary scientific understanding of Internet networks and their co-evolution with society, and also by addressing the fragmentation of European research in this area. Its main objective is to enable an open and productive dialogue between all disciplines which study Internet systems from any technological or humanistic perspective, and which in turn are being transformed by continuous advances in Internet functionality. The network brings together over thirty research institutions across Europe that are focusing on network engineering, computation, complexity, networking, security, mathematics, physics, sociology, game theory, economics, political sciences, humanities, and law, as well as other relevant social and life sciences. The network's main deliverable will be a durable shaping and structuring of the way that this research is carried out, by gathering together a critical mass of resources, gathering the expertise needed to provide European leadership in this area, and by spreading excellence beyond the partnership. The network is funded under the European Commission's Seventh Framework Programme: Information and Communication Technologies.
Wireless sensor networks are increasingly seen as a solution to large-scale tracking and monitoring applications. The deployment and management of these networks, however, is handled by a central controlling entity, and the sensor network is often dedicated to a single application. We argue that this is because we do not yet have the means to deal with a secure, multi-purpose, federated sensor network, running different applications in parallel and able to reconfigure dynamically to run others.
The communication paradigms usually devised for small, single-owner sensor networks simply do not have the scalability, security, and reconfigurability characteristics required for this environment.
FRESNEL worked to build a large-scale federated sensor network framework with multiple applications sharing the same resources. The project aimed to guarantee reliable intra-application communication as well as a scalable and distributed management infrastructure, while also maintaining privacy and application security.
The project was piloted through a large-scale federation of sensor networks over the Cambridge campus. The sensors monitored different aspects of the environment (temperature, pollution, movement, etc.), running various applications belonging to different authorities in the city.
Future Home Networks and Services
May 2011 - May 2014
Ian Brown, Andrew Martin
This project addressed home network and service security by researching and developing security frameworks for sharing between networks and devices, protocols to connect devices with cloud services, and security analysis of remote management systems.
Trust Domains – A framework for modelling and designing e-service infrastructures for controlled sharing of information
April 2011 - March 2014
Ensuring flows of information to the right people across multiple collaborating organisations is becoming increasingly important for both business and government. There are, however, trade-offs between the productivity and functional gains from sharing information, on the one hand, and the risks of leakage and of opening up IT systems, on the other. Recent developments in trusted computing and virtualization can address these trade-offs in a flexible manner, as they allow the creation of policy-controlled IT systems with configurable security properties. Collaborative, secure sharing solutions can be realised through the creation of dynamic 'Trust Domains', a notion that we set out to explore at and between all levels of the policy-service-infrastructure stack, which enforce information flow and configuration policies.

We created a customer-driven project starting from examples of information sharing within police forces and the agencies they work with. Based on a practical understanding of the required flows and policies, we developed an abstract framework for qualifying types and flows of information and a corresponding model of the associated risks, allowing process owners to describe their requirements and concerns. We researched how to qualify and map information flows to Trust Domain configurations, derived guidelines and templates to support solution architects in building IT services, and extended our set of analytics and modelling tools to help stakeholders understand the risks associated with information flows and enforcement mechanisms.

There are business opportunities for creating and operating new e-services with enhanced trust and security properties based on new methodologies and toolsets. The framework we created takes a business-driven approach to risk, trust and security, and covers aspects of process and system analysis, design, configuration, security policy, human roles, and operational management. We create a value proposition by having the models, tools and methodologies that allow us to bridge the current gap between business-level risk and system configuration and policy design, hence mapping service needs onto trusted platforms, domains, and infrastructure. The project complements and expands ongoing, TSB-funded work on trust economics, as well as on complexity, risk, and resilience management pioneered and exploited by HP's UK Enterprise Services. Both HP Enterprise Services and HP Labs, Bristol believe that bridging high-level incentive models and systems design for trust domains would be a unique global differentiator, not only aligned with US-NITRD 'game-changing' themes, but ahead of them in suggesting an integrated approach.
The academic components of this project contributed the following developments in support of this programme:
- The concept of Trust Domain, at and between the various levels of the socio-technical system stack (policy-service-infrastructure);
- Mathematical systems modelling technologies to support tools and methodologies for reasoning about the properties, dynamics, and applications of the Trust Domain concept;
- A thorough taxonomy of technical, design, and architectural properties which give rise to different trust characteristics in deployed services;
- Modelling the quality of trust and expectations among components, to the extent of being able to make a meaningful comparison of solutions based on different architectural paradigms, within a given context.

Targeted market: intra-corporate and intra-governmental data centres and 'clouds' whose stringent information flow control requirements cannot be met by today's providers.
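To illustrate the core idea in miniature, the sketch below models domains and an information-flow policy as data and checks whether a given flow is permitted. The domain names and policy entries are invented for the example:

```python
# Minimal sketch of the information-flow idea behind Trust Domains:
# every service sits in a domain, and a flow of data between domains is
# permitted only if the policy explicitly allows that information type
# to cross the boundary.
ALLOWED_FLOWS = {
    # (from_domain, to_domain): information types that may flow
    ("police_ops", "partner_agency"): {"case_summary"},
    ("police_ops", "police_ops"): {"case_summary", "raw_intelligence"},
}

def flow_permitted(src: str, dst: str, info_type: str) -> bool:
    return info_type in ALLOWED_FLOWS.get((src, dst), set())

assert flow_permitted("police_ops", "partner_agency", "case_summary")
assert not flow_permitted("police_ops", "partner_agency", "raw_intelligence")
```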
January 2010 - December 2013
October 2010 - September 2013
TClouds worked to develop an advanced cloud infrastructure that can deliver computing and storage that achieves a new level of security, privacy, and resilience yet is cost-efficient, simple, and scalable. It worked to change the perceptions of cloud computing by demonstrating the prototype infrastructure in socially significant application areas: energy and healthcare.
The webinos project worked to define and deliver an Open Source Platform and software components for the Future Internet in the form of web runtime extensions, to enable web applications and services to be used and shared consistently and securely over a broad spectrum of converged and connected devices, including mobile, PC, home media (TV) and in-car units.
The project worked to create a solution to the growing problems caused by the uncontrolled flow of personal data. The team brought together researchers from HP's Systems Security Lab in Bristol (the project leaders) with WMG at the University of Warwick, QinetiQ, HW Communications, Oxford University's Ethox Centre, and legal, regulation and business experts from the London School of Economics (LSE). EnCoRe, jointly funded by the Engineering and Physical Sciences Research Council, the Economic and Social Research Council and the Technology Strategy Board, aimed to help businesses and Government adopt scalable, cost-effective and robust consent and revocation methods for controlling the use, storage, location and sharing of personal data.
Evaluating Usability, Security, and Trustworthiness of Ad-hoc Collaborative Environments (EUSTACE)
May 2012 - October 2011
The EUSTACE project aimed to develop a decision-making framework and tool support for rapidly evaluating the security implications of ad-hoc collaborative work. We proposed a framework that reuses existing models in Security, HCI, and Computer Science and makes these amenable to automated analysis and tool support.
The profusion of cloud infrastructures, built in both the public and private space, has enabled a significant body of researchers to move their computational requirements into this new paradigm. There remains, though, a collection of use cases that are not able to adopt this new paradigm, even though it is clear that doing so would improve the provision of computational and data resources available to them. This project worked on the energy sector, doing pilot research on Advanced Metering Infrastructure, Condition Monitoring and Distributed State Estimation to prove that utilising hardware trust within the system, for attestation of state and identification of both the data and algorithms and their hosting virtual instances, would allow this high-value, critically important system to utilise cloud computing. The project created a detailed threat analysis of using IaaS cloud systems and of the specific countermeasures that trusted platforms allow within the system, together with an exemplar software framework in which energy researchers can start making use of commercially sensitive information while making full use of cloud computing. The project was followed by a TSB Knowledge Transfer Partnership to implement some of this work in early 2015.
Bill Roscoe, Ivan Flechais, Michael Goldsmith, Sadie Creese
The project developed methods to support strong, yet cheap and simple to use, authentication in pervasive devices used by humans. This included: a new family of security mechanisms that allow people to create secure connections between pairs or groups of devices with minimal effort; collaborative working; security where infrastructure is missing, compromised or out of action; improved payment methods; and human-scale networks (e.g. healthcare sensors).
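One well-known ingredient of such human-mediated pairing is the comparison of short authentication strings derived from the protocol transcript. The sketch below illustrates that idea in a simplified form; the transcript encoding is hypothetical, and practical protocols of this kind add commitment schemes to stop an attacker searching for digest collisions:

```python
# Illustrative sketch of the "short authentication string" idea behind
# human-assisted device pairing: both devices derive a short digest of
# the (unauthenticated) key-exchange transcript, and the human confirms
# that the two displayed strings match, defeating a man-in-the-middle.
import hashlib

def short_auth_string(transcript: bytes, digits: int = 6) -> str:
    h = int.from_bytes(hashlib.sha256(transcript).digest(), "big")
    return str(h % 10**digits).zfill(digits)

# Both devices hash what they believe the exchanged messages were.
alice_view = b"gA=0x2a...|gB=0x17..."   # hypothetical transcript encoding
bob_view   = b"gA=0x2a...|gB=0x17..."
print(short_auth_string(alice_view))    # shown on device A
print(short_auth_string(bob_view))      # shown on device B; human compares
```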
The research in this group centers around the application of formal methods and cryptography to the analysis and development of secure systems. The resulting contributions include:
1) Formal foundations of security: how to mathematically define secure systems and their properties, and how to reason about them;
2) Supportive technologies: we develop automated tools for analysing security protocols (e.g. the Tamarin prover, the Scyther tool, and Scyther-proof);
3) Application example, improving security standards: we have used our analysis methods and tools in many real-world case studies, yielding direct impact. Please see our publications for details.
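The symbolic style of reasoning these tools mechanise can be illustrated in miniature: messages are terms, and one computes the closure of what an attacker can derive from observed traffic. The term encoding below is invented for the example and is far simpler than the tools' actual models:

```python
# Toy sketch of symbolic (Dolev-Yao) attacker deduction, the style of
# reasoning that tools like the Tamarin prover and Scyther mechanise:
# the attacker can split pairs and decrypt once the key is known.
def derive(knowledge: set) -> set:
    known = set(knowledge)
    changed = True
    while changed:
        changed = False
        for t in list(known):
            new = set()
            if isinstance(t, tuple) and t[0] == "pair":
                new |= {t[1], t[2]}            # attacker splits pairs
            if isinstance(t, tuple) and t[0] == "enc" and t[2] in known:
                new.add(t[1])                  # decrypts with a known key
            if not new <= known:
                known |= new
                changed = True
    return known

# enc(secret, k1) is observed, but k1 never leaks: the secret stays safe.
observed = {("enc", "secret", "k1"), ("pair", "k2", "nonce")}
assert "secret" not in derive(observed)

# If k1 is later sent encrypted under the derivable k2, the chain unravels.
extended = derive(observed | {("enc", "k1", "k2")})
assert "k2" in extended and "secret" in extended
```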
Security is essential if the Internet of Things is to succeed. However, security on small constrained devices can be hard. The PicoSec project directly addresses this challenge. It is a place where embedded security experts can help drive the future innovations that will be required to support this industry.
The systems verification group develops automated methods for the analysis of hardware and software systems, including for the analysis of low-level systems security aspects. Methods used include temporal logic model checking, static analysis and testing.
Bill Roscoe, Ivan Flechais, Michael Goldsmith, Sadie Creese
Our vision is that individual authorised users of systems should be permitted, within limits defined by their authorisation, to connect their devices and share data with other devices in situations where the pre-existing security architecture has not foreseen the particular instance of need, or where the backbone services necessary to achieve secure communications are simply out of range. Our innovation will enable this through a suite of protocols and associated processes that can bootstrap secure communications without the need for extra services or pre-agreed secrets.
Our research on air-traffic security (related to ADS-B and NextGen air-traffic communication) has resulted in several recent papers. In the most recent, we describe OpenSky (https://opensky-network.org/), a large-scale wireless network (covering 720,000 km² in Europe) based on off-the-shelf ADS-B sensors. The paper provides insights into the ADS-B communication channel, characterising typical reception quality and loss patterns in the real world. We also evaluate the feasibility of performing physical location validation using wide-area multilateration. This work is a continuation of our research published in "Experimental Analysis of Attacks on Next Generation Air Traffic Communication" (ACNS'13) and "On the Security of the Automatic Dependent Surveillance-Broadcast Protocol" (IEEE Communications Surveys & Tutorials, 2014). These papers are joint work with Martin Strohmeier (Oxford), Matthias Schaefer (TU Kaiserslautern, Germany), and Vincent Lenders (Armasuisse, Switzerland).
The objective of this research is to shed light on the feasibility of different attacks and countermeasures, and to quantify the main factors that affect their success (such as the physical distance between attackers and legitimate receivers, timing constraints, and signal behaviour). OpenSky is an attempt to provide researchers with realistic, high-quality air traffic communication data. Example research areas that can benefit greatly (or have already benefited) from OpenSky include anomaly detection, security in air traffic communication, capacity planning, data validation, multilateration, air traffic modelling and many more.
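To illustrate the multilateration idea behind such location validation, the toy sketch below recovers a transmitter's position from the time differences of arrival (TDOA) of one transmission at known sensor positions. A brute-force grid search stands in for the least-squares or closed-form solvers used in practice, and the sensor layout and position are made up:

```python
# Toy sketch of wide-area multilateration (MLAT): the time differences of
# arrival of a transmission at several ground sensors constrain where the
# transmitter can be, so a claimed (e.g. ADS-B) position can be validated.
import itertools, math

C = 299_792_458.0                                  # speed of light, m/s
sensors = [(0, 0), (50_000, 0), (0, 50_000), (50_000, 50_000)]  # metres
true_pos = (21_300, 34_700)

def toa(p, s):
    """Time of arrival at sensor s of a signal sent from point p."""
    return math.dist(p, s) / C

# TDOAs measured relative to the first sensor (noise-free in this toy).
tdoas = [toa(true_pos, s) - toa(true_pos, sensors[0]) for s in sensors]

def mlat_error(p):
    t0 = toa(p, sensors[0])
    return sum((toa(p, s) - t0 - d) ** 2 for s, d in zip(sensors, tdoas))

grid = itertools.product(range(0, 50_001, 100), repeat=2)
print(min(grid, key=mlat_error))                   # recovers true_pos
```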
“Security” research theme
This theme is about allowing people to use information technology confidently, free of the danger of their privacy being breached or of the actions they are performing being frustrated or subverted by an unauthorised intruder; it covers preventing the unauthorised use of hardware, software and the internet, and detecting attempts to break into systems. The Security research theme encompasses the members of the Computer Science department engaged in cybersecurity research, as well as those working on other areas of computer security.
There are close links between this theme and Automated Verification and Software Engineering, as well as with those working on Quantum Information Theory.
The automated verification group at Oxford is internationally recognized as among the largest and strongest in the world. Our work spans a wide range of research, from fundamental investigations into the decidability and complexity of model checking for various types of infinite-state systems, through process calculi, logics and semantic models, all the way to practical, machine-assisted methods applicable to real-world problems and programming languages. We also have strong industrial links. Our key strengths include concurrency, abstraction, industrial-scale hardware verification, software model checking, and verification of real-time and probabilistic systems, with applications in security protocols, power management, nanotechnology, and biology. A major source of impact is the adoption by others of verification tools resulting from our research: FDR (model checker), Casper (security protocol compiler), SatAbs (SAT-based model checker for C with predicate abstraction), CBMC (bounded model checker for C) and PRISM (probabilistic model checker). All are highly cited and widely used in industrial contexts, both for research and teaching.
DIET: A Different Approach to Smart Meter Data Insight against Energy Theft
David Wallom, Andrew Martin
In collaboration with British Gas, G4S and EDMI Smart Meters, the DIET project supports the development of new services that perform coordinated analysis of the streams of consumption and logging data produced by smart meters.
Building on the EPSRC Advanced Dynamic Energy Pricing & Tariffs (ADEPT) and Working with Infrastructure Creation of Knowledge Energy strategy Development (WICKED) projects, we have developed a number of analytic techniques with which we can match consumption to customer behaviour at the whole-premises scale, as well as fully understand the different resolutions of data required for different levels of insight. Building on these two projects, we are working through InnovateUK funding on Data Insight against Energy Theft (DIET, 2015-17), which also takes forward the established relationship with the energy sector, to develop an approach that uses smart meter logging and error messages alongside meter consumption data to identify potential energy theft and faulty equipment by examining changes in data through time. The project will analyse data collected from a pool of SME electricity meters with a view to developing a reusable method for operation in the domestic market.
These streams of meter status and error messages, which are not normally retrieved, give details of the physical environment of the meters and, in conjunction with consumption data, will enable the creation of richer and more in-depth knowledge of system behaviour. The primary aim of these analytics is to discover and then recognise signatures for two different classes of events: possible meter failure scenarios, and warnings on the occurrence of patterns indicative of meter attack or tampering.
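As a hedged illustration of one such analytic, the sketch below scans a consumption series for a sudden, sustained drop of the kind that, alongside tamper or error log events, might indicate theft or a faulty meter. The window size and threshold are invented for the example, not the project's calibrated values:

```python
# Toy change-point flagging over a meter consumption series: flag points
# where median consumption over the following window falls well below the
# median over the preceding window (a signature a bypass might leave).
from statistics import median

def flag_sustained_drop(readings: list[float], window: int = 48,
                        drop_ratio: float = 0.5) -> list[int]:
    """Return indices where the median of the next `window` readings is
    below `drop_ratio` times the median of the previous `window`."""
    flags = []
    for i in range(window, len(readings) - window):
        before = median(readings[i - window:i])
        after = median(readings[i:i + window])
        if before > 0 and after < drop_ratio * before:
            flags.append(i)
    return flags

# Simulated half-hourly series: normal usage, then consumption drops
# sharply after a hypothetical meter bypass.
series = [1.0] * 100 + [0.3] * 100
print(flag_sustained_drop(series)[:3])   # indices around the change point
```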
• Connecting strong expertise in the Mathematical Institute (number theory, combinatorics, algebraic geometry, ...) to real-life security challenges
• Designing advanced cryptographic protocols and establishing their security
• Evaluating the security of classical and post-quantum cryptographic problems (elliptic curve discrete logarithms, lattice problems, syndrome decoding, polynomial system solving, isogeny problems) against new classical and quantum algorithms
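For concreteness, the first of the problems listed above can be stated as follows (this is the standard definition, not a formulation specific to the group):

```latex
% Elliptic-curve discrete logarithm problem (ECDLP), standard statement.
Given an elliptic curve $E$ over a finite field $\mathbb{F}_q$, a point
$P \in E(\mathbb{F}_q)$ of prime order $n$, and a point
$Q \in \langle P \rangle$, find the integer $k \in \{0, 1, \dots, n-1\}$
such that $Q = [k]P$.
```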
Smart Oxford is the strategic programme of a wide range of city partners working together to develop and promote Oxford as a smart city.
By 'smart' we mean creating an environment and infrastructure that engages with the current step-change in digital technologies to support the generation & sharing of city information and to facilitate the development of innovative city-related solutions more effectively, cheaply, sustainably, fairly and inclusively.
In September 2015 we hosted our first Early Careers Researchers Symposium, to showcase the exciting topics under investigation across the network: you can download the programme and book of abstracts here. We hope to make this an annual event!