Date: 1 February 2014 - 31 January 2016
Project leader: Marina Jirotka
This project develops an empirically based and theoretically sound model of the role of responsible research and innovation governance. It explores the dynamics of participation in research and innovation, and investigates the characteristics of responsible practices. The project also investigates the nature of new partnerships among various stakeholders, researchers and policymakers that are developing within innovation networks and the influence that these developments have on knowledge production and policy.
Digital Wildfire: (Mis)information flows, propagation and responsible governance
Date: 18 November 2014 - 16 May 2016
Project leader: Marina Jirotka
The overall aim of the project is to build an empirically grounded methodology for the study and advancement of the responsible governance of social media. This objective relates to three research themes of the call: legitimacy, agency, and temporality. Temporality is at the core of ‘deciphering’ the unfolding structure of (mis)information flows in social media; agency is implied in our hypothesis that much of social media reality is already steered by different stakeholders’ capacities to demonstrate self-regulatory activities and to promote these in others; the legitimacy of any new or additional governance mechanism may be enhanced if it respects and builds on such extant self-governance techniques.
SmartSociety: hybrid and diversity-aware collective adaptive systems
Date: 1 January 2013 - 1 January 2017
Project leader: Marina Jirotka
Where people meet machines to build a smarter society.
Society is progressively moving towards a socio-technical ecosystem in which the physical and virtual dimensions of life are increasingly intertwined, and where interactions between people often take place with, or are mediated by, machines. The scale at which this is happening, and the differences in culture, language and interests, make the problem of establishing effective communication and coordinated action increasingly challenging.
Our goal is to move towards a hybrid system in which people and machines work tightly together to build a smarter society. We envision a new generation of collective adaptive systems (CAS) centred on the two foundational notions of compositionality and diversity, where humans and machines “compose” by synergistically complementing each other, bridging the semantic gap between low-level machine and high-level human interpretation of data, and where they interoperate collectively to achieve their possibly conflicting goals at both individual and societal levels.
By identifying the right incentive schemes and privacy levels, these systems should assist humans in their everyday activities, cope with the diversity of the world in terms of language, knowledge and personal experience, and work in the presence of possibly imperfect information.
More details at:
Global Cyber Security Capacity Centre
Date: 8 April 2013 - 31 March 2017
Project leader: Sadie Creese, Ian Brown, Michael Goldsmith, David Upton
The Global Cyber Security Capacity Centre (GCSCC) is a leading international centre for research on efficient and effective cybersecurity capacity building. It has created the National Cybersecurity Capacity Maturity Model (CMM), the first-of-its-kind model to review a country’s cybersecurity capacity maturity. Together with key strategic international partners, such as the World Bank, the Organization of American States (OAS), the Commonwealth Telecommunications Organisation, and the International Telecommunication Union, the Capacity Centre has since 2015 successfully deployed the CMM in over 40 countries around the world, and significantly underpinned a regional study in Latin America and the Caribbean through collaboration with the OAS. The review processes and the resulting reports, drafted by the GCSCC, enabled the governments to benchmark national cybersecurity policy and strategies, cybersecurity culture, knowledge development, legal and regulatory frameworks, and risk controls. The results and recommendations enabled nations to better plan national strategies, facilitate international and regional collaboration and cooperation, and set priorities for strategic investment and capacity development. To foster global knowledge exchange and transfer of expertise gained in the global community, the GCSCC also runs the publicly-available Cybersecurity Capacity Portal, a global online resource for good practice and knowledge in cybersecurity capacity building, which also includes a mapping of international and regional capacity building efforts by the various actors in the field. [www.sbs.ox.ac.uk/cybersecurity-capacity/]
The deployment of the model has been in itself an effective capacity-building exercise and has been informing the thinking of the global community. The deployment of the CMM has also become part of two global and regional initiatives by the Global Forum on Cyber Expertise (GFCE). The GCSCC encourages the further uptake of the model by other countries and international community actors, and is in ongoing conversation with regional organisations, governments, private companies and other research institutions working on this issue. It has also recently established its first regional partnership, with the Oceania Cybersecurity Centre, which will be the focal point for cybersecurity capacity building in that region.
Rather than evaluating a country’s policies alone, the reviews assess its maturity in addressing a wide range of questions, including: how well do the various stakeholders work together to create and revise policies, make decisions, and assess whether strategies are working? The resulting review allows countries to understand their strengths and weaknesses, and to target their resources to develop cybersecurity capacity according to their national priorities.
This methodology has been endorsed by the Organization of American States, the World Bank, and the Commonwealth Telecommunications Organisation, and has been used to assess over 40 countries, including Bhutan, Jamaica, Uganda, the UK, and 32 members of the Organization of American States. The model is a living document which continues to be revised and refined.
The Capacity Centre is also developing a model for Understanding Cyber Harm, moving beyond simple measures of financial harm to address complex issues of reputational, psychological, physical harm etc. Together the Capacity Maturity Model and the future HARM Model will enable nation states and/or organisations to make better informed decisions when it comes to financial investments in cybersecurity capacity building.
The Capacity Centre also hosts the Cybersecurity Capacity Portal, a global resource for expertise and knowledge on cybersecurity capacity building. This publicly available online platform provides access to all of the tools, models and best cases, includes an inventory of international, regional and national cyber capacity building initiatives underway, and aggregates a number of other resources in the field.
Date: 1 June 2015 - 31 May 2017
Project leader: David Wallom
The EPSRC MyTrustedCloud project (2011) was highly successful, and aspects of this work are being taken forward directly in an Innovate UK funded Knowledge Transfer Partnership. The original project investigated how the integration of trusted and cloud computing could be used in a practical scenario. The use case supported trusted data exchange and application attestation to manage communication between the distribution and transmission networks, using cloud computing as the data exchange vehicle. The project created a detailed threat analysis of using IaaS cloud systems and of the specific countermeasures that trusted platforms allow within the system, together with an exemplar software framework in which energy researchers are able to start making use of commercially sensitive information while at the same time making full use of cloud computing.
The Trusted Cloud Knowledge Transfer Partnership (2015-17) will develop this substantially further, bringing verifiable data privacy and security to production public cloud computing. This project, working with corporate partner 100PercentIT, will build a production trusted cloud which will be certified by NCSC and include digital key management technologies to ensure isolation of user certificates from the cloud provider. Through the developed Porridge remote attestation service it will enable multiple business models rooted in cryptographically verifiable trust. A cloud user will be able to verify the identity and configuration of any remote system in a scalable and resilient manner, building from the cloud storage and physical infrastructure through to a full chain of trust for any virtual instance started within the cloud. This will support software whitelisting within all of the computational systems, and trusted application or software signing through the trusted storage. The KTP has the stated aim of doubling the profitability of the commercial partner, who already has over 20 current and potential customers interested in this new capability and who have stated they would pay for it as soon as it is available and certified.
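The chain-of-trust idea underlying remote attestation can be sketched as a hash chain of boot-time measurements, in the style of TPM platform configuration registers (PCRs). This is a minimal illustration only: the component names are hypothetical, and a real attestation service (such as the Porridge service described above) additionally signs the register values with a hardware-protected key before a verifier checks them.

```python
import hashlib

def extend(pcr: bytes, component: bytes) -> bytes:
    """TPM-style PCR extend: new value = SHA-256(old register || SHA-256(component))."""
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

# Hypothetical boot chain, measured in order: each layer is hashed into the
# register before control passes to it, so no later layer can erase the record.
BOOT_CHAIN = [b"firmware-v1", b"bootloader-v2", b"hypervisor-v3", b"vm-image-v4"]

pcr = bytes(32)  # the register starts at all zeros
for component in BOOT_CHAIN:
    pcr = extend(pcr, component)

# A verifier recomputes the chain from its whitelist of known-good components;
# a matching final value means the whole stack booted exactly as expected.
expected = bytes(32)
for component in BOOT_CHAIN:
    expected = extend(expected, component)

assert pcr == expected
```

Because each extend step folds the previous register value into the next hash, substituting any single component changes the final value, which is what lets a user check the full stack from physical infrastructure up to a virtual instance.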
Date: 1 November 2015 - 31 October 2017
Project leader: Andrew Martin
The 5G-ENSURE project brings to the 5G PPP a consortium of telecom and network operators, IT providers and cyber security experts addressing priorities for security and resilience in 5G networks. The project has received funding of just over €7.5m out of the €3.5bn total for the 5G PPP initiative. It will:
- Deliver strategic impact across technology, business enablement & standardisation.
- Develop a set of non-intrusive security enablers (AAA, Privacy, Trust, Monitoring, Network Management and Virtualization Isolation) for the core of the 5G Reference Architecture.
- Define a 5G Security Architecture needed to expand the mobile ecosystem giving operators a platform for entirely new business opportunities.
- Initiate a 5G Security testbed vision and initial set-up in which the security enablers will be made available and demonstrated.
5G-ENSURE will define a shared and agreed 5G Security Roadmap with various 5G stakeholders. The outcome will be a trustworthy 5G system offering reliable security services to customers with a “zero perceived” downtime for service provision.
UnBias: Emancipating Users Against Algorithmic Biases for a Trusted Digital Economy
Date: 1 September 2016 - 31 August 2018
Project leader: Marina Jirotka
In an age of ubiquitous data collecting, analysis and processing, how can citizens judge the trustworthiness and fairness of systems that heavily rely on algorithms? News feeds, search engine results and product recommendations increasingly use personalization algorithms to help us cut through the mountains of available information and find those bits that are most relevant, but how can we know if the information we get really is the best match for our interests?
There is no such thing as a neutral algorithm. As anyone who has ever created something knows, even something as simple as a meal, the act of creating inevitably involves choices that affect the properties of the final product. Despite this truism, recommendations and selections made by algorithms are commonly presented to consumers as if they were inherently free from (human) bias and ‘fair’ because the decisions are ‘based on data’. During the recent controversy about possible political bias in Facebook’s Trending Topics, for instance, the focus was almost exclusively on the role of the human editors, even though 95% or more of the news selection process is done by algorithms. Human judgements, however, are ultimately also based on data.
The EPSRC funded project “UnBias: Emancipating Users Against Algorithmic Biases for a Trusted Digital Economy” looks at all of the issues above in much greater detail. A large part of this work will consist of user group studies to understand the concerns and perspectives of citizens. UnBias aims to provide policy recommendations, ethical guidelines and a ‘fairness toolkit’ co-produced with young people and other stakeholders, which will include educational materials and resources to support young people’s understanding of online environments, as well as raising awareness among online providers about the concerns and rights of young internet users. The project matters for young people, and for society as a whole, in ensuring that trust and transparency are not missing from the internet. The results will be widely disseminated to a variety of audiences, ranging from academic peer-reviewed journals to community groups of interest such as secondary schools and youth clubs.
Date: 1 September 2014 - 1 September 2018
Project leader: Roger Heath-Brown
Cryptography is the science and art of ensuring private and authenticated communications, for example over the internet, in our bank card transactions and with our mobile phones. Unfortunately, most cryptographic protocols used today will become totally insecure once large-scale quantum computers are built. In anticipation of this, we must already develop the next generation of cryptographic protocols and organize the transition of our security infrastructures.
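To see why, note that widely deployed public-key schemes such as RSA rest on the hardness of problems like integer factorisation, which Shor’s algorithm solves efficiently on a large quantum computer. A toy sketch of this dependence (tiny primes for illustration only; real keys use moduli of 2048 bits or more):

```python
# Toy RSA: security rests entirely on the secrecy of the factors p, q of n.
p, q = 61, 53
n = p * q                 # public modulus
phi = (p - 1) * (q - 1)   # Euler's totient, computable only from the factors
e = 17                    # public exponent, coprime to phi
d = pow(e, -1, phi)       # private exponent (modular inverse, Python 3.8+)

msg = 42
cipher = pow(msg, e, n)              # encrypt with the public key
assert pow(cipher, d, n) == msg      # decrypt with the private key

# An attacker who can factor n -- as Shor's algorithm does efficiently on a
# quantum computer -- recovers phi and hence the private key:
recovered_d = pow(e, -1, (p - 1) * (q - 1))
assert pow(cipher, recovered_d, n) == msg
```

Post-quantum cryptography replaces such schemes with ones based on problems (for example on lattices) for which no efficient quantum algorithm is known.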
Since its creation in 2015 (with GCHQ financial support), the Cryptography Group in Oxford has been connecting the strong mathematical expertise in Oxford to current security challenges, in particular by developing post-quantum cryptography. Our research results in the group’s first year have included new digital signature and zero-knowledge protocols, security analyses of existing protocols, and number-theoretic results connected to cryptography. We have also built research links with other world-leading groups in the Computer Science and Physics Departments at Oxford, and we are currently investigating new security threats posed by quantum computers.
The Group is also contributing to long-term education on (post-quantum) cryptography in the UK: we have created two new courses on cryptography, offered to students on the MFoCS master’s programme (Mathematics and Foundations of Computer Science) in Oxford; we have launched a new seminar series (attracting world-renowned speakers such as Adi Shamir and Antoine Joux); and we have supervised dissertation projects in this area.
Steganography and Steganalysis Research Group
Date: 1 January 2009 - 28 February 2019
Project leader: Andrew Ker
The work in this group targets both the hiding (steganography) and detection (steganalysis) of hidden data, typically in digital media. The research group, with close collaborators in France and the Czech Republic, has three strands:
• Practically-deployable steganalysis, attacking problems from unusual angles (for example by classifying network users, rather than individual media objects, as suspicious) and aiming for ultra-low false alarm rates.
• Theoretical aspects of steganalysis, from a signal-detection and hypothesis testing point of view; the "square root law" of capacity.
• Practical aspects of steganography, including hiding in Twitter.
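As a minimal illustration of the embedding side, the classic least-significant-bit (LSB) technique overwrites the lowest bit of each cover sample. This naive scheme is easily detected, which is precisely what motivates both the adaptive embedding methods and the steganalysis studied by the group; the pixel values below are hypothetical.

```python
def embed(pixels: list, bits: list) -> list:
    """Hide one message bit in the least significant bit of each pixel byte."""
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit  # clear the lowest bit, then set it
    return stego

def extract(pixels: list, n: int) -> list:
    """Read the message back: the lowest bit of each of the first n pixels."""
    return [p & 1 for p in pixels[:n]]

cover = [143, 92, 201, 76, 180, 55]   # hypothetical greyscale pixel values
message = [1, 0, 1, 1]
stego = embed(cover, message)

assert extract(stego, 4) == message
# Each pixel changes by at most 1, so the image looks unchanged to the eye --
# but the statistical traces of such changes are what steganalysis detects.
assert all(abs(a - b) <= 1 for a, b in zip(cover, stego))
```

The “square root law” mentioned above bounds how much can be hidden safely: to keep a fixed risk of detection, the payload can grow only with the square root of the cover size, not linearly.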