Oxford to be key player in new AI accountability project
The University of Oxford is to play a central role in ground-breaking new research intended to make future artificial intelligence (AI) systems more transparent and accountable.
In collaboration with the Universities of Aberdeen and Cambridge, Oxford will develop auditing mechanisms for AI systems akin to the ‘black box’ flight recorders used in aviation.
The Realising Accountable Intelligent Systems (RAInS) project is a multi-disciplinary initiative run jointly by the three universities. Backed by £1.1 million of Engineering and Physical Sciences Research Council (EPSRC) funding, the project is a direct response to the EPSRC's 2017 call for research to further the understanding of Trust, Identity, Privacy and Security (TIPS) issues in the Digital Economy.
Working with the public, the legal profession and technology companies, RAInS aims to create prototype solutions to allow developers to provide secure, tamper-proof records of intelligent systems’ characteristics and behaviours.
These records could then be shared with relevant authorities and, in the event of an incident or complaint, analysed further to ensure transparency and accountability in future AI systems.
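One way such a tamper-proof record might be realised (an illustrative sketch, not the project's published design) is as a hash chain: each log entry commits to the hash of its predecessor, so any retrospective alteration or deletion breaks the chain and is detectable on audit. The class and field names below are hypothetical.

```python
import hashlib
import json
import time

def _entry_hash(entry: dict) -> str:
    # Hash a canonical JSON serialisation so the digest is reproducible.
    payload = json.dumps(entry, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

class AuditLog:
    """Append-only, tamper-evident log: each record commits to its predecessor."""

    def __init__(self):
        self.records = []

    def append(self, event: dict) -> dict:
        # Link the new record to the previous one ("0" * 64 for the first entry).
        prev_hash = self.records[-1]["hash"] if self.records else "0" * 64
        record = {"timestamp": time.time(), "event": event, "prev_hash": prev_hash}
        record["hash"] = _entry_hash(record)
        self.records.append(record)
        return record

    def verify(self) -> bool:
        # Recompute every link; any edited or removed record breaks the chain.
        prev_hash = "0" * 64
        for record in self.records:
            body = {k: v for k, v in record.items() if k != "hash"}
            if record["prev_hash"] != prev_hash or _entry_hash(body) != record["hash"]:
                return False
            prev_hash = record["hash"]
        return True

# Usage: record an intelligent system's decisions, then check integrity.
log = AuditLog()
log.append({"system": "demo-classifier", "input_id": "x-101", "decision": "approve"})
log.append({"system": "demo-classifier", "input_id": "x-102", "decision": "reject"})
assert log.verify()
```

In practice a scheme of this kind would also need controlled disclosure, so that records are shared with authorities only under defined circumstances; the sketch above covers only the tamper-evidence property.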
Professor Rebecca Williams, Professor of Public Law and Criminal Law at Oxford’s Law Faculty, will work on the project with Dr Jat Singh of the University of Cambridge; the project will be led by Professor Pete Edwards of the University of Aberdeen.
Professor Williams commented: ‘I am hugely excited to be involved in this project. From a legal perspective the transparency and accountability of these systems is vital and is inherent in any concept we might have of fairness.
‘My role will be to identify the challenges to the existing law posed by new technology of this kind, and to think about how the law can respond. Regulation of some kind will obviously be necessary, but it’s important to think about how the law can best incentivise optimal use of these systems so that we can reap the benefits they offer, while also maintaining transparency and thus fairness.
‘It’s vital, though, that any such efforts by lawyers should be led by the technology itself; the law can only require what is technically possible, and any form of law or regulation is much more likely to be successful if it is informed by a deep and detailed understanding of the technical context. I’m therefore particularly delighted to be part of such a strong interdisciplinary team on this project. The more closely lawyers can work with other disciplines in this kind of area, the more effective and suitable the resulting law is likely to be.’
The RAInS project aims to develop solutions that support the auditing of AI systems and thereby ensure their accountability.
Professor Edwards commented: ‘AI technologies are being utilised in more and more scenarios including autonomous vehicles, smart home appliances, public services, retail and manufacturing. But what happens when such systems fail, as in the case of recent high-profile accidents involving autonomous vehicles?
‘How can we hold systems and developers to account if they are found to be making biased or unfair decisions? These are all real and timely challenges, given that AIs will increasingly affect many aspects of everyday life.’
Dr Singh said: ‘Our work will increase the transparency of AI systems not only after the fact, but also in a manner which allows for early interrogation and audit, which in turn may help prevent or mitigate harm.’
Professor Williams added: ‘Ultimately our ambition is to create a means by which the developer of an intelligent system can provide a secure, tamper-proof record of the system's characteristics and behaviours that can be shared - under controlled circumstances - with relevant authorities in the event of an incident or complaint.’
A total of eleven initiatives, including RAInS, have been successful under the TIPS umbrella and will collectively receive £11 million over the next three years. A second project is also being led from Oxford, by Professor Marina Jirotka of the Department of Computer Science, who is working with partners at the Universities of Nottingham and Edinburgh to develop an experimental online tool that allows users to evaluate and critique the algorithms used by online platforms, with the aim of rebuilding and enhancing people's trust in AI systems.