Expert Comment: Oxford AI experts respond to PM Rishi Sunak speech ahead of the UK AI Safety Summit
Today [26/10/2023], the UK Prime Minister Rishi Sunak delivered a speech on artificial intelligence safety at The Royal Society in London, ahead of the UK AI Safety Summit at Bletchley Park next week. Oxford AI experts respond.
Lionel Tarassenko, Professor of Electrical Engineering in the Department of Engineering Science and President of Reuben College, University of Oxford:
'Mathematics, which underpins all AI, teaches us that extrapolation is a very inexact process, leading to results which are often no better than random. It is therefore very doubtful that much can be gained by thinking about long-term risks without any supporting data.
'One of the key risks with AI is endowing these systems with a degree of autonomy: data is currently being accumulated about the dangerous behaviours occasionally displayed by autonomous vehicles (robotaxis) in San Francisco (and other places). All this data should be investigated by teams of AI researchers (under the aegis of an IPCC equivalent for AI), independently of the autonomous vehicle companies. If the latter will not share their data, there may be a need for regulation.
'We should be putting more resources into collecting and analysing data from other examples of existing and nascent risks arising from the application of today’s advanced AI systems. Learning about short-term risks and thinking about how to mitigate and contain them will enable us to develop strategies which may be usefully applied to longer-term risks.'
Brent Mittelstadt, Associate Professor and Director of Research at the Oxford Internet Institute, University of Oxford:
'In his speech Rishi Sunak suggested that the UK will not “rush to regulate” AI because it is impossible to write laws that make sense for a technology we do not yet understand. The idea that we do not understand AI and its impacts is overly bleak and ignores an incredible range of research undertaken in recent years to understand and explain how AI works and to map and mitigate its greatest social and ethical risks. This reluctance to regulate before the effects of AI are clearly understood means AI and the private sector are effectively the tail wagging the dog: rather than government proactively saying how these systems must be designed, used, and governed to align with societal values and rights, it will instead regulate only reactively and try to mitigate AI's harms without challenging the ethos of AI and the business models of AI systems. The business models behind frontier AI systems should not be given a free pass; they may be built on theft of intellectual property and violations of copyright, privacy, and data protection law at an unprecedented scale.
'There are many examples of value-driven regulation existing well before the emergence of fundamentally transformative technologies like AI—look at the 1995 Data Protection Directive and its rules for automated data processing, or non-discrimination laws which clearly set out societal values and expectations to be adhered to with frontier technologies regardless of the technological capabilities and underlying business models. My worry is that with frontier AI we are effectively letting the private sector and technology development determine what is possible and appropriate to regulate, whereas effective regulation starts from the other way around.
'I am relieved to see in the reports released by the government today a greater focus on the known, near-term societal risks of frontier AI systems. Initial indications suggested the AI Safety Summit would focus predominantly on far-fetched long-term risks rather than the real near-term risks of these systems that are fundamentally reshaping certain industries and types of work. However, the lack of attention given in the reports to the environmental impact of frontier AI is a huge oversight and one that should be quickly remedied.'
Carissa Véliz, Associate Professor at the Faculty of Philosophy and the Institute for Ethics in AI, University of Oxford:
'The UK, unlike Europe, has thus far been notoriously averse to regulating AI, so it is interesting for Sunak to say that the UK is particularly well suited to lead the efforts of ensuring the safety of AI. Serious researchers, many of them women, have been sounding the alarm about AI for years. Only when powerful men with ties to big tech started talking about risks did Sunak and others get interested. These are the same men who helped create the risks that we now face. Our politicians would do well to read books like Unsafe at Any Speed, which tells the story of how we came to regulate the car industry, and makes abundantly clear that no industry can regulate itself.
'Tech executives are not the right people to advise governments on how to regulate AI. Their conflicts of interest run too deep. They can provide input, but they should not be allowed to dominate the conversation. Sunak’s speech sounded curiously similar to the views that are being voiced by big tech and their partners. A good outcome of this meeting would be an inclusive conversation leading to regulation that focuses on the protection of human rights and the protection of democracy. I’m not optimistic, but I hope I’m wrong.'
Angeliki Kerasidou, Associate Professor in Bioethics in the Oxford Department of Population Health, University of Oxford:
'AI holds a lot of potential for good, but this potential will not be realised unless the risks are mitigated, and the harms minimised and properly addressed. What we need is an open and honest global discussion about what kind of world we want to live in, what values we want to promote, and how AI can be harnessed to get us there. I hope that this AI Summit is the beginning of that discussion.'
Matthias Holweg, Professor of Operations Management, Saïd Business School, University of Oxford:
'The debate on AI regulation very often descends into pointing to existential risks, but those fears are misplaced. We are several key development stages away from AI becoming that powerful or going out of control, and there is no credible path to AI ever becoming sentient.
'The clear and present danger, however, and the reason AI regulation is so important, is that these systems decide on access to essential services, like finance or education. If we don’t ensure AI systems conform before they are launched, we risk excluding and/or exploiting certain parts of the population and, in the worst case, propagating existing biases into the future, under the radar.
'While the UK’s efforts to develop its own AI regulation are laudable, they miss the point that firms will seek to comply with one global standard rather than deal with several competing standards across the regions in which they operate. In that sense, it is the EU AI Act that everyone looks to as setting that global standard. What the UK may or may not decide is, quite frankly, irrelevant to most AI operators. AI regulation will be decided between US lawmakers, the EU, and the big tech firms.'
Professor Alex Connock, Senior Fellow, Saïd Business School, University of Oxford:
'For regulation to have bite, the UK will need clear access to, and understanding of, the training data that drives the LLMs that underpin our generative AI systems. Without that, copyright in the outputs will not be enforceable. Therefore, a key objective for the UK government will be transparency in data.'
Michael Osborne, Professor of Machine Learning in the Department of Engineering Science, University of Oxford:
'I welcome the governance of AI, both its rewards and its risks, by democracies, as the current state of play is that this transformational technology is really governed only by a small number of opaque tech firms.'