Expert Comment: Leading AI nations convene for day one of the UK AI Summit
Today [01/11/23], delegates from 28 governments, including China and the US, gathered at Bletchley Park in the UK for talks on how to regulate artificial intelligence.
Leading AI nations in attendance have reached a world-first agreement establishing a shared understanding of the opportunities and risks posed by frontier AI.
Oxford AI experts comment during day one of the UK AI Summit:
Professor Robert F Trager, Director of the Oxford Martin AI Governance Initiative at the University of Oxford, says:
'The declaration says "We resolve to work together" to ensure safe AI, but is short on details of how countries will cooperate on these issues. The Summit appears to have achieved a declaration of principles to guide international cooperation without having agreed on a roadmap for international cooperation.
'The declaration says that "actors developing frontier AI capabilities, in particular those AI systems which are unusually powerful and potentially harmful, have a particularly strong responsibility for ensuring the safety of these AI systems." This suggests that governments are continuing down the road of voluntary regulation, which is very likely to be insufficient. It also places the declaration somewhat behind the recent US executive order, which leverages the Defense Production Act and other legal instruments to create binding requirements. This is confirmed when the declaration "encourage[s]" industry leaders to be transparent.'
Professor Vincent Conitzer, Head of Technical AI Engagement at the Institute for Ethics in AI, says:
'It is encouraging to see the AI Safety Summit taking place. AI is a technology that is in some ways unlike any other, and we have seen dramatic progress in it over the past decade. Unfortunately, much of this technical progress has come along a branch of AI that makes it very difficult for us to understand or carefully steer what exactly the AI is doing, or even what the next version of a system will be capable of.
'As a consequence, the variety of concerns raised by AI, across both AI safety and AI ethics, is enormous, and the one thing we can be sure of is that we do not even understand all the risks yet. Many of these challenges require not just technical understanding but also interdisciplinary expertise of a type that we have traditionally not trained people for. Some people look at the situation and lament that issue X is getting attention because they think it's taking away resources from issue Y, and others feel that it's the other way around. In my view, in reality, issues X and Y are often related, and the real takeaway should be that there is just a lot of very important work that needs to be done.'
Professor Keegan McBride, Departmental Research Lecturer in AI, Government, and Policy, and Director of the MSc Programme in the Social Science of the Internet, says:
'As the impact that AI has on the world continues to grow, governments are beginning to grapple with how best to regulate and control AI. The decisions made by leading policymakers now will have longstanding geopolitical implications for the global distribution of power in the age of AI. To ensure that AI remains aligned with democratic values and norms, it is essential that fears over the perceived risks of AI do not lead to policies which inhibit innovation or drive the centralization of AI development. Instead, governments must create a regulatory regime that supports, rather than fears, the open development of cutting-edge AI systems.'
Experts at Oxford are developing fundamental AI tools, using AI to tackle global challenges, and addressing the ethical issues of new technologies.
Find out from world-leading experts what AI means and how it's impacting our society, and discover the groundbreaking ways artificial intelligence is being applied at Oxford.