Expert Comment: Paris AI Summit misses opportunity for global AI governance

From the University of Oxford's Institute for Ethics in AI, John Tasioulas, Professor of Ethics and Legal Philosophy; Ignacio Cofone, Professor of Law and Regulation of AI; and Dr Caroline Green, Director of Research, discuss this month's Paris AI Summit, which aimed to find a shared direction for the regulation of artificial intelligence (AI).

Following previous summits at Bletchley Park (2023) and Seoul (2024), global leaders re-convened at the Paris AI Summit this month to find a shared direction for the regulation of artificial intelligence (AI). The Declaration agreed at Paris, signed by fifty-seven states and bodies such as the UN, the OECD, and the European Commission, represented a welcome shift away from the 'AI safety' rhetoric, focused on existential threats, that had marked previous summits, particularly at Bletchley Park. Instead, the Paris Declaration reflected common concerns around the impact of AI on values such as sustainability, human rights, inclusivity and, especially, the future of work. 

Professor John Tasioulas. Credit: Ian Wallman.
Perhaps more significant than the Paris Declaration itself, however, was the decision of the US and the UK not to sign it. The UK’s refusal appears to be driven by both a strategic alignment with the US and a pre-existing preference for a less precautionary approach to AI regulation than the EU's. Insights into the US’ refusal to sign the Declaration can be gleaned from the speech made by the US Vice-President, J.D. Vance, at the summit. His overarching thesis was the need to rebalance the discourse around AI away from a preoccupation with safety and towards seizing the opportunities afforded by AI. According to Vance, 'excessive regulation' risks squandering these immense opportunities.

In line with an 'America First' approach held by the Trump administration, the seizing of opportunities through 'pro-growth AI policies' was presented foremost as a matter of advancing US national interests. The US, according to Vance, sets the 'gold standard' in AI, and a priority of the current administration is to ensure its continued leadership in the field. But this candid appeal to national interest was also buttressed by four ethical considerations that were intended to have a wider resonance among his audience of world leaders, and which arguably militate, intentionally or not, in favour of certain forms of regulation. 


The first consideration was framed as fairness but was fundamentally about anti-trust. The insistent calls for AI regulation in the name of safety—including protection from the supposed existential threat this technology poses—often come from the same large incumbent tech companies that stand to benefit from regulatory barriers that limit competition from new market entrants. As Vance pointed out, when industry leaders demand aggressive regulations, policymakers should question whether these measures are truly for the public benefit or rather serve to entrench the dominance of established players. 

But taking market fairness seriously also requires asking whether the existence of an AI economy dominated by a small number of powerful companies should be tolerated in the first place. Vance had previously advocated for anti-trust action against big tech, once tweeting, 'Long overdue, but it's time to break Google up.' He has also expressed admiration for Lina Khan, the former Federal Trade Commission chair known for an aggressive stance against anti-competitive practices. However, with Khan stepping down and her successor signalling a retreat from her policies, the Trump administration's commitment to market fairness is questionable. Without a broader willingness to curb corporate power through vigorous anti-trust enforcement, scepticism about the sincerity of this fairness argument is warranted. Precisely this issue is highlighted by the Paris Declaration's priority of fostering innovation while preventing market concentration.

Professor Ignacio Cofone. Credit: Ian Wallman.

A second ethical consideration was Vance's emphasis on free speech, arguing that AI should be 'free from ideological bias' and not become a tool for authoritarian censorship. This echoes the broader Trump administration agenda, as expressed in the Executive Order on 'Restoring Freedom of Speech and Ending Federal Censorship.' Shortly before this order was signed on January 20th, Meta announced it would reduce content moderation by ending its third-party fact-checking program, signalling alignment with this new deregulatory stance.

In line with this consideration, Vance warned against foreign regulations that 'tighten screws' on US tech companies, likely referring to the EU's Digital Services Act and the UK's Online Safety Act. Some may view parts of the Paris Declaration—such as its call for AI to be inclusive and to adhere to international frameworks—as embodying some form of 'ideological bias' that conflicts with the rather exceptionalist American free speech model. Yet different countries, shaped by distinct histories and values, reasonably adopt different free speech standards. The broader issue is whether a single country, in this case the US, should pressure others into conforming to its vision of free expression.

This highlights a larger issue: if 'America First' is to be anything more than a form of crudely self-centred nationalism, it must acknowledge the right of other nations to similarly prioritise their own interests and values—including in speech regulation. The assumption that uniform global (US) speech standards are necessary may, in fact, reflect the commercial interests of American tech giants more than any principled commitment to free expression or the best interests of US citizens. By contrast, the Paris Declaration seeks to promote international cooperation while allowing space for regulatory diversity.

As a third key consideration, Vance distinguished his vision of AI from that of Silicon Valley, criticising industry leaders for promoting an AI future centred on automation and job displacement. Tech executives, anticipating mass lay-offs due to AI, have supported measures such as a Universal Basic Income as a solution. In contrast, Vance proposed a 'pro-worker growth path' in which AI enhances productivity, raises wages, and creates jobs rather than eliminating them. This vision aligns with the Paris Declaration's priority of encouraging AI deployment that benefits labour markets by positively shaping the future of work and fostering sustainable economic opportunities.


While appealing, a worker-centric AI model requires more than rhetoric. As Acemoglu and Johnson argue in Power and Progress, such an approach demands comprehensive policies: anti-trust measures to curb corporate overreach, government incentives for AI applications that enhance worker productivity rather than replace workers, and strong labour protections, including unions with the power to influence AI deployment. [1] Such an approach also requires moving away from AI safety as an overarching rubric for regulation. [2] Here, the Trump administration's actual labour policies come into question. Without concrete commitments to worker empowerment, the promise of 'pro-worker AI' rings hollow.

Finally, Vance underscored the necessity of robust energy infrastructure to support AI's escalating energy demands. The implication here is that the Paris Declaration's emphasis on making AI 'sustainable for people and the planet' might conflict with the scale of energy investment required for AI's full potential to be realised. This issue does not receive the attention it deserves in global AI discussions. The environmental footprint of AI is massive, raising the prospect of difficult trade-offs between AI innovation and sustainability. The Paris Declaration explicitly addresses sustainability concerns, stressing the importance of environmental responsibility in balancing these trade-offs.

The US administration appears focussed primarily on AI growth at the cost of higher energy consumption and the environmental harm that it involves. However, there were signs of a subtler message in Vance’s speech: that AI technology may itself be a means of achieving environmental sustainability, enabling us to create and store new forms of energy and deploy them efficiently. For this to occur, however, the extensive energy needs of the technology must be met in the shorter term. How these competing strategies should be adjudicated is an open and difficult ethical question.

Dr Caroline Green. Credit: Ian Wallman.
Contrary to predictions that the vaunted ‘Brussels effect’ will gradually lead the US to converge on the EU model of technological regulation, [3] it now seems, in the short term at least, that a ‘Trump factor’ – operationalised by methods such as the failure to join multilateral efforts and the threat of tariffs – is actually nudging the EU closer to the American anti-regulatory model. Indeed, European leaders, such as French President Macron, have already hinted at the need for a more simplified and business-friendly approach. In the meantime, the EU has abandoned its AI Liability Directive, which had drawn criticism from American businesses operating in the EU.

Although the US Vice-President framed his speech in terms of the immense opportunities of an AI revolution, the dominant sentiment arising from the Paris summit is a feeling of missed opportunity. The summit ultimately served to demonstrate the absence of a unified democratic consensus on AI regulation. The leading democratic states failed in their responsibility to articulate a cooperative approach to AI regulation that transcended their ideological and policy differences. Vance acknowledged the risks of authoritarian regimes using AI for surveillance and propaganda, vowing to block such efforts. However, countering these threats and realising the potential of AI for the good of humans requires more than unilateral action; it requires a shared regulatory vision among democratic allies.

A future in which AI works for the benefit of all will not be the product of any one country determining its course, any more than it will be the product of a handful of big tech corporations doing so. AI awaits its urgently needed global regulatory moment.

[1] Daron Acemoglu and Simon Johnson, Power and Progress: Our Thousand Year Struggle over Prosperity and Technology (Basic Books, 2023).

[2] Josiah Ober and John Tasioulas, 'The Lyceum Project: AI Ethics with Aristotle', pp. 50-53 (June 17, 2024) 

[3] Anu Bradford, Digital Empires: The Global Battle to Regulate Technology (Oxford University Press, 2023).