Prime minister Rishi Sunak’s much-hyped AI Safety Summit has seen dozens of governments, artificial intelligence (AI) companies and civil society representatives come together to discuss how society should manage the technology’s risks. Depending on who you talk to, however, the event was either a wasted opportunity, a tentative step in the right direction or a historic milestone in global AI governance.
While the AI Safety Summit has been widely panned by trade unions and civil society organisations as a “missed opportunity” – partly because of its focus on speculative future risks over real-world harms already happening, and partly because of workers’ exclusion from an event dominated by big tech – key figures involved in organising it praised the UK government’s approach to AI safety.
For example, when speaking about the exponential growth in computing power that is the backbone of recent AI advances, Ian Hogarth, entrepreneur and chair of the UK government’s £100m Frontier AI Taskforce, said “all powerful technologies prompt questions about how we make them safe”.
“Power and risk go hand in hand,” he said. “From clinical trials to pharmaceuticals, to international regulation of the nuclear industry, every emerging technology, if it is sufficiently powerful, necessitates a conversation about how we make it safe. And that’s why we’ve come together today.”
Linking leaps in compute – which he said has increased by a factor of 10 million in the past 10 years – to the potentially existential risk presented by AI, Hogarth said a number of experts are concerned about uncontrolled advances in AI leading to “catastrophic consequences”, and that he is personally worried about a situation where progress in the technology outstrips our ability to safeguard society.
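For scale, a 10-million-fold increase over 10 years implies compute roughly doubling every five months: 10 million is about 2^23, so 120 months spread across 23 doublings works out at just over five months per doubling. That is a back-of-envelope reading of Hogarth’s figure, not a calculation presented at the summit.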
“There’s a wide range of beliefs in this room as to the certainty and severity of these risks – no one in this room knows, for sure, how or if these next jumps in computational power will translate to new model capabilities or harms,” he said, adding that the Taskforce he heads up has been trying to ground an understanding of these risks in “empiricism and rigour”.
Hogarth added that the summit represents “a brief moment of reflection along this curve, a moment to pause and shape its trajectory and its impact”, and praised the US’s announcement of its own AI Safety Institute, which US secretary of commerce Gina Raimondo confirmed will establish a “formal partnership” with the UK’s version announced by Sunak a week earlier.
Future summits
Matt Clifford, a prominent former tech investor and the prime minister’s representative on AI, also said it was “remarkable” to have pulled off so much in just 10 weeks: from the inclusion of the Chinese government and the confirmation of two future summits (in South Korea and France), to getting 28 countries and the European Union to sign the Bletchley Declaration affirming the need for an inclusive, human-centric approach to ensure AI’s trustworthiness and safety.
“This declaration is a foundation for future conversations and future science,” he said. “One of the things that comes out, I hope, from the declaration is this commitment to international collaboration and the commitment of coming together, and I think we’re really going to want to hold ourselves to account.”
While the press was kept out of the roundtable discussions, Clifford added that a range of opinions were present and some “tough questions” were asked of the AI companies present.
Dutch digital minister comments
Providing further information on the closed roundtables, Dutch digital minister Alexandra van Huffelen said some of the discussions revolved around what the companies themselves are doing to prevent various AI-related harms. “They are stating, ‘We’re making products that we do not exactly know what they’re doing’,” she said. “We know that they can be very harmful, we know the risks are there, but we don’t know actually if we are good enough at doing it.”
She added that firms were also giving introductions to their company policies and other safety-related developments which “sound very nice, but it’s a bit like school kids marking their own homework”.
“Loads of the people in the room – people from NGOs or research institutes, but definitely also from governments and myself – are basically saying, ‘well, this is all very nice, of course, that you think … of the products and services you’re providing’. This is not good enough.”
Asked whether she had changed her mind on anything as a result of the roundtable discussions she had witnessed throughout the day, van Huffelen responded that “it made me believe we need to regulate as fast as we can … I do not want voluntary commitments to be the endgame, but we need far more of that soon in order to make sure we’re ahead of regulation”.
The minister also pointed to a tension between companies wanting more time to test, evaluate and research their AI models before regulation is enacted, and wanting to get their products and services out on the market on the basis that they can only be properly tested in the hands of ordinary users. “This doesn’t give me a lot of comfort, this says we’re testing out products while they are out there in the world,” she said, adding that it is still a positive to have intense debates between NGO researchers, government officials and the companies all around one table.
However, van Huffelen also stressed that she would like more representation of trade unions, workers, and “ordinary people” in future summits. “We’re not only talking about the risks of AI, we’re talking about the consequences to society,” she said. “AI is going to change the workplace, it’s going to change healthcare … so it’s super-important that everybody’s involved. I would very much applaud the idea of having more people at the table to represent themselves … Everybody should be able to trust the digital world and have control of their own digital lives.”
Van Huffelen extended this to including more voices from the Global South, noting that some of the research indicates generative AI will likely create bigger global divides, rather than bridge the existing gaps.
Roundtable roundups
At the end of day one, the roundtable chairs provided feedback in a livestream about what was discussed during the closed sessions.
Angela McLean, the UK government’s chief scientific adviser, who chaired a roundtable on what role the scientific community can play in AI safety, said part of the discussion revolved around the need for “epistemic modesty” as “uncertainty [around AI] is rife”, adding that the burden of proof for demonstrating the technology’s safety should remain with suppliers and those in the scientific community.
She said those involved in the roundtable will publish a list of open research questions that can help direct responsible AI development.
“Clearly there are many technical questions here, but there are many, many social questions as well … time is of the essence, we’re going to need to do a lot of this very fast,” said McLean.
“Left to last because it’s the most important, and it’s come up again and again in other people’s discussions, is the issue of inclusivity. Whose conversation is this? This needs to be everybody’s conversation because these issues affect us all.”
She added that we need to learn the lesson about the “concentration of power that has ended up in the hands of a very small number of people”. “If we can, we need to avoid that happening again,” said McLean. “We also need an inclusivity that recognises the big geographical difference at the moment, not only in who gets to speak, [but] who gets to research and who benefits. We need linguistic inclusion so this is not technology for a small number of languages.”
McLean concluded that those involved in AI decision-making “need to find ways to hear the public – not just consult with the public, but actually hear what they have to say to us … it’s us that need to learn how to hear”.
Tino Cuéllar, president of the Carnegie Endowment for International Peace, who chaired a roundtable on what actions the international community should take in relation to the risks and opportunities of AI, spoke about the need to respect different countries and their ways of operating, and also stressed the need to ensure AI does not become “a province of the Global North”.
He said further discussion revolved around the importance of improving the shared understanding of AI models’ capabilities, as well as developing a more coordinated approach to research internationally.
Speaking about the session she chaired on responsible capability scaling, UK digital secretary Michelle Donelan said there was consensus on the need for “trusted external organisations to run benchmarks that determine risks”.