All governments participating in the UK’s AI Safety Summit have issued a joint communiqué on the risks and opportunities of the technology, affirming the need for an inclusive, human-centric approach to ensure its trustworthiness and safety.
Signed by all 28 governments in attendance, as well as the European Union (EU), the Bletchley Declaration outlines their shared approach to addressing the risks of “frontier” AI – which they define as any highly capable general-purpose AI model that can perform a wide variety of tasks – and commits to intensified international cooperation going forward.
Recognising that AI is in increasingly widespread use throughout “many domains of daily life” – from health and education to transport and justice – the Bletchley Declaration noted that the AI Safety Summit presents “a unique moment to act and affirm the need for the safe development of AI and for the transformative opportunities of AI to be used for good and for all, in an inclusive manner in our countries and globally”.
The focus of these 28 countries’ cooperation will therefore be on identifying AI safety risks of shared concern, building a shared scientific and evidence-based understanding of those risks, and sustaining that understanding as capabilities continue to develop.
This focus will also extend to building risk-based policies for AI in each of their countries (although the declaration notes that national approaches may differ based on specific circumstances and applicable legal frameworks), which will include evaluation metrics and tools for safety testing, as well as building up public sector AI capabilities and the scientific research base.
In line with the commitment to deeper cooperation, UK digital secretary Michelle Donelan announced in her opening remarks that a second AI Safety Summit will be held in South Korea in six months’ time, followed by another in France a year from now.
Welcoming the UK government’s announcement a week earlier that it would set up an AI Safety Institute, US secretary of commerce Gina Raimondo announced that the Biden administration will be setting up its own AI Safety Institute, housed within NIST, which will take on a role in developing standards for safety, security and testing.
She added that the institute will establish a “formal partnership” with its UK counterpart, and will also set up a consortium to facilitate work with partners in academia, industry and non-profits on advancing the safety of frontier AI.
“In recognition of the transformative positive potential of AI, and as part of ensuring wider international cooperation on AI, we resolve to sustain an inclusive global dialogue that engages existing international fora and other relevant initiatives and contributes in an open manner to broader international discussions, and to continue research on frontier AI safety to ensure that the benefits of the technology can be harnessed responsibly for good and for all,” said the Bletchley Declaration.
It added that the many risks arising from AI are “inherently international in nature”, and are therefore best addressed through international cooperation.
“We resolve to work together in an inclusive manner to ensure human-centric, trustworthy and responsible AI that is safe, and supports the good of all through existing international fora and other relevant initiatives, to promote cooperation to address the broad range of risks posed by AI,” wrote the signatories.
“In doing so, we recognise that countries should consider the importance of a pro-innovation and proportionate governance and regulatory approach that maximises the benefits and takes into account the risks associated with AI.
“This could include making, where appropriate, classifications and categorisations of risk based on national circumstances and applicable legal frameworks.”
While the Bletchley Declaration outlines a number of areas where AI can have a positive impact – including in public services, science, food security, clean energy, biodiversity, sustainability, and the enjoyment of human rights – it stresses that the technology poses significant risks, including in the “domains of daily life” where it is already being used.
Given the current breadth of AI deployments, signatories said they welcomed “relevant international efforts” to examine and address the potential impacts of AI systems, recognising that “the protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection needs to be addressed”.
It added there are also “substantial risks” around intentional misuse of the technology, or unintended issues of control relating to systems’ alignment with human intent: “We are especially concerned by such risks in domains such as cyber security and biotechnology, as well as where frontier AI systems may amplify risks such as disinformation. There is potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models.
“Given the rapid and uncertain rate of change of AI, and in the context of the acceleration of investment in technology, we affirm that deepening our understanding of these potential risks and of actions to address them is especially urgent.”
Speaking in the morning, Ian Hogarth, entrepreneur and chair of the UK government’s £100m Frontier AI Taskforce, said he is particularly worried about a situation where technological progress around AI outstrips our ability to safeguard society.
He added that while there are a wide range of beliefs about the certainty and severity of “catastrophic consequences” arising from AI, “no one in this room knows for sure how or if these next jumps in computational power will translate to new model capabilities or harms”.
However, more than 100 civil society organisations signed an open letter published ahead of the AI Safety Summit branding the event “a missed opportunity”, on the basis that it is a closed shop dominated by big tech and excludes the groups most likely to be affected by AI, such as workers.
Notable signatories include Connected by Data; the Trades Union Congress (TUC); and the Open Rights Group (ORG) – the three of which led on coordinating the letter – as well as Mozilla; Amnesty International; Eticas Tech; the Tim Berners-Lee-founded Open Data Institute; Tabitha Goldstaub, former chair of the UK’s AI Council; and Neil Lawrence, a professor of machine learning at the University of Cambridge, who was previously interim chair of the Centre for Data Ethics and Innovation’s (CDEI) advisory board before it was quietly disbanded by the government in early September 2023.
Union federations representing hundreds of millions of workers from across the globe also signed, including the American Federation of Labor and Congress of Industrial Organizations (AFL-CIO), which represents 60 unions and 12.5 million American workers; the European Trade Union Confederation (ETUC), which represents 45 million members from 93 trade union organisations in 41 European countries; and the International Trade Union Confederation, which represents 191 million trade union members in 167 countries and territories.