On 1 and 2 November 2023, world leaders, politicians, computer scientists and tech executives attended the global AI Safety Summit at Bletchley Park in the UK. Key political attendees included US Vice President Kamala Harris, European Commission President Ursula von der Leyen, UN Secretary-General António Guterres, and UK Prime Minister Rishi Sunak. Industry leaders also attended, including Elon Musk, Google DeepMind CEO Demis Hassabis, OpenAI CEO Sam Altman, Amazon Web Services CEO Adam Selipsky, and Microsoft president Brad Smith.
Day 1: The Bletchley Declaration
On the first day of the summit, 28 countries and the EU signed the Bletchley Declaration (“Declaration”). The Declaration establishes an internationally shared understanding of the risks and opportunities of AI and the need for sustainable technological development to protect human rights and to foster public trust and confidence in AI systems. In addition to the EU, signatories include the UK, the US and, significantly, China. Nevertheless, there are notable absences, most obviously Russia.
The Declaration acknowledges that:
- AI systems are already deployed across many domains of daily life, including housing, employment, transport, education, health, accessibility, and justice;
- we face a unique moment to manage AI safety risks, particularly in relation to ‘frontier AI’ (defined as “highly capable general-purpose AI models, including foundation models, that could perform a wide variety of tasks – as well as relevant specific narrow AI that could exhibit capabilities that cause harm – which match or exceed the capabilities present in today’s most advanced models”);
- frontier AI capabilities are not fully understood and are hard to predict (with particular concern in relation to cybersecurity and biotechnology, and the risk of disinformation);
- AI can manipulate content and generate deceptive content, creating unforeseen risks;
- AI can be misused deliberately and there are issues of managing control;
- transparency, fairness, accountability, regulation, safety, appropriate human oversight, ethics, the protection of human rights, bias mitigation, privacy and data protection all must be addressed;
- international cooperation is needed to understand, monitor, and minimise risks to ensure that the benefits of AI can be harnessed responsibly for the public good; and
- all actors (including nations, international fora and other initiatives, companies, civil society, and academia) – and particularly developers of frontier AI capabilities (which pose the most urgent and dangerous risks) – have a role to play in ensuring the safety of AI.
Day 2: The UN AI Panel and AI Model Testing
Speaking on the second day of the summit, UN Secretary-General António Guterres highlighted the recent announcement of a new AI Advisory Body. Major tech companies and governments agreed to collaborate in testing advanced AI models before their release and to subject these models to ongoing testing throughout their lifecycle. Governments agreed to invest in capacity for testing and other safety research, to share findings, and to collaborate in developing shared standards. The UK had previously announced the establishment of an AI safety institute, and US Commerce Secretary Gina Raimondo recently announced the creation of a US AI safety institute. The UK and US governments have indicated that these institutes will conduct testing and will share their findings, which will be made public.
Furthermore, countries in attendance agreed to set up a panel to publish an AI ‘State of the Science Report,’ which will assess existing research on the risks and capabilities of frontier AI and the priorities for further research into AI safety. The report will be overseen by Canadian computer scientist and UN Scientific Advisory Board member Yoshua Bengio.
The Declaration and other measures discussed at the summit provide an opportunity to build coordinated international AI regulation. They follow the recent Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence signed by US President Biden, which (among other things) requires AI developers to share safety test results with the US government before officially releasing AI systems. Meanwhile, the EU is in the process of finalising the EU AI Act, which will regulate the use of AI systems in the EU and will likely be the world’s first legally binding comprehensive AI legislation.
Against this backdrop, it remains to be seen what level of collaboration the Declaration itself will lead to among the international community, or whether some governments and policymakers will nevertheless continue down the path of voluntary or unilateral AI regulation. Further summits are due to follow, with a virtual summit co-hosted by South Korea in around six months and a summit hosted by France in around a year, which it is hoped will continue to foster international collaboration.
The material contained in this communication is informational, general in nature and does not constitute legal advice. The material contained in this communication should not be relied upon or used without consulting a lawyer to consider your specific circumstances. This communication was published on the date specified and may not include any changes in the topics, laws, rules or regulations covered. Receipt of this communication does not establish an attorney-client relationship. In some jurisdictions, this communication may be considered attorney advertising.