The United Kingdom hosted an Artificial Intelligence (AI) Safety Summit on November 1–2, 2023, at Bletchley Park. The Summit brought together those leading the AI charge, including international governments, AI companies, civil society groups and research experts, to consider the risks of AI and to discuss mitigating those risks through internationally coordinated action.
The output of the Summit was the Bletchley Declaration on AI Safety, which outlines some broad principles for international cooperation on AI—particularly in relation to ensuring an open, global dialogue about AI research and safety. The key takeaways from the Declaration are as follows:
- The transformative potential of AI goes hand in hand with its significant risks.
The Declaration recognizes AI’s global opportunities to transform and enhance human wellbeing, peace and prosperity. It affirms the need to continue using AI responsibly to promote inclusive economic growth, sustainable development and innovation, and to protect human rights and fundamental freedoms. The Declaration highlights that international cooperation is needed to foster public trust and confidence in AI systems to fully realize its potential.
The use of AI also poses significant risks, both foreseeable and currently unforeseen. The Declaration underscores the importance of designing, developing, deploying and using AI in a manner that prioritizes safety and is human-centric, trustworthy and responsible, as the use of AI systems is likely to increase across most areas of daily life, including housing, employment, transport, health and justice. The Declaration issues an international call to action to examine and address the potential impact of AI systems, with particular focus on the protection of human rights, transparency, fairness, accountability, regulation, appropriate human oversight, ethics, bias mitigation, privacy and data protection. The potential for unforeseen risks arising from the capability to manipulate content or generate deceptive content is also flagged as a point of concern.
- There is an urgent need to ensure the safety of “frontier AI.”
Special attention is given to “frontier AI,” which covers highly capable general-purpose AI models (including foundation models) that can perform a wide variety of tasks, as well as specific narrow AI that could exhibit harmful capabilities. Because the capabilities of frontier AI are not fully understood and can be hard to predict, intentional misuse and issues of control (i.e., alignment with human intent) pose substantial risks. The Declaration highlights cybersecurity and biotechnology as areas with the potential for catastrophic harm, along with the capacity of frontier AI systems to amplify disinformation. Given the rapid pace of AI development, the Declaration emphasizes the urgency of deepening understanding of these potential risks and of the actions that can be taken to address them.
The Declaration sets out an international agenda for addressing frontier AI risks that focuses on: (1) identifying AI safety risks of shared concern and building a shared, scientific and evidence-based understanding of those risks, and (2) building risk-based policies, such as evaluation metrics and tools for safety testing, and developing public sector capability and scientific research to ensure safety in light of the identified risks, collaborating as appropriate while recognizing country-specific differences in approach.
- International cooperation through an open global dialogue on AI is vital.
International cooperation, and a broad and inclusive global dialogue on AI that engages nations, international organizations, companies, civil society and academia, are crucial to AI safety. The Declaration recommends that countries consider a pro-innovation and proportionate governance and regulatory approach that maximizes the benefits of AI while remaining mindful of its risks—for example, by making risk classifications and categorizations based on national circumstances and applicable legal frameworks.
Common principles and codes of conduct are noted as relevant, and addressing the risks of frontier AI in particular demands intensified cooperation. Countries are encouraged to provide context-appropriate transparency and accountability regarding their plans to measure, monitor and mitigate the potential harms of frontier AI, including by preventing misuse and issues of control. The Summit attendees will also support an internationally inclusive network of scientific research on frontier AI aimed at providing the best available science for policymaking and the public good. The participating countries committed to meeting again in 2024 to continue their collaborative efforts on AI safety and responsible AI development.
Current International AI Developments
The Summit was held a few days after President Biden’s Executive Order on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence (Executive Order), which is discussed in detail in our earlier Client Alert. The Executive Order establishes new guidelines for AI safety and security, privacy protections, equity and civil rights, consumers’ and workers’ rights, and innovation and competition. It also highlights the U.S. government’s intention to work with international partners and standards organizations to develop and implement AI standards, including multilateral agreements on security guidelines for critical infrastructure.
The EU is currently working toward agreement on the EU AI Act, which is set to become the world’s first comprehensive AI law. The aims of the AI Act align with those of the Declaration in ensuring that AI systems used across the EU are safe, transparent and subject to human oversight to prevent harmful outcomes. Agreement on the EU AI Act is expected to be reached by the end of the year (or early next).
For insights on these rapidly evolving topics, please visit our Artificial Intelligence practice page.