US and UK Forge Historic AI Safety Agreement

The United States and the United Kingdom have forged a landmark AI safety agreement to collaborate on the development of rigorous testing protocols for advanced artificial intelligence (AI) systems, marking a significant step towards ensuring their safe and responsible deployment.

Signed by UK Technology Secretary Michelle Donelan and US Commerce Secretary Gina Raimondo, the Memorandum of Understanding establishes a robust partnership between the two nations. It aims to align the scientific approaches of both countries and exchange research expertise to rapidly iterate evaluation methods for cutting-edge AI models, systems, and agents.

The agreement emphasizes the importance of addressing the risks associated with AI technology head-on while harnessing its vast potential to benefit society. It follows through on commitments made at the AI Safety Summit hosted in the UK last November, where major AI firms like OpenAI and Google DeepMind agreed to a voluntary scheme allowing AI safety institutes to evaluate and test new AI models before release.

Global AI Safety Collaboration: UK and US Institutes Unite

Under the partnership, the UK’s AI Safety Institute and its US counterpart will exchange research expertise to mitigate AI risks, including independently evaluating private AI models.

Both governments recognize the urgent need for a shared global approach to AI safety to keep pace with emerging risks, with the partnership taking effect immediately to facilitate seamless cooperation between the organizations.

The UK and US intend to collaborate on conducting joint testing exercises involving publicly accessible AI models, as well as exploring the possibility of personnel exchanges between their respective institutes. Moreover, they aim to forge similar partnerships with other countries to promote AI safety on a global scale.

"The work of our two nations in driving forward AI safety will strengthen the foundations we laid at Bletchley Park in November, and I have no doubt that our shared expertise will continue to pave the way for countries tapping into AI's enormous benefits safely and responsibly," said Michelle Donelan, UK Technology Secretary.

In addition to bilateral efforts, recent initiatives by the US government underscore the importance of addressing AI risks comprehensively.

A new policy requires federal agencies to identify and mitigate potential AI risks, designate a chief AI officer, and create detailed inventories of their AI systems. Uniform standards across all agencies are intended to close gaps in oversight and reduce exposure to AI-related vulnerabilities.

Mastercard and Tech Titans Unite: AISIC Consortium Advances AI Safety

This year also saw the launch of the Artificial Intelligence Safety Institute Consortium (AISIC) by the National Institute of Standards and Technology (NIST). The consortium’s goal is to encourage cooperation between industry and government to advance the safe use of AI.

Mastercard CEO Michael Miebach emphasized the importance of establishing meaningful standards to build trust in AI technology, highlighting the need for inclusive innovation.

Mastercard is among the over 200 members of the AISIC consortium, which includes tech giants like Amazon, Meta, Google, and Microsoft, as well as academic institutions such as Princeton and Georgia Tech, and various research groups.

The collaboration between the UK and US on AI safety, along with efforts by governments and industry stakeholders to address AI risks comprehensively, underscores the importance of prioritizing safety and responsibility in AI development and deployment.

By working together and fostering international partnerships, stakeholders can build trust in AI technology and realize its potential to drive positive societal impact while keeping risks in check.
