Ensuring AI Safety: UK-US Bilateral Agreement on AI Testing

The UK and US have forged a groundbreaking pact aimed at collaborating on the advancement of artificial intelligence (AI) testing.

Signed on Monday, the agreement outlines a commitment to jointly develop reliable methods for assessing the safety of AI tools and their underlying systems. This marks the first bilateral agreement of its kind between the two nations.

Michelle Donelan, the UK’s tech minister, emphasized the significance of this initiative, labeling the safe development of AI as “the defining technology challenge of our generation.” She underscored the shared global responsibility in ensuring the safe evolution of AI technology.

“Only through collaborative efforts can we effectively tackle the risks associated with this technology and harness its vast potential to improve our lives,” stated the Secretary of State for Science, Innovation, and Technology.

The agreement further solidifies commitments established during the AI Safety Summit convened at Bletchley Park in November 2023. At the summit, attended by prominent figures in the AI sector including Sam Altman from OpenAI, Demis Hassabis from Google DeepMind, and tech mogul Elon Musk, both the UK and US pledged to establish AI Safety Institutes. These institutes aim to assess both open-source and closed-source AI systems.

Although there has been a relative lull in AI safety announcements since the summit, the AI sector itself has seen remarkable activity. Intense competition persists among major AI chatbots, such as ChatGPT, Gemini, and Claude.

Currently, the vast majority of the active firms in this sphere are based in the US. While they remain open to the idea of regulation, authorities have yet to impose any restrictions on their endeavors.

Likewise, regulators have not compelled these AI firms to disclose information they may be hesitant to share, such as the specifics of the data utilized to train their tools or the environmental impact of operating them.

The EU’s AI Act is progressing towards enactment, and once enforced, it will mandate developers of certain AI systems to transparently disclose their risks and provide details about the data employed.

This is particularly significant in light of OpenAI’s recent announcement that it would not release a voice cloning tool it had developed, citing “serious risks” associated with the technology, especially during an election year.

In January, there was a notable incident involving a fabricated robocall generated by AI, falsely claiming to be from US President Joe Biden, urging voters to abstain from a primary election in New Hampshire.

At present, AI firms in both the US and UK are primarily self-regulating.
