The UK government has announced plans to open the first overseas office for its AI Safety Institute in the United States this summer.
In a statement published Monday, UK Science and Technology Secretary Michelle Donelan said the new US office will be located in San Francisco. The city is home to some of the biggest AI companies, including OpenAI, Anthropic, and Inflection AI.
UK Looks to Strengthen Partnership with US on AI Safety
The US office will begin operations later this year and is recruiting a team of technical staff headed by a Research Director. The Institute's London headquarters currently employs more than 30 technical staff.
The Technology Secretary said the expansion of the Institute represents British leadership in AI and will establish close collaboration with the US government.
It is a pivotal moment in the UK’s ability to study both the risks and potential of AI from a global lens, strengthening our partnership with the US and paving the way for other countries to tap into our expertise as we continue to lead the world on AI safety.
Michelle Donelan, UK's Science and Technology Secretary.
Donelan Says the Institute Has Grown From “Strength to Strength”
The UK government introduced the AI Safety Institute in November 2023. It said the goal was to “minimize surprises to the UK and humanity from rapid and unexpected advances in AI.”
The Institute primarily focuses on evaluating “frontier” AI systems through the lens of national security. The UK government said these evaluations will enable it to develop effective policy and regulatory responses to AI applications deployed within its jurisdiction.
Since the Prime Minister and I founded the AI Safety Institute, it has grown from strength to strength in just over a year.
Michelle Donelan, UK's Science and Technology Secretary.
The Institute Publishes Results of Its Model Tests
The UK AI Safety Institute is reportedly the first government-backed organization to publish results from its AI model safety tests. The Institute tested five publicly available large language models, though it did not name them.
AI safety is still a very young and emerging field. […] Our ambition is to continue pushing the frontier of this field by developing state-of-the-art evaluations, with an emphasis on national security-related risks.
Ian Hogarth, UK’s AI Safety Institute Chair.
The Institute said several models completed its cybersecurity challenges, but all proved highly vulnerable to basic “jailbreaks.” Some models produced harmful outputs even when there was no deliberate attempt to circumvent their safeguards.
Cryptopolitan reporting by Ibiam Wayas