As Cointelegraph reported, artificial intelligence companies OpenAI and Anthropic recently agreed to provide the U.S. AI Safety Institute with early access to any “significant” new AI models they develop.

Although the agreement was ostensibly signed out of shared safety concerns, it remains unclear what the government's specific role would be in the event of a technological breakthrough with significant safety implications.

OpenAI CEO and co-founder Sam Altman said on the X social media platform that the agreement was a necessary step, stressing that the United States needs to continue to lead in AI development.

Both OpenAI and Anthropic are committed to developing artificial general intelligence (AGI), meaning AI that can accomplish any task a human can. Each company has its own charter and mission centered on creating AGI safely, with humanity's interests at its core.

By signing agreements with the U.S. government to disclose their models before launch, the two companies have effectively ceded part of that safety-vetting responsibility to the federal government.

According to Cointelegraph, OpenAI may be on the verge of a breakthrough: its “Strawberry” and “Orion” projects are purportedly capable of advanced reasoning and of addressing AI’s hallucination problem. The U.S. government has reportedly already seen these tools, along with early internal iterations of them within ChatGPT.

The agreement and related safety guidelines are voluntary for participating companies, according to a blog post from the U.S. National Institute of Standards and Technology. Supporters argue that such light-touch regulation will foster growth and allow the industry to regulate itself.

However, one potential problem with light-touch regulation is a lack of transparency. If OpenAI or Anthropic succeed in their mission and the government decides the public doesn’t need to know, neither company would be under any legal obligation to disclose it.