PANews reported on August 29 that, according to The Verge, OpenAI and Anthropic have agreed to give the US government access to their major new AI models before release to help improve model safety. The two companies have signed memoranda of understanding with the US AI Safety Institute, providing access to the models both before and after public release so that the government can evaluate safety risks and help mitigate potential problems.
In addition, the California State Assembly recently passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), which would require AI companies to take specific safety measures before training advanced foundation models. The bill has drawn opposition from companies including OpenAI and Anthropic, which argue that it could harm small open-source developers.