ChainCatcher reported that, according to The Verge, OpenAI and Anthropic have agreed to give the US government access to their new AI models to help improve model safety. The two companies signed a memorandum of understanding with the US AI Safety Institute, allowing the US government to assess safety risks and mitigate potential problems. The US AI Safety Institute said it will work with its British counterpart to provide feedback on safety improvements.

Separately, California recently passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), which requires AI companies to take specific safety measures before training advanced foundation models. The bill has drawn opposition from companies including OpenAI and Anthropic, which argue that it could harm small open-source developers.