According to Odaily, OpenAI and Anthropic have agreed to give the US government access to their AI models both before and after release in order to strengthen safety measures. The US AI Safety Institute announced on Thursday that the initiative aims to evaluate safety risks and mitigate potential issues. The development follows the California legislature's recent passage of the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), which requires AI companies to implement safety measures. The bill has drawn opposition within the industry, with critics arguing it could harm small open-source developers.