The U.S. Artificial Intelligence Safety Institute, part of the National Institute of Standards and Technology (NIST) under the U.S. Department of Commerce, has announced new agreements aimed at bolstering AI safety research and evaluation. These agreements, formalized through Memoranda of Understanding (MOUs), establish a collaborative framework with two leading AI companies, Anthropic and OpenAI. Under the partnerships, the institute will gain access to major new AI models from each company, allowing it to conduct thorough evaluations both before and after those models are publicly released.
The MOUs enable the U.S. AI Safety Institute to engage in collaborative research with Anthropic and OpenAI, focusing on evaluating AI capabilities and identifying safety risks. This collaboration is expected to advance methodologies for mitigating potential risks associated with advanced AI systems. Elizabeth Kelly, director of the U.S. AI Safety Institute, emphasized that safety is a critical component of technological innovation and said she looks forward to the technical collaborations with the two firms. She noted that the agreements represent an important milestone in the institute’s ongoing efforts to guide the responsible development of AI.
In addition to these collaborations, the U.S. AI Safety Institute will provide feedback to both Anthropic and OpenAI on possible safety improvements to their models. This work will be conducted in close partnership with the U.K. AI Safety Institute, reflecting a broader international effort to ensure the safe and trustworthy development of AI technologies.
The U.S. AI Safety Institute’s efforts are rooted in NIST’s long history of advancing measurement science, technology, and standards. The evaluations conducted under these agreements will contribute to NIST’s broader AI initiatives, which are aligned with the Biden-Harris administration’s Executive Order on AI. The goal is to support the safe, secure, and trustworthy development of AI systems, building on voluntary commitments made by leading AI developers to the administration.
According to a Reuters report, California lawmakers on Wednesday approved a controversial AI safety bill, which now awaits Governor Gavin Newsom’s decision. Newsom, a Democrat, has until September 30 to either veto the bill or sign it into law. The legislation mandates safety testing and other safeguards for AI models that exceed certain cost or computing-power thresholds, a measure some tech companies argue could hinder innovation.
Featured Image via Pixabay