After a 90-day self-assessment, OpenAI has announced five key measures to enhance the safety and security of its AI models, demonstrating its commitment to responsible AI development.

On September 16, following a 90-day review of its processes, OpenAI announced five key measures to improve the safety and security of its AI models: establishing independent oversight through a dedicated committee, enhancing security measures, increasing transparency, collaborating with outside organizations, and unifying its safety frameworks.

The move reflects OpenAI’s continued efforts to develop safe and trustworthy AI models and is intended to ensure the safe development and deployment of AI as the technology continues to evolve.

The Safety and Security Committee plays a pivotal role

Specifically, OpenAI has established a Safety and Security Committee with an independent oversight role. This committee, chaired by Zico Kolter – Director of the Machine Learning Department at Carnegie Mellon University – is tasked with overseeing the safety procedures in the development and deployment of OpenAI’s AI models.

Other members of the committee include Adam D’Angelo (Quora co-founder), Paul Nakasone (retired US Army general), and Nicole Seligman (former Executive Vice President and General Counsel of Sony Corporation). The committee will have the authority to delay model releases if there are safety concerns.

Members of OpenAI's Safety and Security Committee. Source: Internet.

In addition, the company will continue to apply risk management methods to protect its AI models. OpenAI said it plans to expand internal information segmentation, grow its 24/7 security operations team, and invest in research and product infrastructure to improve security.

One prominent initiative is exploring the development of an Information Sharing and Analysis Center (ISAC) for the AI industry, to share information about cyber threats across organizations in the sector.

OpenAI has also committed to greater transparency about its work. It has published system cards for GPT-4o and o1-preview, which detail the safety assessments conducted before each model's release, along with the results of external reviews and the risk mitigation measures applied.

In addition, the company is actively collaborating with other organizations to promote safety standards for the AI industry. In particular, OpenAI is working with Los Alamos National Laboratory (USA) to research the safe use of AI in scientific laboratories, and with the AI safety institutes in the US and UK on AI safety standards. Finally, OpenAI has reorganized its research, safety, and policy teams to create an integrated safety framework for developing and monitoring AI models.

This framework will be approved by the Safety and Security Committee and adjusted as AI models become more complex.