According to Cointelegraph, OpenAI, the artificial intelligence (AI) research and deployment firm behind ChatGPT, has announced the creation of a new team dedicated to tracking, evaluating, forecasting, and protecting against potential catastrophic risks stemming from AI. The new division, called 'Preparedness,' will focus on AI risks related to chemical, biological, radiological, and nuclear threats; individualized persuasion; cybersecurity; and autonomous replication and adaptation.

Led by Aleksander Madry, the Preparedness team will attempt to answer questions such as how dangerous frontier AI systems are when misused and whether malicious actors could deploy stolen AI model weights. OpenAI acknowledges that while frontier AI models have the potential to benefit humanity, they also pose increasingly severe risks. The company is committed to addressing the full spectrum of safety risks related to AI, from current systems to the furthest reaches of superintelligence.

OpenAI is currently seeking talent with various technical backgrounds for its new Preparedness team and is launching an AI Preparedness Challenge for catastrophic misuse prevention, offering $25,000 in API credits to its top 10 submissions. The risks associated with artificial intelligence have been frequently highlighted, with concerns that AI could become more intelligent than any human. Despite these risks, companies like OpenAI continue to develop new AI technologies, sparking further concern. In May 2023, the nonprofit Center for AI Safety released an open letter on AI risk, urging the community to treat mitigating the risk of extinction from AI as a global priority alongside other societal-scale risks, such as pandemics and nuclear war.