What's going on at OpenAI? Half of the security team has left!

Departures from OpenAI's AGI Security Team have raised concerns about the safety of the company's artificial general intelligence (AGI) systems.

There has been a wave of departures from OpenAI’s #AGI Security Team, which is responsible for ensuring that the artificial general intelligence (AGI) systems the company develops do not pose a threat to humanity. Nearly half of the researchers on that team have left the company in the past few months, according to former OpenAI governance researcher Daniel Kokotajlo.

Since 2024, the AGI Security Team at OpenAI, the artificial intelligence giant best known for GPT language models such as #ChatGPT , has shrunk from roughly 30 people to 16. This has raised concerns about whether OpenAI pays enough attention to security.

According to Kokotajlo, these departures were not part of an organized movement; rather, individual researchers decided to leave on their own. #OpenAI said in a statement that it is confident in its ability to deliver the world’s most secure AI systems.

Still, departures on this scale cannot be ignored, and they cast doubt on the company’s future in security research. They also raise open questions: how will #OpenAI approach the security of future AGI systems, and what new projects will the researchers leaving the team turn to next?