Odaily Planet Daily News: On May 14th (last Tuesday, Eastern Time), OpenAI Chief Scientist Ilya Sutskever officially announced his resignation. On the same day, Jan Leike, one of the leaders of OpenAI's Superalignment team, also announced his departure. Last Friday, OpenAI confirmed that the "Superintelligence Alignment Team" co-led by Sutskever and Leike had been disbanded.

In the early morning of May 18th, Jan Leike posted 13 tweets on the social platform X, revealing the real reasons for his resignation and more inside information. His points boil down to two: first, computing resources were insufficient; second, OpenAI does not pay enough attention to safety. Leike said that more resources and energy should be invested in preparing for the next generation of AI models, but that the current development path cannot smoothly achieve that goal. His team faced huge challenges over the past few months and sometimes struggled to obtain sufficient computing resources. Leike also emphasized that creating machines smarter than humans is inherently risky, and that OpenAI is shouldering that responsibility, yet safety culture and processes have been sidelined in the pursuit of product development. OpenAI, he argued, must transform into an AGI company that puts safety first.

In response to Leike's revelations, Altman issued an urgent reply on May 18th: "I am very grateful to Jan Leike for his contributions to OpenAI's super alignment research and safety culture, and I am very sorry to see him leave the company. He pointed out that we still have a lot of work to do; we agree with this and are committed to advancing these efforts. In the next few days, I will write a more detailed article to discuss this issue." Tesla CEO Elon Musk commented on the news of the team's dissolution: "This shows that safety is not OpenAI's top priority."
According to a report on the Vox website on the 17th, OpenAI's board of directors tried to fire CEO Altman last November, but Altman quickly regained power. Since then, at least five of the company's most safety-conscious employees have resigned or been fired. The Wall Street Journal said that Sutskever focused on ensuring that artificial intelligence would not harm humans, while others, including Altman, were more eager to push the development of new technologies. According to Wired magazine, Sutskever was one of the four board members who fired Altman last November. Sources at the company told Vox that safety-conscious employees have lost confidence in Altman: "This is a process of trust collapsing little by little, like dominoes falling one by one." They believe Altman claims to put safety first, but that his behavior contradicts this. U.S. technology blog TechCrunch said on the 18th that OpenAI set aside safety research in favor of launching new products like GPT-4o, which ultimately led to the resignation of the two leaders of the Superalignment team. It remains unclear when, or whether, the technology industry will achieve the breakthroughs needed to create artificial intelligence capable of completing any task a human can. (Global Network)