According to the New York Times, artificial intelligence company OpenAI suffered a security breach but did not inform the FBI, law enforcement, or the public.

The New York Times reported on July 4 that OpenAI suffered a security breach in 2023 but chose not to disclose the incident. According to the report, the company's executives discussed the breach at an internal meeting in April but did not make it public, on the grounds that the attacker had not gained access to customer or partner information.

OpenAI's management also concluded that the incident posed no threat to national security, judging the attacker to be a private individual with no ties to any foreign government, and therefore did not report the matter to the FBI or other law enforcement agencies.

The report also stated that the attacker broke into OpenAI's internal communication system and stole details of the company's AI technology from employee discussions on an internal online forum. The attacker was not, however, able to reach the core systems and code OpenAI uses to store and build its artificial intelligence.

The New York Times report is reportedly based on information from two people familiar with the matter.

Former employees’ concerns and the company’s response

The New York Times reported that after the breach, Leopold Aschenbrenner, a former OpenAI researcher, sent a memo to the company's board of directors stressing the urgency of strengthening security measures, particularly to prevent foreign actors, including China, from stealing the company's trade secrets.

In response, OpenAI spokesperson Liz Bourgeois said the company understands Aschenbrenner's concerns and remains committed to investing in the development of safe artificial general intelligence (AGI). However, she disputed some of his points and questioned several details, noting that the company had addressed the relevant security issues before Aschenbrenner joined and had already briefed the board of directors on them.

Aschenbrenner has also claimed that he was fired over an information leak and for political reasons, but Bourgeois stated plainly that his departure had nothing to do with the security concerns he had raised.

Matt Knight, OpenAI's head of security, reiterated the company's commitment to security to the New York Times, pointing out that OpenAI had been investing in security even before it launched ChatGPT, evidence, he argued, of the company's foresight about potential security issues. Knight acknowledged that developing artificial intelligence carries real risks, but emphasized that the company is actively working to identify and understand those risks so it can respond to and mitigate potential threats.

It is worth noting that the New York Times is suing OpenAI and Microsoft over copyright, alleging that the two companies used its content without authorization to train their artificial intelligence models, a dispute that could be seen as a conflict of interest in its coverage. OpenAI has rejected the allegations, arguing that the lawsuit lacks a reasonable factual basis.

Conclusion:

Security issues in the field of artificial intelligence cannot be ignored. Whether or not the New York Times report is entirely accurate, it is a reminder that OpenAI and other AI companies should put security at the core of development, adopt measures for prevention, detection, and remediation, maintain transparency, and disclose security incidents to the public in a timely manner.

At the same time, we should be alert to the possible motivations behind media reporting, especially against the backdrop of an ongoing copyright dispute. We call on the media to remain objective and fair, and to resolve copyright disputes through legal channels.

Finally, industry players, the media, and regulators need to work together to promote the healthy, sustainable, and responsible development of the technology. #OpenAI #SecurityVulnerability #ArtificialIntelligence #AI