According to the Wall Street Journal, OpenAI, the developer of ChatGPT, suffered a major security breach early last year that raised concerns about potential national security risks. The breach exposed internal discussions among researchers and employees but did not compromise the core code behind OpenAI's systems. Despite the seriousness of the incident, OpenAI chose not to disclose it publicly, a decision that later drew both internal and external scrutiny.

OpenAI internal communications compromised

In early 2023, a hacker broke into OpenAI's internal messaging system and extracted details about the company's AI technology. According to two people familiar with the matter, the hacker gained access to an internal online forum where employees discussed the latest AI advances but did not penetrate the systems where the company houses and builds its core technology.

OpenAI executives chose not to disclose the breach publicly

According to sources, OpenAI executives informed employees of the breach at an all-hands meeting at the company's San Francisco headquarters in April 2023 and also briefed the board of directors. Executives chose not to notify the public, reasoning that no customer or partner information had been compromised. They assessed that the hacker was a private individual with no known ties to a foreign government, and they did not report the matter to law enforcement agencies, including the FBI.

Concerns about foreign espionage grow

The breach heightened concerns among OpenAI employees that foreign adversaries, particularly China, could steal AI technology and threaten U.S. national security. It also sparked debate within the company about the adequacy of OpenAI's security measures and about the broader risks associated with artificial intelligence.

AI security issues emerge

Following the incident, Leopold Aschenbrenner, a technical program manager at OpenAI, sent a memo to the board expressing concern about the company's vulnerability to espionage by foreign entities. Aschenbrenner was later fired over an alleged leak; he maintains that the company's security measures were insufficient to protect against sophisticated threats from foreign actors.

Official statement from OpenAI

OpenAI spokesperson Liz Bourgeois acknowledged Aschenbrenner's concerns but said his departure was not related to the issues he raised. She stressed that OpenAI is committed to building secure artificial general intelligence (AGI), but disagreed with Aschenbrenner's assessment of its security protocols.

Technological espionage amid U.S.-China tensions

Concerns about China are not unfounded: Microsoft President Brad Smith, for example, recently testified that Chinese hackers had used the company's systems to attack federal networks. However, legal constraints prohibit OpenAI from discriminating in hiring based on national origin, and excluding foreign talent could in any case hinder U.S. AI progress.


The importance of diverse talent

Matt Knight, OpenAI's head of security, emphasized that recruiting top global talent is necessary despite the risks, and stressed the importance of balancing security concerns against the need for innovative minds to advance AI technology.

The entire AI industry is facing challenges

OpenAI is not the only company facing these challenges. Competitors such as Meta and Google are also developing powerful AI systems, some of them open source, which promotes transparency and collective problem-solving across the industry. However, concerns remain that AI could be used to spread misinformation and displace jobs.

National security risk assessment: could AI create biological and chemical weapons?

Studies by AI companies such as OpenAI and Anthropic have found that current AI technology poses little risk to national security. Yet debate continues over AI's future potential to help create biological and chemical weapons or hack into government systems. Companies like OpenAI and Anthropic are actively addressing these concerns by strengthening their security protocols and establishing committees focused on AI safety.

Government legislative action: restrictions on certain AI technologies

Federal and state lawmakers are considering regulations that would limit the release of certain AI technologies and penalize harmful uses. The regulations are intended to reduce long-term risks, although some experts believe significant dangers from AI are still years away.

China’s progress in AI

Chinese companies are making rapid progress in AI technology, and China produces a large share of the world's top AI researchers. Experts such as Hugging Face CEO Clément Delangue believe China may soon surpass the United States in AI capabilities.

Call for responsible AI development

Even if the probability of worst-case AI scenarios is currently low, prominent figures such as former U.S. national security adviser Susan Rice are calling for them to be taken seriously, emphasizing the responsibility to address potentially high-impact risks.

This article, "Wall Street Journal: Hackers invaded OpenAI, raising national security concerns," first appeared on Chain News ABMedia.