A recent report reveals a security breach at OpenAI, where a hacker accessed internal messaging systems last year and stole design details about the company’s AI technologies. According to The New York Times, the leak occurred in an online forum where OpenAI employees shared information about their current progress.

According to two anonymous sources familiar with the incident, the hacker did not gain access to the core systems through which OpenAI develops and stores its AI products, such as ChatGPT. The NY Times reported that the company did not disclose the breach to the public because nothing critical was compromised and there was no threat to national security.

OpenAI maintains security measures after the breach

OpenAI executives shared this information with employees at an all-hands meeting in April last year and with the board of directors. Despite the breach, the company did not involve federal law enforcement agencies, attributing the incident to a private individual with no affiliation to any foreign state.

Although the breach did not affect the organization’s most sensitive core systems, according to the report, the incident underscored the need for adequate measures to protect advanced AI technologies.

NYT vs. OpenAI legal battle takes an unexpected turn

In a related development, the legal battle between The New York Times and OpenAI took a major turn recently. OpenAI filed documents urging the court to order the prominent publisher to prove the originality of its articles by providing detailed source materials for each copyrighted work. The request further complicates a case built on claims that NYT content was used without authorization to train AI models.

“The Times alleges, […] it ‘invests an enormous amount of time, . . . expertise, and talent,’ including through ‘deep investigations—which usually take months and sometimes years to report and produce—into complex and important areas of public interest.’”

OpenAI’s lawyers

The request calls for The New York Times to submit all documentation related to its authorship process, including reporters’ notes, interview memos, and records of sources. OpenAI’s lawyers also pointed out that, by the NYT’s own account, it spends significant resources to produce world-class journalism; the creation methods, time, labor, and investment are therefore the crux of the matter, and OpenAI claims a right to inspect them through the discovery process.

The New York Times responded to OpenAI in a legal filing issued on July 3, opposing the demand for source materials. The NYT’s legal team asserted that OpenAI’s request is novel and rests on a misinterpretation of copyright law, contending that the process of creating copyrighted material is irrelevant to the case’s core issue.

The Biden administration is expected to announce new rules in the near future to prevent the misuse of US AI technology by countries of concern, including China and Russia. These would be the first measures proposed to restrict access to sophisticated AI applications like ChatGPT.

Cryptopolitan Reporting by Brenda Kanana