A group of OpenAI employees has filed a complaint with the SEC, accusing the company of preventing them from reporting the serious risks posed by its AI technology.

The seven-page complaint, filed with the US Securities and Exchange Commission (SEC) earlier this month and first reported by The Washington Post, accuses OpenAI of imposing overly restrictive employment, severance, and confidentiality agreements that could expose employees who speak up about their concerns to penalties.

Specifically, the whistleblowers allege that OpenAI required employees to waive their right to compensation for reporting legal violations. In addition, any information an employee wanted to share with the federal government, whether about a safety risk or a violation of the law, had to be approved by OpenAI in advance.

These provisions are said to run counter to federal laws designed to protect whistleblowers who expose wrongdoing within organizations.

Lawyers representing the group of employees said that OpenAI seriously violated laws protecting whistleblowers, hindering the government's efforts to monitor and regulate the nascent AI field. "These contracts are like an implicit threat: 'Don't ever think about reporting to the authorities,'" said one employee, speaking anonymously.

OpenAI has denied the accusations, maintaining that company policy protects the rights of whistleblowers. "Our policies protect workers' rights to disclose legally protected information," said OpenAI spokeswoman Hannah Wong.

Wong also emphasized that OpenAI encourages serious debate about AI technology and has changed its severance policy to remove the controversial provisions.

OpenAI CEO Sam Altman spoke to reporters on Capitol Hill last year. (Elizabeth Frantz/The Washington Post)

Still, the case once again raises the alarm that technology companies, especially in the AI field, may be exploiting legal loopholes to silence workers and avoid accountability.

Experts are calling on the SEC and other authorities to take stronger measures against practices that hinder whistleblowers and to ensure transparency and accountability in this key technology sector.

OpenAI's case comes amid growing concern about the breakneck pace of AI development and the risk that the technology could be abused for harmful purposes.

The incident underscores the need for policymakers, regulators, and the public to work together to build a legal framework strong enough to both encourage innovation and effectively control hidden risks, ensuring AI truly serves human interests.