OpenAI, the company behind ChatGPT, recently established a Safety and Security Committee made up of senior leaders to oversee key safety and security decisions for its projects and operations. However, the move has raised AI ethics concerns because no independent experts were included.

The committee will be led by CEO Sam Altman and board members Bret Taylor, Adam D'Angelo, and Nicole Seligman, joined by technical experts Jakub Pachocki, Aleksander Madry, Lilian Weng, Matt Knight, and John Schulman. The committee will evaluate OpenAI's processes and safety measures over the next 90 days.

Although OpenAI says the committee was established to strengthen AI safety, the absence of independent outside experts has drawn widespread concern. Some AI ethics experts have criticized OpenAI for prioritizing product development over safety, a tension they link to the recent departure of several senior AI safety employees.

Daniel Kokotajlo, a former member of OpenAI's governance team, and Ilya Sutskever, co-founder and former chief scientist, left the company in April and May, respectively, after disagreements with Altman over prioritizing product development ahead of AI safety research.

Jan Leike, a former DeepMind researcher who worked on ChatGPT and its predecessor InstructGPT at OpenAI, also resigned, saying he believes the company is not on the right trajectory to address AI safety and security. Gretchen Krueger, an AI policy researcher, left as well, calling on the company to improve accountability and transparency.

Besides publicly calling for AI regulation, OpenAI has also worked to shape those rules, hiring experts and spending hundreds of thousands of dollars on lobbying in the US. Recently, the US Department of Homeland Security announced that Altman will be a member of its newly established Artificial Intelligence Safety and Security Board, which advises on the safe and secure development and deployment of AI across America's critical infrastructure.

OpenAI is now trying to allay these concerns by announcing that it will retain independent safety, security, and technical experts to support the committee. However, the company has not published their names or specified what authority and influence they will have over the committee.

OpenAI's decision to vet its own AI safety has stirred controversy over the ethics and responsibilities of companies developing AI. Experts are calling for independent oversight and greater transparency in AI development to ensure it remains safe and ethical for humanity.