In an effort to improve its AI models, OpenAI is enlisting penetration testing experts to find vulnerabilities in its widely used AI chatbot platform.

Image: Koshiro K / Shutterstock.com

To bolster the security of its popular AI chatbot, OpenAI is seeking external cybersecurity and penetration testing experts, also known as “red teams,” to hunt for vulnerabilities in its AI platform.

The company said it is seeking experts in a variety of fields, including cognitive and computer science, economics, health care and cybersecurity. OpenAI said its aim is to improve the safety and ethics of artificial intelligence models.

The open invitation comes as the U.S. Federal Trade Commission launches an investigation into OpenAI’s data collection and security practices, and as policymakers and businesses question the safety of using ChatGPT.

“This is about crowdsourcing volunteers to get involved and do interesting security work,” Halborn co-founder and chief information security officer Steven Walbroehl told reporters. “It’s an opportunity to network and be at the forefront of technology.”

Walbroehl added, “The best hackers love to hack the latest emerging technologies.”

To sweeten the deal, OpenAI said red team members would be compensated and wouldn’t need any experience in AI, just a willingness to contribute a different perspective.

“We are announcing open recruitment for the OpenAI Red Team Network and invite domain experts interested in improving the safety of OpenAI models to join our efforts,” OpenAI wrote. “We are looking for experts from a variety of fields to work with us to rigorously evaluate and red team our AI models.”

Red team refers to cybersecurity professionals who specialize in attacking systems (a practice known as penetration testing, or pen testing) and exposing their vulnerabilities. In contrast, blue team describes cybersecurity professionals who defend systems against attacks.

OpenAI continued, “Beyond joining the network, there are other collaborative opportunities to contribute to AI safety. For example, one option is to create or conduct safety assessments of AI systems and analyze the results.”

Founded in 2015, OpenAI burst into the public eye late last year with the public release of ChatGPT, followed this year by the more advanced GPT-4, taking the tech world by storm and bringing generative AI into the mainstream.

In July, OpenAI joined Google, Microsoft and others in pledging to work on developing safe and reliable AI tools.

While generative AI tools like ChatGPT have revolutionized the way people create content and consume information, AI chatbots are not without controversy, sparking accusations of bias, racism, fabricated information (so-called hallucinations), and a lack of transparency into how and where user data is stored.

Privacy concerns have led several countries, including Italy, Russia, China, North Korea, Cuba, Iran, and Syria, to ban the use of ChatGPT within their borders. In response, OpenAI updated ChatGPT with a feature that lets users delete their chat history to protect their privacy.

The red team program is OpenAI’s latest effort to attract top security professionals to help evaluate its technology. In June, OpenAI pledged $1 million toward cybersecurity measures and initiatives that use artificial intelligence.

While the company says researchers are not restricted from publishing their findings or pursuing other opportunities, OpenAI notes that participants should be aware that involvement in red teaming and other projects is often subject to nondisclosure agreements (NDAs) or “must remain confidential indefinitely.”

OpenAI concluded, “We encourage creativity and experimentation when evaluating AI systems, and once completed, we welcome you to contribute your evaluations to the open source Evals repository for use by the broader AI community.”
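For readers curious about what such a contribution looks like, the Evals repository registers each evaluation as a small configuration entry plus a JSONL file of graded samples. The sketch below is illustrative rather than an official OpenAI template, assuming the repository’s basic exact-match eval format; the eval name, file paths, and sample prompts are hypothetical.

```python
# Sketch: generate a samples.jsonl for a hypothetical eval contributed to openai/evals.
# Assumes the repo's basic exact-match format, where each line contains an "input"
# chat transcript and an "ideal" completion the model's answer is compared against.
import json

samples = [
    {
        "input": [
            {"role": "system", "content": "Answer with a single word."},
            {"role": "user", "content": "What is the capital of France?"},
        ],
        "ideal": "Paris",
    },
    {
        "input": [
            {"role": "system", "content": "Answer with a single word."},
            {"role": "user", "content": "What is the chemical symbol for gold?"},
        ],
        "ideal": "Au",
    },
]

# Evals expects one JSON object per line (JSONL), typically placed under
# evals/registry/data/<eval_name>/samples.jsonl in a fork of the repository.
with open("samples.jsonl", "w") as f:
    for sample in samples:
        f.write(json.dumps(sample) + "\n")

# A matching registry entry (evals/registry/evals/<eval_name>.yaml) would then
# point an exact-match eval class at this file; the eval can afterward be run
# locally with the repo's oaieval command-line tool.
```

Contributed evaluations of this kind are what OpenAI is inviting red team members and outside researchers to share back with the broader AI community.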
