After its previous tool fell short, OpenAI has unveiled another AI detector, this one focused on images and the rising risk of deepfakes.
OpenAI, a pioneer in generative artificial intelligence, is taking on the challenge of detecting deepfake images amid the growing prevalence of misleading content spread on social media. Mira Murati, the company’s chief technology officer, recently unveiled a new deepfake detector at the Wall Street Journal’s Tech Live conference in Laguna Beach, California.
Murati said OpenAI’s new tool has “99 percent reliability” in determining whether an image was generated using AI.
AI-generated images range from lighthearted creations (like Pope Francis in a puffy Balenciaga coat) to deceptive fakes that could wreak financial havoc. Both the potential and the pitfalls of AI are clear, and as these tools grow more sophisticated, distinguishing authentic from AI-generated content is proving to be a challenge.
While the tool’s release date remains undisclosed, its announcement has generated significant interest, especially given OpenAI’s past efforts.
In January 2023, the company launched a text classifier that it said could distinguish human writing from machine-generated text produced by models like ChatGPT. But that July, OpenAI quietly shut the tool down, noting in an update that its error rate was unacceptably high: the classifier had incorrectly labeled genuine human writing as AI-generated 9% of the time.
If Murati’s claim holds up, it would mark an important moment for the industry, as current methods for detecting AI-generated images are generally not automated. Typically, enthusiasts rely on intuition and focus on the well-known weaknesses of generative AI, such as rendering hands, teeth, and repeating patterns. The distinction between AI-generated and AI-edited images also remains fuzzy, especially when people try to use AI to detect AI.
Not only is OpenAI working to detect harmful AI imagery, it has also put guardrails in place to censor its own models, even beyond what is publicly stated in its content guidelines.
As discovered by Decrypt, OpenAI’s DALL-E tool appears to be configured to modify prompts without notice, to quietly throw errors when asked for certain outputs even when they comply with the published guidelines, and to avoid creating images involving specific named people, artist styles, and ethnicities.
A portion of the DALL-E 3 prompt from ChatGPT. Source: Decrypt
Detecting deepfakes isn’t OpenAI’s business alone. DeepMedia, for example, is developing the same capability, working specifically with government clients.
Big companies like Microsoft and Adobe have also rolled up their sleeves, introducing so-called “AI watermarking” systems. The mechanism, promoted by the Coalition for Content Provenance and Authenticity (C2PA), marks AI-generated content with a distinctive “cr” symbol inside a speech bubble, backed by provenance metadata embedded in the file. The symbol is intended to act as a beacon of transparency, letting users discern the source of the content.
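For illustration, here is a minimal sketch of how such embedded provenance data might be checked, using the open-source c2patool command-line utility from the C2PA ecosystem. The file name and the exact JSON field are assumptions for illustration, not details from the announcements above.

```python
import json
import subprocess

# Run c2patool against the image; it prints the embedded manifest store
# as JSON. Assumes c2patool is installed and on PATH, and that
# "suspect_image.jpg" is a hypothetical local file.
result = subprocess.run(
    ["c2patool", "suspect_image.jpg"],
    capture_output=True,
    text=True,
)

if result.returncode == 0 and result.stdout.strip():
    report = json.loads(result.stdout)
    # "active_manifest" names the most recent signed claim about the file,
    # e.g. which tool generated or edited it (field name is an assumption
    # about c2patool's output format).
    print("Content Credentials found:", report.get("active_manifest"))
else:
    # No manifest found. This proves nothing on its own: the credentials
    # may never have existed, or may have been stripped.
    print("No Content Credentials found.")
```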
Like any technology, however, the system is not foolproof: the metadata that carries the symbol can simply be stripped from a file. As a partial remedy, Adobe has launched a cloud service that can recover the lost metadata and restore the symbol’s presence, though that safeguard, too, is not difficult to circumvent.
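To make the stripping vulnerability concrete, the sketch below shows how simply re-encoding an image with the Pillow library discards embedded metadata; the file names are illustrative.

```python
from PIL import Image

# Decode an image that carries Content Credentials in its metadata.
img = Image.open("credentialed_image.jpg")

# Saving the decoded pixels writes a fresh JPEG container. Pillow does not
# copy over the original metadata segments (EXIF, XMP, or the JUMBF boxes
# that hold a C2PA manifest) unless they are explicitly re-attached, so the
# provenance data behind the "cr" symbol is absent from the copy.
img.save("stripped_copy.jpg", quality=95)
```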
As regulators move toward criminalizing deepfakes, these innovations are not only technological achievements but also social necessities. The recent moves by OpenAI, Microsoft, and Adobe highlight a collective effort to ensure authenticity in the digital age. Yet however much these tools improve, their effectiveness depends on widespread adoption, involving not only tech giants but also content creators, social media platforms, and end users.
As generative AI rapidly advances, detectors still struggle to distinguish the authentic from the fabricated in text, images, and audio. For now, human judgment and vigilance remain our best line of defense against AI misuse, but humans are not infallible. Lasting solutions will require technology leaders, lawmakers, and the public to work together to navigate this complex new frontier.