Microsoft partners with StopNCII to combat deepfake images

As generative AI has become widely accessible, deepfake pornographic images have proliferated, posing a serious threat to personal privacy and safety. Even members of Korean girl groups have been victimized.

To combat non-consensual intimate images, technology giant Microsoft announced that it will partner with the privacy protection platform StopNCII to remove these images from its search engine Bing. A first pilot has already processed more than 260,000 images.

Victims can use StopNCII's tools on a phone, computer, or other device to generate a digital fingerprint of an intimate image, whether real or deepfaked. The fingerprint acts like a unique digital tag: partner platforms such as Bing can compare it against images on their services and, if matching material is found, remove it or block its spread according to their respective policies.
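To make the fingerprinting idea concrete, here is a minimal sketch of how perceptual hashing works in general, using the open-source Python imagehash library. This is not StopNCII's actual implementation; the choice of phash and the match threshold below are illustrative assumptions. The key property is that only the hash, never the image itself, needs to leave the victim's device.

```python
# Minimal sketch of image fingerprinting via perceptual hashing.
# Requires: pip install imagehash pillow
# NOT StopNCII's actual algorithm; hash type and threshold are assumptions.
import imagehash
from PIL import Image

MATCH_THRESHOLD = 8  # assumed max Hamming distance to count as a match

def fingerprint(path: str) -> imagehash.ImageHash:
    """Compute a perceptual hash: visually similar images yield similar
    hashes, but the original image cannot be reconstructed from the hash."""
    return imagehash.phash(Image.open(path))

def is_match(reported: imagehash.ImageHash, candidate: imagehash.ImageHash) -> bool:
    """Compare two fingerprints by Hamming distance (imagehash's `-` operator)."""
    return (reported - candidate) <= MATCH_THRESHOLD

# Example flow: a victim hashes an image locally and shares only the hash;
# a platform then hashes uploads and checks them against reported hashes.
# reported = fingerprint("reported_image.jpg")
# if is_match(reported, fingerprint("uploaded_image.jpg")):
#     pass  # flag for removal under the platform's policy
```

Because the comparison tolerates small differences in the hash, re-encoded or lightly cropped copies of the same image can still be caught, which is what makes this approach practical for takedowns at platform scale.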

According to Engadget, many technology companies already work with StopNCII. Beyond Microsoft, partners include Meta's Facebook, Instagram, and Threads, as well as TikTok, Bumble, Reddit, Snap, Niantic, OnlyFans, Pornhub, and other platforms.

Why isn't Google on the partner list?

As the world's most popular search engine, Google has launched its own tools for reporting and removing such images. However, according to an investigation by Wired, former employees and victims continue to criticize Google for not working with StopNCII.

Since 2020, Google users in South Korea alone have reported a total of 170,000 search results and YouTube links containing such content.

Google's lack of integration with StopNCII may also place an additional burden on victims, forcing them to pursue removals through a more complex and fragmented process.

Governments crack down on deepfake pornography

"TechCrunch" stated that StopNCII's tool is only suitable for adults over the age of 18. However, the problem of AI deepfake pornographic images is very common, and platforms such as "undressing sites" have begun to cause trouble to high school students.

Unfortunately, the United States currently has no federal law specifically addressing accountability for AI deepfake pornography, leaving victims to rely on a patchwork of state and local regulations.

Even so, the U.S. government has begun taking action this year. The U.S. Copyright Office has called for new legislation, and a group of senators introduced the NO FAKES Act in July to protect victims of deepfake images. In August, the City of San Francisco filed a lawsuit against 18 websites and apps for generating non-consensual nude images.

Beyond the United States, South Korea, where deepfake pornography is a serious problem, has also ordered a crackdown on related crimes. According to Yonhap News Agency, police will launch a seven-month operation to track down those who produce and distribute these images, with a particular focus on offenders who exploit children and teenagers.

Source: TechCrunch, Engadget, Microsoft Blog
