GoPlus Security Research: The Double-Edged Sword of AI Face-Swapping Technology

In recent years, with the rapid development of artificial intelligence, a technique known as "AI face-swapping" has entered the public eye. Its origins trace back to 2017, when University of Washington researchers led by Supasorn Suwajanakorn released a project called "Synthesizing Obama." Using deep learning, they modified video clips of Barack Obama so that he appeared to speak the words of a new audio track unrelated to the original footage. At the core of the technique is the combination of a recurrent neural network (RNN) with lip synthesis: the RNN maps audio to mouth shapes, which are then composited onto the target video. The result is so realistic that the audio and mouth movements are almost impossible to distinguish from genuine footage.
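The audio-to-mouth-shape idea described above can be sketched as follows. This is a minimal illustration, not the original system: the paper used a trained LSTM, while here a vanilla RNN with random weights stands in for trained parameters, and all dimensions and names (`AUDIO_DIM`, `MOUTH_POINTS`, etc.) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

AUDIO_DIM = 13      # e.g. 13 MFCC coefficients per audio frame (assumed)
HIDDEN_DIM = 32     # recurrent state size (assumed)
MOUTH_POINTS = 18   # mouth landmarks; the output is an (x, y) pair per point

# Randomly initialised weights stand in for trained parameters.
W_xh = rng.normal(0, 0.1, (HIDDEN_DIM, AUDIO_DIM))
W_hh = rng.normal(0, 0.1, (HIDDEN_DIM, HIDDEN_DIM))
W_hy = rng.normal(0, 0.1, (MOUTH_POINTS * 2, HIDDEN_DIM))

def audio_to_mouth_shapes(audio_features: np.ndarray) -> np.ndarray:
    """Map a (T, AUDIO_DIM) audio sequence to (T, MOUTH_POINTS, 2) landmarks."""
    h = np.zeros(HIDDEN_DIM)
    outputs = []
    for x_t in audio_features:
        # Recurrent update: the hidden state carries context across frames,
        # so each mouth shape can depend on the preceding audio as well.
        h = np.tanh(W_xh @ x_t + W_hh @ h)
        outputs.append((W_hy @ h).reshape(MOUTH_POINTS, 2))
    return np.stack(outputs)

# 100 frames of dummy audio features -> per-frame mouth landmarks, which a
# later rendering stage would composite onto the target face in the video.
audio = rng.normal(size=(100, AUDIO_DIM))
mouth = audio_to_mouth_shapes(audio)
print(mouth.shape)  # (100, 18, 2)
```

The key design point is the recurrence itself: because the hidden state persists across frames, the predicted mouth shape reflects surrounding sounds, not just the current one, which is what makes the synthesized lip motion look natural.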

With the arrival of this technology, related videos and communities have proliferated. Many creators now combine AI face-swapping with elaborate visual effects and super-resolution techniques, producing results that are nearly indistinguishable from real footage. These videos circulate widely on platforms such as YouTube, Instagram, and TikTok; by some estimates, deepfake-related videos have accumulated more than 10 billion views in total, and some top creators have exceeded 1 million views on a single video.

Behind this craze, however, we must also be alert to the potential harm of AI face-swapping technology. DeepFaceLab, the most successful face-swapping software in the open-source community, is a case in point. Developed by Ivan Perov, Daiheng Gao, and others, it has earned more than 66,000 GitHub stars and was ranked among the top ten AI open-source projects of 2020, alongside giants such as TensorFlow and PyTorch. Notably, Daiheng Gao is also an AI consultant for GoPlus, focusing on research into preventing the abuse of this technology.

Although AI face-swapping has brought unprecedented creativity and entertainment, we must also recognize that it can be put to malicious use. For example, by replacing the faces in a video, criminals can fabricate political propaganda, produce pornographic content, or maliciously impersonate and attack others.

The GoPlus team therefore calls on the public to remain vigilant when using AI face-swapping technology and to avoid using it for illegal or malicious purposes.