WeChat Cracks Down on AI-Generated Celebrity Impersonations for Fraudulent Marketing

Chinese social media platform WeChat has taken decisive action to address the growing issue of AI-generated celebrity impersonations being used for malicious marketing practices.

In an announcement on 15 December 2024 via its official account "WeChat Coral Security", the platform revealed that it had closed 209 accounts involved in unethical AI-based marketing tactics aimed at manipulating consumer trust and generating fraudulent traffic.

The platform said it had swiftly addressed a series of violations in which AI was used to impersonate celebrities for improper promotional purposes.

The platform stated that it had received multiple reports from users and conducted regular checks, enabling the rapid identification and removal of these deceptive activities.

WeChat said in the statement:

"To protect users' rights and prevent consumer fraud, we have acted decisively to combat the misuse of AI technology in fraudulent marketing."

AI Technology Fuels Deceptive Practices

The use of AI technology to mimic well-known public figures has gained traction, especially for commercial purposes.

These AI-driven tools can create convincing fake videos and voice recordings by cloning famous personalities’ likenesses, leading to widespread misrepresentation online.

Alongside the account closures, WeChat removed 532 pieces of harmful content as part of its wider initiative against these fraudulent marketing activities.

The company said it will continue to focus on curbing such deceptive practices as part of its commitment to consumer protection.

A Growing Trend of Fraudulent AI Impersonation

The increasing sophistication of AI-generated content has raised concerns, as it allows for the manipulation of consumers through fabricated endorsements.

High-profile cases of AI impersonations have already surfaced, including instances where celebrity voices and images were used to promote products without their consent.

One such case involved Hong Kong actor Raymond Wong, who publicly denounced the fraudulent use of his image and voice in an AI-generated advertisement.

Wong said in a statement posted on social media:

“This behaviour is pure fraud. It aims to mislead the public and profit from it.”

This situation exposes the dangers of AI-powered impersonations, where trust is manipulated to deceive consumers into buying products through fake endorsements.

Legal and Ethical Concerns Surrounding AI-Fueled Fraud

Experts warn that AI impersonations could infringe individuals' rights.

Lawyer Xu Wei from Beijing’s Jingdu Law Firm pointed out that using AI to alter someone’s likeness for commercial purposes without consent violates their portrait rights.

If the products being promoted through these AI-created endorsements are exaggerated or overpriced, it could even constitute fraud, as consumers may be misled into making purchases based on these false endorsements.

Xu also highlighted the potential for reputation damage, noting that malicious uses of AI, such as creating offensive content or putting someone’s face in an inappropriate context, could lead to violations of a person’s personal rights.

Platform Responsibility and Future Measures

The role of platform providers like WeChat is also being scrutinised.

Platforms are urged to take responsibility for the content hosted on their sites, ensuring that AI-generated videos are clearly labelled to avoid user confusion.

Chinese regulations on AI-generated content require service providers to clearly mark such content with visible indicators at key points, such as the start, end, or other prominent areas, across formats including text, audio, images, and video.

Major platforms have already begun adding disclaimers, such as “AI-generated content” or “suspected AI creation,” to help viewers differentiate between real and synthetic media.

However, in cases where platforms fail to appropriately label or remove misleading content, they could be held accountable for facilitating the spread of fraudulent material.

WeChat has stated that it will intensify its efforts to prevent such content from reaching users, further tightening its regulatory practices.

Stricter Regulation on AI Synthesis Services

According to recent guidelines from the Chinese authorities, AI services that generate deepfake content must clearly label output that could cause confusion.

WeChat is committed to further strengthening its governance, ensuring that both users and service providers remain accountable for the content generated using AI technologies.

The public’s role in helping identify and report fraudulent AI content is also pivotal in maintaining a safe and trustworthy online environment.

With AI tools becoming more widely accessible, the onus is on both platform providers and users to prevent exploitation and deception in the digital world.