With the rise of generative AI, it is increasingly difficult to verify the authenticity of digital content on the Internet. Google recently announced that, in the coming months, it will begin indicating whether images were generated or edited by AI on platforms such as Google Search.

Google adopts C2PA to enhance digital content transparency

The feature adopts the C2PA (Coalition for Content Provenance and Authenticity) technical standard to help users more clearly identify the provenance of digital content.

The C2PA coalition brings together companies including Google, Amazon, Microsoft, OpenAI, and Adobe, and is dedicated to advancing Content Credentials technology. An image's provenance will be identified in the "About this image" panel in features such as Google Search, Google Lens, and Circle to Search, and the labeling may later be extended to platforms such as YouTube.

Google also plans to use C2PA to help YouTube viewers confirm which content was actually captured with a camera, though this depends on camera manufacturers supporting the C2PA standard. Currently, only Leica and Sony cameras do.

Challenges of C2PA Technology

Although C2PA can provide authenticity guarantees for digital content, its rollout and adoption still face many challenges.

Currently, only images that contain C2PA metadata will be labeled as AI-generated or AI-edited, but that metadata can be stripped out, or corrupted in transit to the point of being unreadable, leaving Google nothing to recognize. The approach also requires AI vendors to join the C2PA coalition: images from the Flux generation tool used by xAI's Grok chatbot, for example, carry no such metadata because Flux has not yet adopted the C2PA standard.
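To make the detection gap concrete, here is a minimal sketch (not Google's actual pipeline) of how a verifier might check whether a JPEG carries any C2PA metadata at all. The C2PA specification embeds manifests in JPEG files as JUMBF boxes inside APP11 marker segments, so once those segments are stripped or re-encoded away, there is simply nothing left to read. The file path and the simple substring check for the JUMBF box type are illustrative assumptions, not a full manifest parser.

```python
import struct
import sys

def has_c2pa_segment(path: str) -> bool:
    """Heuristically scan a JPEG's header segments for an APP11 (0xFFEB)
    segment containing a JUMBF box, the container C2PA uses for manifests."""
    with open(path, "rb") as f:
        if f.read(2) != b"\xff\xd8":            # SOI marker: not a JPEG at all
            return False
        while True:
            marker = f.read(2)
            if len(marker) < 2 or marker[0] != 0xFF:
                return False                     # truncated or malformed stream
            if marker[1] == 0xD9:                # EOI: reached end of image
                return False
            if marker[1] == 0x01 or 0xD0 <= marker[1] <= 0xD7:
                continue                         # standalone markers carry no length
            size_bytes = f.read(2)
            if len(size_bytes) < 2:
                return False
            (size,) = struct.unpack(">H", size_bytes)
            if size < 2:
                return False                     # invalid segment length
            payload = f.read(size - 2)
            if marker[1] == 0xEB and b"jumb" in payload:
                return True                      # JUMBF superbox found in APP11
            if marker[1] == 0xDA:                # SOS: entropy-coded data follows;
                return False                     # metadata segments live before it

if __name__ == "__main__":
    verdict = has_c2pa_segment(sys.argv[1])
    print("C2PA metadata present" if verdict else "no C2PA metadata found")
```

Run against an AI-generated image that has been re-saved, screenshotted, or passed through a tool that discards APP11 segments, a check like this will typically report no metadata, which is exactly the blind spot described above.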

In response, Google says that, through the C2PA Steering Committee, it has helped upgrade the Content Credentials technical standard. The new version (2.1) improves tamper resistance, but cross-platform adoption still has hurdles to clear.

Deepfakes keep emerging, and AI fraud incidents have surged 245% in the past year

In recent years, crimes enabled by deepfake technology have kept surfacing. In South Korea, for example, the "Nth Room 2.0" scandal reported in August this year saw many women victimized by deepfake pornographic images. Scams involving AI-generated content are also surging.

According to Deloitte, fraud cases involving AI-generated content increased by 245% from 2023 to 2024. Deloitte further predicts that economic losses tied to deepfake technology will skyrocket from US$12.3 billion (approximately NT$393 billion) in 2023 to US$40 billion (approximately NT$1.28 trillion) in 2027, underscoring the urgency of strengthening technology for verifying the provenance of digital content.


  • This article is reprinted with permission from: "Digital Age"