According to Cointelegraph, Google is updating its political content policy to require all verified election advertisers to disclose the use of artificial intelligence (AI) in campaign content. The tech giant announced on September 6 that these disclosures are required for synthetic content that inauthentically depicts real or realistic-looking people or events. The notices must be clear and conspicuous, placed where users are likely to see them.
Ads containing synthetic content that has been altered or generated in ways inconsequential to the claims made in the ad will be exempt from these disclosure requirements. Google cited examples of such exempt AI-assisted edits, including resizing, cropping, color or defect correction, and other peripheral edits that do not create realistic depictions of actual events.
The updated policy will apply to image, video, and audio content and will take effect in mid-November 2023, roughly a year before the United States presidential election in November 2024. The issue of disclosures for AI-generated content has gained prominence as widely accessible AI tools like OpenAI's ChatGPT have made such content easier to create and circulate.
As AI continues to permeate various sectors, Google and other major tech companies have increased their focus on AI tools and services. In a September 5 memo, Google CEO Sundar Pichai said he had envisioned pivoting Google toward being an "AI-first company" since taking the role in 2015. On August 17, Google upgraded its search engine with AI-powered enhancements to streamline search functions. The company also partnered with OpenAI and Microsoft to create the Frontier Model Forum, an industry body intended to help self-regulate AI development.
Google's interest in developing AI policies extends to its other platforms as well, such as YouTube, which recently released its "principles" for working with the music industry on AI-related technology.