Generative AI in Political Campaigns

In the latest tech news, Microsoft and Google have begun training political campaigns to use generative AI tools such as their Copilot and Gemini chatbots. Since early 2024, dozens of political groups have participated in these sessions, learning to use AI to streamline tasks like writing and editing fundraising emails and text messages.

These training programs, which have drawn more than 600 participants in the US alone, aim to help campaigns save time and cut costs, treating them much like any other small business.

Mitigating Risks and Promoting Authentication

While these AI tools offer significant advantages, they also pose risks. Microsoft and Google have integrated lessons on content authentication into their training sessions to mitigate the spread of AI-generated misinformation.

Techniques like Microsoft's "content credentials," which attach signed provenance metadata to media, and Google's SynthID, which embeds an imperceptible watermark in AI-generated content, aim to make the origin of such material verifiable. These measures are part of a broader commitment by major tech companies to prevent their AI tools from contributing to electoral disruption.
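To make the idea concrete, here is a minimal, heuristic sketch of what checking for content credentials can look like in practice. It scans a JPEG for the APP11 segments where C2PA-style provenance manifests are typically embedded; the file name and the detection logic are illustrative assumptions, and real verification would require cryptographically validating the manifest with a proper C2PA SDK rather than a byte-level scan like this.

```python
# Heuristic check for an embedded C2PA ("content credentials") manifest
# in a JPEG. This only detects that provenance metadata appears to be
# present; it does NOT validate the signature or the claims inside it.
import sys


def has_c2pa_manifest(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":  # not a JPEG (missing SOI marker)
        return False
    pos = 2
    while pos + 4 <= len(data) and data[pos] == 0xFF:
        marker = data[pos + 1]
        if marker == 0xDA:  # start-of-scan: metadata segments are behind us
            break
        length = int.from_bytes(data[pos + 2:pos + 4], "big")
        segment = data[pos + 4:pos + 2 + length]
        # C2PA manifests travel in APP11 (0xEB) segments as JUMBF boxes
        # labeled "c2pa".
        if marker == 0xEB and b"c2pa" in segment:
            return True
        pos += 2 + length
    return False


if __name__ == "__main__":
    image = sys.argv[1] if len(sys.argv) > 1 else "example.jpg"  # hypothetical file
    print("Provenance manifest found" if has_c2pa_manifest(image) else "No manifest detected")
```

Even a positive result here only means the metadata exists; stripping or re-encoding the file can remove it, which is one reason the article notes below that none of these authentication methods are foolproof.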

Challenges of AI-Generated Misinformation

Despite these efforts, challenges remain. The Biden campaign recently faced a "cheap fake" scandal involving doctored clips that spread misinformation about President Biden. Such incidents underscore how easily manipulated media, whether AI-generated or conventionally edited, can be weaponized in political contexts.

While tech companies have pledged to take "reasonable precautions" to prevent misuse, none of the current authentication methods are foolproof. Additionally, both Copilot and Gemini have struggled with basic queries, such as accurately identifying the winner of the 2020 presidential election, raising concerns about their reliability.

Future of AI in Elections

Six months before the 2024 election, big tech is providing both the tools and the safeguards for AI in political campaigns. However, ensuring the ethical use of AI and preventing the spread of misinformation may require government intervention to standardize authentication technologies.

Until then, the responsibility falls on the AI industry to keep its tools from creating harmful content, and to catch such content when it appears.