OpenAI has confirmed that its popular AI tool, ChatGPT, blocked a jaw-dropping 250,000-plus requests to generate images of major 2024 U.S. presidential candidates.

Users tried—again and again—to get ChatGPT to cook up images of President-elect Donald Trump, Vice President Kamala Harris, current President Joe Biden, Minnesota Governor Tim Walz, and Vice President-elect JD Vance.

But OpenAI said a loud “Nope!” to every one of these requests. The refusals are reportedly about keeping ChatGPT from becoming a pawn in a high-stakes game of misinformation.
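OpenAI hasn’t said how these refusals are implemented under the hood. Purely as an illustration, the simplest version of such a guardrail is a name-based filter on incoming image prompts; everything in this sketch (names, function, logic) is hypothetical and not OpenAI’s actual code:

```python
# Hypothetical sketch -- OpenAI has not disclosed its implementation.
# A naive name-based refusal filter for image-generation prompts.

CANDIDATE_NAMES = {
    "donald trump", "kamala harris", "joe biden", "tim walz", "jd vance",
}

def should_block(prompt: str) -> bool:
    """Return True if the prompt mentions a listed candidate by name."""
    lowered = prompt.lower()
    return any(name in lowered for name in CANDIDATE_NAMES)

print(should_block("Generate an image of Donald Trump"))   # True
print(should_block("A landscape with mountains at dusk"))  # False
```

A real system would be far more robust than substring matching (nicknames, misspellings, image-level checks), but the basic shape, screen the request before it ever reaches the image model, is the same.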

With the U.S. election looming, OpenAI wanted ChatGPT to steer clear of election interference. Political deepfakes, AI-generated fake news, and downright lies spread like wildfire online. Clarity, a machine learning company, reports that deepfake content alone has shot up 900% this year.

And U.S. intelligence says some of this content has ties to Russian operatives trying to influence American politics.

OpenAI’s big battle against misinformation

In an October report, OpenAI laid out just how bad things have gotten. The company has been tracking 20 shady operations worldwide, all trying to exploit AI tools to mess with people’s minds online. Some were pumping out AI-generated website articles.

Others had fake social media accounts posting propaganda. But OpenAI’s team claims they managed to shut down these networks before they could go viral.

And yet, that’s not enough to make everyone happy. Some lawmakers, tech watchdogs, and skeptics are all raising red flags about the dangers of letting ChatGPT roam free during election season. AI chatbots might be impressive, sure, but they’re still known to spit out questionable information now and then.

“Voters categorically should not look to AI chatbots for information about voting or the election—there are far too many concerns about accuracy and completeness,” said Alexandra Reeve Givens, CEO of the Center for Democracy & Technology, in a statement last week.

On top of that, New York Attorney General Letitia James released a warning last Friday after her office ran tests on a handful of AI-powered chatbots. Her team threw some election-related questions at them and didn’t like the answers they got. Misinformation galore.

“New Yorkers who rely on chatbots, rather than official government sources, to answer their questions about voting risk being misinformed and could even lose their opportunity to vote due to the inaccurate information,” the attorney general’s office declared.

OpenAI has since added a feature on ChatGPT that urges users seeking election results to turn to trusted news sources like the Associated Press and Reuters. The feature, introduced on November 5, is a gentle nudge to avoid using AI-generated responses for something as important as election data.
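The mechanics of that nudge haven’t been published. Conceptually, it amounts to routing election-results queries to authoritative sources instead of letting the model answer; the sketch below is a hypothetical illustration (the terms, sources list, and `generate_answer` stub are all assumptions, not OpenAI’s API):

```python
# Hypothetical sketch, not OpenAI's actual code: route election-results
# queries to trusted news sources instead of a model-generated answer.

ELECTION_TERMS = ("election result", "who won", "vote count", "electoral college")
TRUSTED_SOURCES = ["Associated Press (apnews.com)", "Reuters (reuters.com)"]

def generate_answer(query: str) -> str:
    # Placeholder for the model's normal generation path.
    return f"Model answer for: {query}"

def respond(query: str) -> str:
    """Redirect election-results queries; answer everything else normally."""
    q = query.lower()
    if any(term in q for term in ELECTION_TERMS):
        return "For election results, please check: " + ", ".join(TRUSTED_SOURCES)
    return generate_answer(query)
```

The design choice here mirrors what the article describes: the safest answer to a high-stakes, time-sensitive question is a pointer to a trusted source, not a generated one.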

AI takes the hot seat amid global elections

This year, elections around the world could impact over 4 billion people across more than 40 countries. With so much on the line, the risk of AI-fueled misinformation spreading across borders is a big deal.

Again, Clarity’s report on the 900% increase in deepfake content is rattling nerves. The fact that some of these deepfake videos are tied to Russia-backed influence campaigns just adds to the urgency.

A study in July from the Center for Democracy & Technology adds another layer of worry. Researchers tested election-related queries on AI chatbots from major companies: Mistral, Google, OpenAI, Anthropic, and Meta. Of the 77 questions asked, more than one-third drew inaccurate or misleading responses. Not great.

A spokesperson for Anthropic, which makes the Claude chatbot, made it clear: “For specific election and voting information, we direct users to authoritative sources as Claude is not trained frequently enough to provide real-time information about specific elections.”

The Trump-Musk duo and Silicon Valley’s new reality

And while OpenAI deals with misinformation, Silicon Valley as a whole is bracing for an overhaul of its relationship with Washington. President-elect Donald Trump’s return to the White House brings big promises—he plans to rip apart many of his predecessor’s tech policies, including the Biden administration’s recent executive order on AI safety.

This order introduced security and privacy guidelines for AI developers, with the goal of setting some basic guardrails. It pushed for AI research funding and aimed to get the National Institute of Standards and Technology more involved in setting AI standards. Trump has called the policy “dangerous” and a barrier to innovation, pledging to replace it with what he calls “AI development rooted in free speech.”

Elon Musk, who threw over $130 million at pro-Trump campaigns and even rallied on behalf of Trump in Pennsylvania, is expected to be one of Trump’s biggest tech allies. With Musk’s influence and his control of X (formerly Twitter), he has a direct line to amplify political messages to millions.

Not to mention, Musk has some skin in the game: Tesla and SpaceX could stand to gain from Trump’s policies, especially if the new administration favors less oversight.

While some tech moguls, like Amazon’s Jeff Bezos, have clashed with Trump in the past, others have found ways to stay in his good graces. Meta’s Mark Zuckerberg, for example, reportedly praised Trump’s reaction to a recent assassination attempt, calling it “badass.”

Meta also rolled back some of the platform’s anti-misinformation measures. And Bezos, owner of The Washington Post, apparently blocked an editorial that would have endorsed Vice President Kamala Harris in the weeks before the election.

The impact of Trump’s second term on Silicon Valley will also hinge on who controls Congress. With Republicans having secured the Senate, the path is now clearer for Trump to push his tech agenda forward and confirm his chosen nominees with less friction.