On Wednesday, the U.S. Senate Commerce Committee voted 14-12 to reject a proposal that would have blocked the Federal Communications Commission (FCC) from enforcing new disclosure rules for political advertisements. Under the proposed rules, political ads on broadcast radio and television would have to disclose whether artificial intelligence was used to create their content.
The FCC proposed these rules earlier this month and invited comments on them until mid-October. Senator Maria Cantwell, Chairwoman of the Senate Commerce Committee, said the FCC may not complete the rules before the November presidential election.
Republican senators voice concerns and attempt to block the rules
Republican senators raised concerns about potential abuses of AI in political advertisements, yet sought to bar the FCC from moving ahead with the rules. The committee’s vote, however, leaves the FCC’s proposal in place. The growing concern in Washington is that AI-generated content could mislead voters in the 2024 elections.
FCC Chair Jessica Rosenworcel pointed to the threats posed by AI, including “deep fakes”: manipulated images, video, or audio that depict people doing or saying things they never did. Rosenworcel noted that the FCC has been involved in disclosure requirements since the 1930s and described the proposal as “a major step to guard against AI being used by bad actors to spread chaos and confusion in our elections.”
The proposed rule would require broadcast radio and television stations, cable operators, and satellite TV and radio providers to disclose when AI is used to generate ad content. However, the FCC has no authority over internet or social media advertisements, nor does it regulate streaming services. The agency has already addressed deceptive AI in political robocalls under its existing rules.
More AI-based fraud underscores need for regulation
In January, an AI-generated robocall that imitated President Joe Biden’s voice was used to discourage voting in the New Hampshire Democratic primary. A Democratic political consultant was later charged in connection with the calls.
Additionally, the Federal Election Commission (FEC) has turned its attention to AI-generated content. It is currently reviewing a petition to clarify that existing laws banning fraudulent misstatements by federal candidates also cover intentionally misleading AI content in campaign messages.
Additionally, the Promoting United States Leadership in Standards Act of 2024, introduced by Senators Mark Warner (D-Virginia) and Marsha Blackburn (R-Tennessee), was applauded as it advanced. The bill seeks to restore U.S. leadership in developing international standards for artificial intelligence (AI) and other critical and emerging technologies (CETs).