A bipartisan group of senators introduced a new bill to tackle AI deepfakes. The COPIED Act aims to protect artists’ and journalists’ copyrights and stop unauthorized AI model training.

US Senator Maria Cantwell introduced the Content Origin Protection and Integrity from Edited and Deepfaked Media (COPIED) Act on Thursday. The Washington senator presented the bill on behalf of a bipartisan group of fellow US senators.


Senators Martin Heinrich and Marsha Blackburn were among the bill's co-sponsors. The Act would help regulate and detect AI-generated content and curb deepfakes used for disinformation.

COPIED Act makes watermarking mandatory

The COPIED Act would make it mandatory to watermark AI-generated content so it can be easily distinguished from human-generated content. It would also direct developers of AI tools to give creators a way to attach provenance information to their content that cannot be removed later.

The Act also directs the National Institute of Standards and Technology (NIST) to develop guidelines and standard procedures for identifying the origin of content and marking synthetic content. NIST would also be responsible for creating security measures to prevent content tampering.

According to the bill, AI-generated content may not be used to train AI models, which would help protect against AI cannibalism (a process in which AI trains on AI output and, as a result, produces subpar content). Cantwell said in a statement that the Act “will provide much-needed transparency around AI-generated content.”

She added that the Act will also allow creators, including musicians, artists, and journalists, to keep control of their content through origin records and watermarks, which she considers essential.

FTC will enforce the Act

The Act also prohibits the use of content for AI training without the consent of the original creator, giving creators the right to grant permission and receive compensation. According to the bill, the US Federal Trade Commission (FTC) and state attorneys general would be responsible for enforcing the Act, since they are charged with protecting consumers from scams and AI can be used to facilitate scams.

Though such lawsuits are already in court, the Act also gives creators the right to sue AI companies that use their content without authorization or tamper with its origin markers. In April, MediaNews Group, the owner of the New York Daily News, sued OpenAI and Microsoft for illegally using its journalists’ work to train generative AI models.


Senator Cantwell also chairs the Senate Commerce Committee, and her sponsorship distinguishes the COPIED Act from other AI bills recently introduced in the Senate. Rights groups across the country, including the Recording Industry Association of America (RIAA) and the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA), have applauded the new bill.

Duncan Crabtree-Ireland, SAG-AFTRA’s chief negotiator, said the legislation’s provenance provisions will help identify misuse of the technology. He said a transparent, accountable process for AI content creation is essential to protecting people’s rights.

RIAA Chairman Mitch Glazier said it is important to protect artists’ legacies because AI platforms copy content from the internet at scale and generate deepfakes. The COPIED Act is the latest in a series of bills introduced in the US Senate to regulate the technology.