The sheer volume of sexually explicit AI deepfakes of children is making it hard for law enforcement in the U.S. to separate real-life kids in distress from fake ones, prosecutors and safety groups warned. They say the images are so lifelike that it has become difficult to know whether real children have been abused, the Guardian reports.

Kristina Korobov, a senior attorney at Zero Abuse Project, a Minnesota-based child safety non-profit, says the group has begun noticing reports of AI-generated images that use the faces of real children who were not sexually abused. “But now their face is on a child that was abused,” she says.

DoJ prosecutor says ‘we’re drowning’ in AI content

“Sometimes, we recognize the bedding or background in a video or image, the perpetrator, or the series it comes from, but now there is another child’s face put on to it,” Korobov said, per the Guardian.

AI models can churn out thousands of pictures and videos a day that are nearly indistinguishable from real-life photographs. According to the report, this has made it easier for predators to share exploitative content on the dark web and the mainstream internet.

For authorities, it has become a nightmare. “From a law enforcement perspective, crimes against children are one of the more resource-strapped areas, and there is going to be an explosion of content from AI,” a U.S. Department of Justice (DoJ) prosecutor said, adding:

“We’re just drowning in this stuff already.”

The U.S. National Center for Missing & Exploited Children (NCMEC) says reports of online child abuse rose by 12% in 2023, to 36.2 million. While most of the reports concerned the circulation of real-life photos and videos of sexually abused children, the center also received 4,700 reports of AI-generated deepfake images or videos involving the sexual exploitation of children.

NCMEC said some of the most common ways predators use AI include editing existing online images to make them explicit. Criminals also use generative AI chatbots that give graphic instructions on how to recruit children for abuse.

Current laws don’t go far enough on AI deepfakes

There appears to be a gap in what can be considered a crime. In many jurisdictions, possessing images that sexually objectify children is illegal, but laws have not yet advanced to cover AI-generated images.

“When charging cases in the federal system,” the DoJ prosecutor said, “AI doesn’t change what we can prosecute, but there are many states where you have to be able to prove it’s a real child.” The prosecutor added:

“Quibbling over the legitimacy of images will cause problems at trials. If I was a defense attorney, that’s exactly what I’d argue.”

Even if such laws are enacted, law enforcement will still need technology capable of detecting AI-generated content to catch offenders, and currently there is a wide gap. Child safety groups warned that police departments are likely to be inundated with requests.

Sexually explicit images of children generated with AI began to proliferate online around 2022, coinciding with OpenAI’s release of ChatGPT, according to the Guardian report.

The NCMEC accuses AI companies of not doing enough to prevent the generation and spread of exploitative content. Last year, only five AI firms voluntarily submitted reports to the center; by comparison, 70% of social media platforms did, it said.