AI-Generated Images of Children: A Disturbing Trend in Online Exploitation

Since these powerful image-generating systems became publicly available, researchers have warned that they could be misused to create illegal images.

In May, Home Secretary Suella Braverman and U.S. Secretary of Homeland Security Alejandro Mayorkas issued a joint statement pledging to address the “alarming rise in despicable AI-generated images of children being sexually exploited by pedophiles.”

Pedophiles are using artificial intelligence (AI) to create images of celebrities as children.

According to the Internet Watch Foundation (IWF), predators are sharing images of a well-known singer reimagined as a child.

Hundreds of images of real child sexual abuse victims are also being created using custom-made image generators.

The charity also says that on a dark-web forum, images of child actors are being manipulated into sexual imagery.

The data comes from the IWF’s latest report on this growing problem, which seeks to raise awareness of the dangers of pedophiles using artificial intelligence systems capable of creating images from simple text instructions.


The IWF report details how researchers spent a month logging AI images on a single darknet child abuse website and found nearly 3,000 synthetic images that would be illegal under UK law.

Analysts say there is a new trend of predators taking individual photos of known child abuse victims and using them to generate many more images depicting different abuse scenarios.

One folder they found contained 501 images of a real-world victim who was between the ages of 9 and 10 when she was sexually abused. In the folder, the predators also shared a fine-tuned AI model file so that others could generate more images of her.

The IWF says that some of the images, including those of celebrities as children, are extremely realistic and would be indistinguishable from real photographs to untrained eyes.

Analysts saw images of singers and movie stars, mostly women, who had been de-aged by the imaging software to make them look like children.

The report did not identify the celebrities who had been targeted in the images.

The charity said it was sharing the research so that the issue could be put on the agenda of the UK government’s Artificial Intelligence Summit at Bletchley Park next week.

In one month, the IWF investigated 11,108 AI images that had been shared on a dark web child abuse forum.

Of these, 2,978 were confirmed to be in breach of UK law, i.e. depicting child sexual abuse.

About one in five of these images (564) were classified as Category A, the most serious type of image.

Nearly half (1,372) of these images depicted children of primary school age (between 7 and 10 years old).

In addition, 143 images showed children between the ages of three and six, while two images showed infants (under the age of two).

In June, the IWF warned that predators were beginning to explore the use of AI to make depraved images of children; now, it says, those fears have become reality.

“Our worst nightmares have come true,” said Susie Hargreaves, chief executive of the IWF.

“Earlier this year, we warned that AI images could soon become indistinguishable from real images of children being sexually abused, and that we could start to see the proliferation of these images in much greater numbers. We are past that point.”

The IWF report reiterates the real-world harm of AI images. Even when no real child is directly harmed in creating the content, the images normalize predatory behavior and can waste law enforcement resources on investigating children who do not exist.

In some cases, new forms of crime are also being explored, posing new complexities for law enforcement.

For example, the IWF found hundreds of images of two girls whose non-nude photos from a modeling agency photo shoot had been manipulated to place them in Category A sexual abuse scenes. In reality, they are now victims of sexual abuse.