British Technology Companies and Child Safety Officials to Test AI's Capability to Generate Abuse Images

Technology companies and child protection agencies will be granted authority to assess whether artificial intelligence systems can generate child exploitation images under new UK laws.

Substantial Rise in AI-Generated Harmful Material

The declaration coincided with revelations from a protection monitoring body showing that reports of AI-generated child sexual abuse material have more than doubled in the past year, growing from 199 in 2024 to 426 in 2025.

New Legal Structure

Under the amendments, the authorities will permit designated AI developers and child protection organizations to examine AI systems – the underlying technology for chatbots and image generators – and ensure they have adequate safeguards to prevent them from producing images of child exploitation.

The change is "ultimately about stopping abuse before it happens," said the minister for AI and online safety, adding: "Specialists, under strict conditions, can now detect the danger in AI models promptly."

Addressing Regulatory Challenges

The amendments address a legal obstacle: because it is illegal to produce or possess CSAM, AI developers and others could not generate such images even as part of a testing regime. Previously, officials had to wait until AI-generated CSAM was published online before they could act.

This law is designed to avert that issue by enabling experts to halt the creation of those images at their origin.

Legal Structure

The amendments are being introduced by the authorities as revisions to the criminal justice legislation, which is also implementing a prohibition on owning, producing or distributing AI systems designed to create exploitative content.

Practical Impact

Recently, the minister toured the London headquarters of Childline and listened to a simulated call to counsellors featuring a report of AI-based exploitation. The call depicted a teenager seeking help after being blackmailed with an explicit AI-generated image of himself.

"When I hear about children facing extortion online, it fills me with anger, and it is a justified source of concern amongst families," he said.

Concerning Statistics

A leading internet monitoring organization reported that cases of AI-generated abuse content – such as online pages that may include multiple images – had significantly increased so far this year.

Cases of the most severe content – the gravest form of exploitation – increased from 2,621 visual files to 3,086.

  • Female children were predominantly targeted, making up 94% of illegal AI depictions in 2025
  • Depictions of infants to two-year-olds rose from five in 2024 to 92 in 2025

Industry Reaction

The law change could "constitute a crucial step to guarantee AI tools are secure before they are released," commented the chief executive of the online safety foundation.

"AI tools have made it possible for victims to be victimised all over again with just a few simple actions, giving offenders the capability to make potentially limitless amounts of sophisticated, photorealistic child sexual abuse material," she continued. "Content which further exploits survivors' suffering, and renders children, especially female children, more vulnerable both online and offline."

Support Session Information

The children's helpline also released details of counselling sessions where AI has been mentioned. AI-related harms mentioned in the sessions include:

  • Using AI to evaluate weight, body and looks
  • Chatbots discouraging children from talking to trusted guardians about abuse
  • Facing harassment online with AI-generated content
  • Online extortion using AI-faked images

Between April and September this year, the helpline delivered 367 counselling sessions in which AI, chatbots and related topics were discussed, significantly more than in the same period last year.

Fifty percent of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapeutic applications.

Jacqueline Sandoval

A passionate sports journalist with over a decade of experience covering local athletics and community events in the Padua region.