Technology companies and child protection agencies will be granted authority to assess whether artificial intelligence systems can generate child exploitation images under new UK laws.
The announcement coincided with figures from a child protection watchdog showing that reports of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
Under the amendments, the authorities will permit designated AI developers and child protection organizations to examine AI systems – the underlying technology for chatbots and image generators – to ensure they have adequate safeguards against producing images of child exploitation.
"Ultimately about stopping abuse before it happens," declared the minister for AI and online safety, noting: "Specialists, under strict conditions, can now detect the danger in AI models promptly."
The amendments are being introduced because it is illegal to produce and possess CSAM, meaning that AI creators and others cannot generate such images as part of a testing regime. Previously, officials could not act until AI-generated CSAM had been published online.
The law is designed to avert that problem by allowing the creation of such images to be stopped at source.
The amendments are being introduced as revisions to criminal justice legislation, which will also prohibit possessing, producing or distributing AI systems designed to create exploitative content.
Recently, the minister toured the London headquarters of Childline and listened to a simulated call to counsellors featuring a report of AI-based exploitation. The call depicted a teenager seeking help after being blackmailed with an explicit AI-generated image of himself.
"When I hear about children facing extortion online, it is a source of extreme anger in me and justified concern amongst families," he said.
A leading internet monitoring organization reported that cases of AI-generated abuse content – reported web pages, each of which can include multiple images – had risen significantly so far this year.
Cases involving the most severe category of content – the gravest form of exploitation – rose from 2,621 image and video files to 3,086.
The law change could "constitute a crucial step to guarantee AI tools are secure before they are released," commented the chief executive of the online safety foundation.
"AI tools have enabled so victims can be victimised all over again with just a simple actions, giving offenders the capability to make possibly limitless amounts of sophisticated, photorealistic child sexual abuse material," she continued. "Content which further exploits survivors' suffering, and renders children, especially female children, more vulnerable both online and offline."
The children's helpline also released details of counselling sessions in which AI was mentioned:
Between April and September this year, the helpline delivered 367 counselling sessions where AI, chatbots and related topics were discussed, significantly more than in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.