UK Technology Firms and Child Protection Officials to Examine AI's Capability to Generate Exploitation Images
Tech firms and child safety organizations will receive authority to evaluate whether artificial intelligence tools can produce child exploitation material under new British legislation.
Substantial Increase in AI-Generated Illegal Content
The declaration coincided with revelations from a protection monitoring body showing that cases of AI-generated child sexual abuse material have increased dramatically in the last twelve months, rising from 199 in 2024 to 426 in 2025.
Updated Regulatory Framework
Under the changes, the law will allow approved AI companies and child protection organizations to inspect AI models – the foundational systems underpinning conversational and image-generation AI tools – and ensure they have sufficient safeguards to stop them from producing depictions of child exploitation.
The change is "ultimately about preventing abuse before it occurs," stated Kanishka Narayan, adding: "Experts, under rigorous protocols, can now detect the danger in AI models early."
Tackling Regulatory Obstacles
The amendments address the fact that producing and possessing CSAM is illegal, which has meant AI developers and other parties could not create such images as part of an evaluation regime. Until now, authorities had to wait until AI-generated CSAM was published online before acting on it.
This legislation is aimed at preventing that problem by helping to halt the production of those images at source.
Legal Structure
The government is introducing the changes as amendments to criminal justice legislation, which will also establish a prohibition on possessing, producing or distributing AI systems designed to create child sexual abuse material.
Real-World Impact
Recently, the official visited the London base of a children's helpline and listened to a mock-up call to counsellors featuring a report of AI-based exploitation. The call portrayed an adolescent seeking help after facing extortion using an explicit deepfake of themselves, constructed using AI.
"When I learn about children facing blackmail online, it causes extreme frustration in me and justified anger amongst parents," he said.
Alarming Data
A prominent internet monitoring foundation reported that cases of AI-generated exploitation material – such as webpages that may contain multiple images – had more than doubled so far this year.
Instances of the most severe category of content – the most serious form of exploitation – increased from 2,621 visual files to 3,086.
- Girls were overwhelmingly targeted, accounting for 94% of illegal AI depictions in 2025
- Portrayals of infants to two-year-olds rose from five in 2024 to 92 in 2025
Sector Reaction
The legislative amendment could "constitute a crucial step to ensure AI products are safe before they are launched," stated the head of the internet monitoring foundation.
"Artificial intelligence systems have made it so victims can be victimised all over again with just a few clicks, providing offenders the ability to make possibly endless amounts of advanced, photorealistic child sexual abuse material," she continued. "Content which additionally exploits victims' trauma, and renders young people, especially girls, less safe online and offline."
Support Session Information
Childline also published details of support sessions where AI has been referenced. AI-related risks mentioned in the conversations comprise:
- Employing AI to evaluate body size, physique and looks
- AI assistants dissuading young people from consulting trusted adults about harm
- Being bullied online with AI-generated content
- Online blackmail using AI-faked pictures
Between April and September this year, Childline delivered 367 support sessions where AI, conversational AI and associated topics were discussed, significantly more than in the equivalent timeframe last year.
Half of the references to AI in the 2025 sessions were connected with mental health and wellness, including using chatbots for support and AI therapy apps.