British Technology Companies and Child Protection Officials to Test AI's Capability to Generate Exploitation Content
Technology companies and child protection organizations will receive authority to evaluate whether AI systems can produce child abuse material under recently introduced British laws.
Significant Increase in AI-Generated Harmful Material
The announcement came as a safety monitoring body revealed that reports of AI-generated child sexual abuse material have more than doubled in the past year, growing from 199 in 2024 to 426 in 2025.
New Regulatory Structure
Under the changes, designated AI developers and child safety organizations will be permitted to examine AI models – the underlying technology behind conversational AI and image generators – to ensure they have adequate safeguards to stop them from creating images of child sexual abuse.
The measures are "ultimately about preventing abuse before it occurs," stated Kanishka Narayan, adding: "Specialists, under strict protocols, can now identify the risk in AI models early."
Tackling Regulatory Challenges
The changes were needed because producing and possessing CSAM is illegal, meaning that AI developers and other parties could not create such content even as part of a testing process. Until now, authorities could only act after AI-generated CSAM had been published online.
The new law aims to avert that problem by helping to halt the creation of such images at the source.
Legislative Structure
The government is introducing the changes as amendments to criminal justice legislation, which also bans possessing, creating or distributing AI systems designed to generate exploitative content.
Real-World Consequences
Recently, the minister toured the London base of Childline and heard a simulated call to advisors featuring a report of AI-based exploitation. The call depicted a teenager seeking help after being extorted with a sexualised AI-generated image of themselves.
"When I hear about children facing extortion online, it provokes extreme anger in me – and rightful anger amongst families," he said.
Alarming Statistics
A prominent online safety foundation reported that cases of AI-generated exploitation content – where a single report can refer to a webpage containing multiple files – had more than doubled so far this year.
Instances of the most severe category of content increased from 2,621 images or videos to 3,086.
- Female children were predominantly targeted, making up 94% of prohibited AI images in 2025
- Depictions of children aged up to two years old rose from five in 2024 to 92 in 2025
Industry Reaction
The legislative amendment could "constitute a crucial step to guarantee AI products are safe before they are released," commented the head of the online safety organization.
"Artificial intelligence systems have made it so victims can be targeted repeatedly with just a few simple actions, giving offenders the capability to create potentially limitless quantities of sophisticated, photorealistic exploitative content," she continued. "Content which compounds victims' trauma and makes children, especially female children, more vulnerable both online and offline."
Support Session Information
The children's helpline also released details of support sessions in which AI was mentioned. AI-related risks discussed in the sessions include:
- Using AI to evaluate weight, body and appearance
- AI assistants discouraging young people from talking to trusted adults about harm
- Being bullied online with AI-generated content
- Digital extortion using AI-manipulated images
Between April and September this year, the helpline conducted 367 counselling sessions where AI, chatbots and related topics were discussed, four times as many as in the equivalent timeframe last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using AI assistants for support and AI therapy apps.