British Tech Companies and Child Safety Officials to Test AI's Capability to Create Abuse Content
Tech firms and child protection organizations will receive permission to evaluate whether artificial intelligence tools can produce child abuse material under recently introduced UK laws.
Substantial Increase in AI-Generated Illegal Content
The declaration coincided with revelations from a safety monitoring body showing that cases of AI-generated child sexual abuse material have increased dramatically in the past year, growing from 199 in 2024 to 426 in 2025.
New Regulatory Framework
Under the amendments, authorities will permit approved AI developers and child safety groups to examine AI systems – the underlying technology for chatbots and image generators – and verify that those systems have adequate safeguards to prevent them from creating images of child exploitation.
"This is ultimately about stopping abuse before it happens," stated the minister for AI and online safety, adding: "Specialists, under rigorous conditions, can now detect the danger in AI models early."
Addressing Regulatory Challenges
The changes have been implemented because it is against the law to create and possess CSAM, meaning that AI creators and other parties cannot generate such content as part of an evaluation process. Previously, authorities had to wait until AI-generated CSAM was published online before taking action against it.
This legislation is aimed at preventing that problem by helping to halt the production of those images at their origin.
Legislative Framework
The government is introducing the amendments as revisions to criminal justice legislation, which will also prohibit owning, creating or sharing AI models designed to generate exploitative content.
Real-World Consequences
Recently, the official visited the London base of Childline and listened to a simulated call to advisors involving an account of AI-based abuse. The call portrayed an adolescent requesting help after being blackmailed with a sexualised AI-generated image of themselves.
"When I hear about children experiencing extortion online, it causes intense anger in me and justified anger amongst families," he said.
Alarming Data
A leading internet monitoring organization stated that instances of AI-generated abuse content – such as webpages that may contain multiple files – had more than doubled so far this year.
Cases of category A material – the most serious form of abuse – rose from 2,621 images or videos to 3,086.
- Girls were predominantly targeted, accounting for 94% of illegal AI images in 2025
- Depictions of newborns to two-year-olds increased from five in 2024 to 92 in 2025
Sector Reaction
The law change could "constitute a vital step to guarantee AI products are secure before they are released," commented the head of the internet monitoring foundation.
"Artificial intelligence systems have made it so survivors can be victimised repeatedly with just a few simple actions, giving criminals the capability to create potentially limitless quantities of sophisticated, photorealistic child sexual abuse material," she continued. "Content which further exploits victims' suffering, and makes children, especially female children, more vulnerable online and offline."
Support Session Information
Childline also released details of support sessions in which AI was mentioned. AI-related harms raised in the conversations include:
- Using AI to evaluate weight, body and looks
- AI assistants dissuading young people from consulting trusted adults about harm
- Being bullied online with AI-generated content
- Online extortion using AI-faked images
Between April and September this year, Childline delivered 367 counselling sessions in which AI, chatbots and associated terms were mentioned, four times as many as in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.