British Tech Companies and Child Protection Agencies to Examine AI's Capability to Generate Abuse Content
Technology companies and child safety agencies will be granted permission to evaluate whether artificial intelligence systems can generate child exploitation material under recently introduced British legislation.
Substantial Rise in AI-Generated Illegal Material
The announcement came as a protection monitoring body revealed that cases of AI-generated child sexual abuse material have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.
New Legal Framework
Under the amendments, the authorities will allow designated AI developers and child protection organisations to examine AI models – the foundational systems behind conversational AI and image generators – to ensure they have adequate safeguards preventing them from creating images of child sexual abuse.
"Ultimately about stopping exploitation before it happens," declared the minister for AI and online safety, noting: "Specialists, under rigorous protocols, can now identify the risk in AI systems early."
Tackling Regulatory Obstacles
The changes address a legal obstacle: because producing and possessing CSAM is against the law, AI developers and other parties could not generate such content even as part of a testing regime. Previously, authorities had to wait until AI-generated CSAM was published online before acting on it.
This legislation aims to avert that problem by making it possible to stop the creation of such material at source.
Legislative Structure
The authorities are introducing the changes as modifications to criminal justice legislation, which also implements a ban on possessing, creating or sharing AI systems designed to generate exploitative content.
Real-World Consequences
The minister recently visited the London headquarters of Childline and listened to a mock-up call to advisors involving a report of AI-based abuse. The call portrayed an adolescent requesting help after being blackmailed with a sexualised AI-generated image of themselves.
"When I learn about children experiencing blackmail online, it is a source of extreme frustration in me and justified concern amongst parents," he stated.
Concerning Data
A prominent online safety foundation reported that cases of AI-generated exploitation content – recorded as webpages, each of which may include multiple files – have risen significantly so far this year.
- Category A material – the most serious form of exploitation – rose from 2,621 visual files to 3,086
- Female children were predominantly victimized, making up 94% of prohibited AI images in 2025
- Portrayals of newborns to toddlers rose from five in 2024 to 92 in 2025
Industry Reaction
The legislative amendment could "represent a crucial step to ensure AI tools are secure before they are released," stated the chief executive of the online safety organization.
"AI tools have made it so victims can be targeted all over again with just a few clicks, providing criminals the capability to create potentially endless quantities of advanced, photorealistic exploitative content," she added. "Content which further exploits survivors' suffering, and makes children, particularly girls, more vulnerable on and off line."
Counseling Session Data
The children's helpline also published details of support sessions in which AI was mentioned. AI-related harms discussed in the conversations include:
- Using AI to rate weight, body and appearance
- AI assistants dissuading children from talking to trusted adults about harm
- Facing harassment online with AI-generated material
- Digital extortion using AI-manipulated pictures
Between April and September this year, Childline conducted 367 support sessions in which AI, chatbots and associated terms were mentioned – significantly more than in the equivalent period last year.
Half of the mentions of AI in the 2025 interactions related to mental health and wellbeing, including the use of AI assistants for support and AI therapy apps.