UK Tech Firms and Child Protection Officials to Examine AI's Ability to Create Abuse Images
Technology companies and child protection agencies will be granted permission to assess whether AI tools can generate child exploitation material under new UK legislation.
Significant Increase in AI-Generated Illegal Material
The declaration coincided with revelations from a safety watchdog showing that reports of AI-generated CSAM have increased dramatically in the past year, rising from 199 in 2024 to 426 in 2025.
Updated Legal Framework
Under the amendments, the government will permit designated AI companies and child protection groups to examine AI models – the underlying systems for chatbots and image generators – and verify they have sufficient protective measures to prevent them from producing depictions of child sexual abuse.
"This is ultimately about stopping abuse before it occurs," declared Kanishka Narayan, adding: "Experts, under strict protocols, can now detect risk in AI models early."
Tackling Regulatory Obstacles
The amendments have been introduced because it is illegal to create or possess CSAM, meaning that AI developers and other parties could not generate such content even as part of a testing regime. Until now, authorities had to wait until AI-generated CSAM was uploaded online before addressing it.
The new law aims to close that gap by helping to halt the production of such material at source.
Legal Framework
The amendments are being added by the government as modifications to the crime and policing bill, which is also implementing a prohibition on possessing, creating or sharing AI models developed to create child sexual abuse material.
Real-World Impact
Recently, the minister visited the London headquarters of a children's helpline, where he listened to a mock call to advisers involving a report of AI-based abuse. The scenario depicted an adolescent seeking help after being blackmailed with an explicit AI-generated image of themselves.
"When I hear about children experiencing blackmail online, it stirs extreme frustration in me and justified anger amongst parents," he said.
Concerning Data
A leading internet monitoring foundation reported that instances of AI-generated exploitation material – such as online pages that may include numerous images – had significantly increased so far this year.
Instances of the most severe category of material – the most serious form of exploitation – increased from 2,621 visual files to 3,086.
- Girls were overwhelmingly victimized, accounting for 94% of illegal AI depictions in 2025
- Portrayals of newborns to toddlers rose from five in 2024 to 92 in 2025
Industry Response
The law change could "constitute a crucial step to guarantee AI tools are secure before they are launched," stated the head of the internet monitoring foundation.
"Artificial intelligence systems have made it possible for survivors to be victimised repeatedly with just a few simple actions, giving criminals the capability to create potentially endless quantities of sophisticated, lifelike exploitative content," she continued. "Content which further commodifies survivors' trauma, and makes young people, especially girls, more vulnerable both on and offline."
Support Session Information
The children's helpline also released data from counselling sessions in which AI was mentioned. AI-related harms raised in the sessions include:
- Using AI to evaluate weight, physique and appearance
- AI assistants dissuading young people from consulting safe guardians about abuse
- Being bullied online with AI-generated material
- Digital blackmail using AI-manipulated pictures
Between April and September this year, Childline conducted 367 support sessions where AI, chatbots and related terms were mentioned, four times as many as in the equivalent timeframe last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using AI chatbots for support and AI therapy apps.