Technology companies and child safety organisations will be given the power to test whether AI systems can generate child exploitation material under newly introduced UK laws.
The announcement came as a safety watchdog revealed that reports of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
Under the amendments, designated AI developers and child protection groups will be permitted to examine AI models – the technology underlying chatbots and image generators – and verify that they have sufficient safeguards to prevent them from creating depictions of child sexual abuse.
The measures are "fundamentally about stopping abuse before it occurs", said the minister for AI and online safety, adding: "Experts, under rigorous protocols, can now identify the danger in AI models early."
The amendments have been introduced because creating and possessing CSAM is illegal, meaning that AI developers and others cannot generate such images as part of an evaluation regime. Until now, authorities have had to wait until AI-generated CSAM was uploaded online before acting on it.
This law is designed to prevent that problem by stopping the production of such material at source.
The government is introducing the changes as amendments to the crime and policing bill, which also establishes a ban on possessing, producing or distributing AI models designed to create child sexual abuse material.
The minister recently visited Childline's London base and listened to a mock-up of a call to advisers involving an account of AI-based abuse. The call portrayed a teenager seeking help after being blackmailed with an explicit AI-generated image of himself.
"When I learn about young people facing blackmail online, it makes me extremely angry, and parents are rightly concerned," he said.
A prominent online safety foundation said that reports of AI-generated abuse content – such as webpages that may contain numerous images – had more than doubled so far this year.
Instances of the most severe material – the gravest form of exploitation – rose from 2,621 images or videos to 3,086.
The law change could "represent a vital step to ensure AI products are safe before they are released", said the foundation's chief executive.
"AI tools have made it possible for survivors to be victimised repeatedly with just a few clicks, giving offenders the ability to create potentially unlimited quantities of sophisticated, lifelike child sexual abuse material," she added. "Material which further exploits victims' suffering, and makes young people, particularly girls, more vulnerable both online and offline."
Childline also published details of support sessions in which AI was mentioned, including discussion of a range of AI-related harms.
Between April and September this year, Childline delivered 367 support sessions in which AI, chatbots and related terms were mentioned, significantly more than in the equivalent period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.