UK Technology Companies and Child Safety Agencies to Test AI's Ability to Create Exploitation Images

Technology companies and child safety organizations will receive authority to assess whether AI systems can generate child exploitation material under recently introduced UK laws.

Substantial Rise in AI-Generated Illegal Content

The announcement came as a safety watchdog revealed that reports of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.

Updated Regulatory Framework

Under the amendments, authorities will permit designated AI developers and child protection groups to inspect AI systems – the underlying technology for chatbots and image generators – and verify they have sufficient safeguards to prevent them from creating depictions of child sexual abuse.

The change is "fundamentally about stopping abuse before it occurs," the minister for AI and online safety stated, adding: "Experts, under rigorous protocols, can now identify the danger in AI models early."

Tackling Regulatory Challenges

The amendments have been introduced because it is against the law to create and possess CSAM, meaning that AI developers and others could not generate such images as part of an evaluation regime. Until now, authorities could not act until AI-generated CSAM had already been uploaded online.

This law is designed to prevent that issue by enabling the production of such material to be stopped at source.

Legal Structure

The government is introducing the changes as amendments to the crime and policing bill, which also establishes a ban on possessing, producing or sharing AI models designed to create child sexual abuse material.

Real-World Consequences

Recently, the minister visited the London base of Childline and listened to a mock-up of a call to advisors involving an account of AI-based abuse. The call portrayed a teenager requesting help after being blackmailed with an explicit AI-generated image of himself.

"When I learn about young people facing blackmail online, it causes extreme anger in me and rightful concern amongst parents," he stated.

Concerning Statistics

A prominent online safety foundation stated that cases of AI-generated abuse content – such as webpages that may include numerous images – had more than doubled so far this year.

Cases of the most severe material – the gravest form of exploitation – rose from 2,621 visual files to 3,086.

  • Female children were predominantly targeted, accounting for 94% of illegal AI depictions in 2025
  • Depictions of newborns to toddlers rose from five in 2024 to 92 in 2025

Sector Reaction

The law change could "represent a vital step to ensure AI products are safe before they are released," commented the chief executive of the internet monitoring foundation.

"AI tools have made it so survivors can be victimised repeatedly with just a few clicks, giving offenders the ability to create potentially endless quantities of sophisticated, lifelike child sexual abuse material," she added. "Material which further exploits victims' suffering, and renders young people, especially female children, more vulnerable both online and offline."

Support Interaction Information

Childline also published details of support interactions in which AI was referenced. AI-related harms discussed in the conversations include:

  • Using AI to rate weight, physique and appearance
  • Chatbots dissuading young people from consulting safe guardians about harm
  • Facing harassment online with AI-generated content
  • Online blackmail using AI-faked images

Between April and September this year, Childline conducted 367 support interactions in which AI, chatbots and associated terms were discussed – significantly more than in the equivalent period last year.

Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy applications.

Ryan Allen

A seasoned journalist and blogger with a passion for uncovering stories that matter, based in London.
