UK Technology Firms and Child Protection Agencies to Test AI's Capability to Create Abuse Images

Tech firms and child protection organizations will be granted permission to assess whether AI systems can generate child exploitation material under new UK legislation.

Substantial Increase in AI-Generated Harmful Material

The announcement came as a protection watchdog revealed that reports of AI-generated child sexual abuse material have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.

Updated Legal Framework

Under the amendments, the government will allow designated AI companies and child protection organizations to inspect AI models – the foundational systems behind chatbots and image generators – and ensure they have adequate safeguards to stop them from producing images of child exploitation.

"Ultimately, this is about preventing exploitation before it happens," declared the minister for AI and online safety, adding: "Specialists, under rigorous protocols, can now identify the danger in AI systems early."

Addressing Legal Obstacles

The changes have been introduced because it is illegal to produce and possess CSAM, meaning that AI developers and others cannot generate such images as part of a testing regime. Until now, authorities could act only after AI-generated CSAM had already been uploaded online.

This law aims to prevent that problem by helping to stop the creation of such images at the source.

Legislative Framework

The government is introducing the amendments as modifications to criminal justice legislation, which also establishes a prohibition on possessing, creating or distributing AI models designed to generate child sexual abuse material.

Practical Impact

Recently, the official visited the London headquarters of a children's helpline, where he heard a simulated call to advisers featuring a report of AI-based exploitation. The call depicted an adolescent seeking help after being blackmailed with a sexualised deepfake of himself, created using AI.

"When I learn about children experiencing blackmail online, it is a source of extreme frustration for me and of justified anger amongst parents," he stated.

Concerning Data

A leading internet monitoring organization reported that instances of AI-generated abuse content – each of which can refer to a web page containing multiple images – have risen significantly so far this year.

Instances of category A material – the gravest form of exploitation – rose from 2,621 images or videos to 3,086.

  • Girls were predominantly victimized, accounting for 94% of prohibited AI images in 2025
  • Portrayals of infants to two-year-olds rose from five in 2024 to 92 in 2025

Sector Response

The law change could "represent a crucial step to ensure AI tools are safe before they are launched," stated the chief executive of the internet monitoring organization.

"Artificial intelligence systems mean survivors can be targeted all over again with just a few simple actions, giving criminals the ability to create potentially limitless amounts of sophisticated, lifelike child sexual abuse material," she added. "Material which compounds victims' suffering, and renders children, particularly girls, more vulnerable both online and offline."

Counseling Interaction Data

The children's helpline also published details of support interactions in which AI was mentioned. AI-related risks raised in the conversations include:

  • Using AI to rate body size and appearance
  • AI assistants discouraging young people from consulting trusted guardians about abuse
  • Being bullied online with AI-generated material
  • Digital blackmail using AI-manipulated pictures

Between April and September this year, the helpline delivered 367 counselling sessions in which AI, chatbots and related topics were discussed – significantly more than in the equivalent period last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI assistants for support and AI therapy apps.

Chris Johnson

A tech enthusiast and writer passionate about digital innovation and storytelling, sharing experiences from a global perspective.