British Tech Firms and Child Safety Agencies to Examine AI's Ability to Generate Exploitation Content

Technology companies and child protection organizations will receive authority to assess whether AI systems can generate child exploitation images under new UK legislation.

Significant Rise in AI-Generated Harmful Material

The announcement coincided with findings from a protection monitoring body showing that cases of AI-generated child sexual abuse material have more than doubled in the last twelve months, rising from 199 in 2024 to 426 in 2025.

New Legal Framework

Under the amendments, the government will permit designated AI companies and child safety organizations to inspect AI models – the foundational technology for conversational AI and visual AI tools – and verify they have adequate protective measures to prevent them from producing depictions of child exploitation.

"Fundamentally about stopping exploitation before it occurs," declared the minister for AI and online safety, noting: "Specialists, under rigorous protocols, can now detect the danger in AI systems promptly."

Addressing Legal Challenges

The amendments have been introduced because it is illegal to create or possess CSAM, meaning that AI developers and others could not generate such images as part of a testing regime. Previously, authorities had to wait until AI-generated CSAM had been published online before they could act on it.

This law is designed to avert that problem by enabling the production of such images to be stopped at source.

Legislative Structure

The amendments are being introduced by the government as modifications to the crime and policing bill, which is also establishing a prohibition on possessing, creating or distributing AI systems designed to generate child sexual abuse material.

Practical Impact

Recently, the official toured the London base of Childline and listened to a mock-up of a call to advisors involving a report of AI-based abuse. The call portrayed an adolescent requesting help after being blackmailed with a sexualised AI-generated image of themselves.

"When I hear about young people facing blackmail online, it causes intense frustration in me and justified anger amongst parents," he stated.

Concerning Statistics

A prominent internet monitoring foundation reported that instances of AI-generated abuse content – such as webpages that may contain numerous files – had more than doubled so far this year.

Cases of category A content – the most serious form of abuse – increased from 2,621 visual files to 3,086.

  • Female children were overwhelmingly targeted, accounting for 94% of illegal AI depictions in 2025
  • Depictions of infants to toddlers increased from five in 2024 to 92 in 2025

Sector Response

The law change could "represent a vital step to guarantee AI tools are safe before they are launched," stated the chief executive of the internet monitoring foundation.

"Artificial intelligence systems have made it possible for victims to be targeted repeatedly with just a few simple actions, giving offenders the capability to create potentially limitless amounts of sophisticated, lifelike exploitative content," she continued. "Content which compounds victims' suffering, and renders young people, particularly girls, less safe both on and offline."

Support Session Information

Childline has also published details of support interactions in which AI was referenced. AI-related risks discussed in the sessions include:

  • Employing AI to evaluate body size, physique and appearance
  • Chatbots dissuading children from talking to trusted adults about abuse
  • Facing harassment online with AI-generated content
  • Online extortion using AI-manipulated pictures

Between April and September this year, Childline delivered 367 counselling interactions where AI, chatbots and associated terms were mentioned, four times as many as in the equivalent timeframe last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapeutic applications.

Melissa Martinez

