UK Tech Companies and Child Protection Agencies to Test AI's Capability to Create Abuse Images

Technology companies and child protection agencies will be given authority to evaluate whether artificial intelligence systems can produce child abuse material under new British legislation.

Substantial Rise in AI-Generated Illegal Material

The announcement came alongside findings from a protection watchdog showing that reports of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.

Updated Legal Structure

Under the changes, the government will permit approved AI companies and child protection organizations to examine AI systems – the foundational systems behind chatbots and image generators – and verify that they have sufficient protective measures to stop them from producing images of child sexual abuse.

The measures are "ultimately about preventing abuse before it occurs," declared Kanishka Narayan, noting: "Specialists, under strict conditions, can now detect the risk in AI models promptly."

Addressing Legal Challenges

The amendments have been introduced because it is illegal to produce and possess CSAM, meaning that AI developers and other parties could not create such content as part of an evaluation regime. Previously, officials had to wait until AI-generated CSAM was uploaded online before addressing it. This legislation aims to avert that problem by helping to stop the production of those images at source.

Legal Structure

The changes are being introduced by the government as revisions to the crime and policing bill, which also implements a ban on owning, creating or distributing AI systems designed to create exploitative content.

Real-World Consequences

This week, the official toured the London base of a children's helpline and heard a simulated call to counsellors featuring a report of AI-based exploitation.
The interaction depicted an adolescent seeking help after facing extortion using an explicit deepfake of themselves, constructed using AI.

"When I hear about children facing extortion online, it is a source of intense frustration for me and rightful concern amongst parents," he stated.

Concerning Data

A leading online safety foundation stated that cases of AI-generated abuse material – such as online pages that may include multiple images – had more than doubled so far this year. Cases of category A content – the gravest form of abuse – rose from 2,621 images or videos to 3,086.

Girls were predominantly targeted, accounting for 94% of illegal AI images in 2025.

Depictions of infants and toddlers rose from five in 2024 to 92 in 2025.

Industry Reaction

The legislative amendment could "represent a vital step to ensure AI products are secure before they are released," commented the head of the online safety organization.

"AI tools have made it so that victims can be victimised repeatedly with just a few simple actions, giving criminals the ability to create potentially endless amounts of sophisticated, photorealistic child sexual abuse material," she continued. "Material which additionally exploits victims' suffering, and renders children, particularly female children, less safe on and offline."

Support Session Information

The children's helpline also published details of counselling interactions where AI was mentioned. AI-related harms discussed in the sessions include:

Using AI to evaluate weight, body and looks
Chatbots dissuading young people from consulting trusted guardians about harm
Facing harassment online with AI-generated content
Online blackmail using AI-manipulated images

Between April and September this year, Childline conducted 367 support interactions where AI, chatbots and associated terms were discussed, significantly more than in the equivalent timeframe last year.
Fifty percent of the mentions of AI in the 2025 interactions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.