By Stuart Kerr
Published: 30 June 2025, 09:15 BST | Updated: 30 June 2025, 09:15 BST
A Sky News investigation has revealed alarming growth in artificial intelligence (AI) misuse across the UK, with deepfake pornography, fraudulent celebrity bots, and AI-enabled scams overwhelming existing safeguards. The findings come as Parliament prepares to debate amendments to the Online Safety Act amid mounting pressure for tougher AI regulation.
Deepfake Crisis: Victims Left Without Recourse
New data from the UK's Revenge Porn Helpline shows AI-generated abuse cases now account for 1 in 5 reports, with victims facing unique challenges. "Unlike traditional intimate image abuse, deepfakes leave no digital trail to trace perpetrators," explained Sophie Mortimer, the helpline's manager.
Dr. Sandra Wachter of the Oxford Internet Institute warns that current laws are dangerously outdated. "The Online Safety Act defines illegal content as 'real intimate images' - this loophole lets AI-generated material slip through," she told Sky News. Police forces report that only 12% of deepfake cases result in charges, largely due to jurisdictional issues when the tools involved are hosted overseas.
The government's much-delayed AI Accountability Bill, now scheduled for autumn 2025, proposes making deepfake creation without consent a specific offence carrying up to two years' imprisonment. However, Labour's digital spokesperson Louise Haigh MP argues: "We need immediate interim measures, not more delays."
Celebrity Chatbot Scams Exploit Fan Trust
Research by the Alan Turing Institute documents how AI impersonation scams have evolved beyond crude phishing attempts. "Modern bots use voice cloning and social media history to build terrifyingly accurate personas," said lead researcher Professor Mark Bishop.
The problem gained prominence when broadcaster Maya Jama's likeness was used to promote fake investment schemes last year. Her legal team successfully petitioned Meta to remove 487 violating accounts, but new ones appeared within hours. "The platforms' 'whack-a-mole' approach isn't working," Jama told Sky News.
Entertainment lawyer Mark Stephens estimates UK celebrities now spend £15,000-£50,000 annually combating AI impersonations. "Right now, the financial burden falls entirely on victims," he said. The Intellectual Property Office is considering new personality rights protections, but reforms won't take effect before 2026.
Social Media Platforms Under Fire
Ofcom's latest transparency data reveals stark gaps in enforcement:
- TikTok removed just 38% of reported AI bots within 24 hours
- X (formerly Twitter) has no dedicated AI content moderation team
- Instagram's detection systems fail to identify 62% of voice-cloned accounts
A leaked internal memo from Snapchat obtained by Sky News shows the company prioritises removing "terrorist content and child exploitation" over AI scams due to resource constraints.
Dame Melanie Dawes, Ofcom's Chief Executive, warned platforms face heavy fines if they don't improve. "Our new powers under the Online Safety Act will force companies to address these systemic failures," she said.
The Innovation vs Protection Debate
TechUK continues to advocate for balanced regulation. "We support watermarking and disclosure rules, but outright bans on synthetic media would harm legitimate uses in education and healthcare," said Director Antony Walker.
Medical professionals echo these concerns. Dr. Emma Robertson at Great Ormond Street Hospital uses AI voice cloning to help non-verbal children communicate. "The technology itself isn't the problem - it's about intent and oversight," she argued.
However, advocacy group Glitch points to disturbing trends. "AI abuse disproportionately targets women and marginalised communities," said founder Seyi Akiwowo. "Until platforms invest properly in moderation, calls for 'balance' ring hollow."
Pathways to Reform
Key proposals gaining traction include:
- Emergency takedown powers - allowing victims to remove harmful AI content within 2 hours
- Mandatory watermarking - backed by Adobe, Microsoft and OpenAI
- Compensation schemes - funded by platform fees
The European Union's recently implemented AI Act provides one potential model, though critics argue its complex risk categories may prove unworkable.
For now, the government advises victims to:
- Report cases to the Revenge Porn Helpline
- Use the new StopAIAbuse portal for legal support
- Document all instances of abuse
As Sky News Technology Correspondent Rowland Manthorpe concluded: "This isn't a theoretical debate - real people are suffering real harm today. The question isn't whether we regulate AI, but how quickly we can do it effectively."
Stuart Kerr is a journalist specialising in AI news. Contact him at liveaiwire@gmail.com or follow @liveaiwire for updates.