AI’s New Crimewave: Why Your Inbox Is the Front Line

Stuart Kerr
0
Stylised graphic of a hooded hacker at a laptop with envelope icons, binary code, and an @ symbol, representing AI-driven email threats.


By Stuart Kerr, Technology Correspondent

Published: 22/09/2025 | Last Updated: 22/09/2025
Contact: [email protected] | Twitter: @LiveAIWire


When the Message Becomes the Weapon

It used to be that phishing emails gave themselves away — bad grammar, shady links, offers too good to be true. But the next wave of email threats is being written not by people but by machines. Artificial intelligence is powering what security analysts at the Washington Post describe as a golden age of hacking, where the inbox has once again become the easiest entry point into our digital lives.
This is not science fiction. From prompt injection — tricking a model into ignoring safeguards — to so-called agentic hacks where AI agents carry out tasks on behalf of attackers, cybercrime has never been more automated. The result is a tidal wave of highly convincing emails that blur the line between ordinary communication and attack vector.

From Spam to Sophistication

In the past, phishing was about volume: send millions of identical messages and hope a few worked. But as our coverage of Digital Delusions showed, AI shifts the balance from scale to subtlety. Modern systems craft unique, context-aware emails that adapt to the recipient’s profile.
A recent investigation by Business Insider revealed how cybercriminals are experimenting with “vibe hacking,” using conversational AI to manipulate tone and mood in ways that build false trust. What was once the preserve of con artists is now industrialised by machine learning.

The Anatomy of a Prompt Injection

The technical mechanics of this crimewave are laid bare in a peer-reviewed Applied Sciences study, which explains how attackers use hidden instructions embedded in emails to redirect large language models toward malicious ends. This is more than a clever trick; it’s a structural vulnerability in the way AI processes input.
Researchers at Google have also detailed how indirect prompt injections work, where harmless-looking content such as a PDF or website carries concealed instructions. When an AI assistant reads it, the model is silently subverted. Lessons from defending Gemini against these threats were outlined in a DeepMind security paper, offering rare transparency from one of the industry’s biggest players.

Agentic Hacks: When AI Does the Legwork

The rise of agentic AI — systems capable of carrying out sequences of tasks autonomously — adds another layer of risk. As we explored in Beyond Hallucinations, giving AI tools more autonomy comes with unintended consequences.
An arXiv preprint on Prompt Injection 2.0 warns that attackers are now hijacking entire chains of automated activity, not just single outputs. Imagine your AI scheduling meetings, sending invoices, or even making purchases — all under the control of an attacker who never sends a single email themselves.

Security and Society

The implications extend far beyond IT departments. As highlighted in our earlier piece on Europe’s Open-Source AI Moment, transparency and governance are just as critical as technical fixes. If inboxes are once again the frontline, then society needs to treat AI security as a public good.
For governments, that means regulation that keeps pace with innovation. For businesses, it means moving beyond awareness campaigns to real, systemic defences. And for individuals, it means recognising that an email that “feels right” could still be entirely fabricated.

Towards a More Resilient Digital World

The lesson here is that trust is the new target. Prompt injections don’t just exploit software — they exploit human confidence in AI itself. Unless mitigated, these threats could corrode adoption of digital assistants, customer support bots, and even medical or financial AI services.
The good news is that the research community is taking this seriously. From open-access studies to corporate transparency reports, the conversation is moving away from hype and towards resilience. If we can keep pace with the attackers, AI doesn’t just pose risks — it can also be part of the defence.

About the Author

Stuart Kerr is the Technology Correspondent for LiveAIWire. He writes about artificial intelligence, ethics, and how technology is reshaping everyday life. Read more.

Post a Comment

0 Comments

Post a Comment (0)

#buttons=(Ok, Go it!) #days=(20)

Our website uses cookies to enhance your experience. Check Now
Ok, Go it!