By Stuart Kerr, Technology Correspondent
🗓️ Published: 12 July 2025 | 🔄 Last updated: 12 July 2025
📩 Contact: liveaiwire@gmail.com | 📣 Follow @LiveAIWire
🔗 Author Bio: https://www.liveaiwire.com/p/to-liveaiwire-where-artificial.html
The Illusion of Digital Erasure
In the age of generative AI, personal data doesn’t just sit in files: it propagates, gets learned from, and reappears. A childhood photo scraped from a forgotten social profile might resurface in a training dataset. A decade-old article could feed a chatbot’s tone. And a once-deleted comment might shape an algorithm’s view of who you are.
This raises a pressing question: Can you ever truly disappear in the age of machine memory?
The "Right to Be Forgotten," first crystallised in EU law through GDPR Article 17, is meant to empower individuals to request deletion of personal data. But with AI models trained on vast, untraceable sets of data scraped from across the internet, enforcement is slipping into murky legal territory.
When Forgetting Meets Machine Learning
AI models don’t store data the way databases do. They "learn" from it, internalising statistical patterns that can’t easily be unravelled. If your information helped train a model that now generates text, images, or predictions, deletion is no longer as simple as pressing a button.
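To see why, consider a deliberately tiny sketch in Python (synthetic data and a toy logistic regression; real systems are vastly larger, but the principle holds). Once training finishes, the weights are an aggregate of every record that shaped them, and deleting the raw record afterwards changes nothing:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: 100 records, one of which stands in for
# "your" personal data. Every value here is made up.
X = rng.normal(size=(100, 3))
y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)

def train(X, y, steps=2000, lr=0.1):
    """Toy logistic regression fitted by gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))  # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)    # average gradient step
    return w

w_full = train(X, y)

# "Deleting" record 0 from storage leaves w_full untouched; only a
# full retrain without the record produces genuinely different weights.
w_retrained = train(X[1:], y[1:])

print("trained with the record:", w_full)
print("retrained without it:   ", w_retrained)
```

The two weight vectors differ, which is precisely the problem: the record’s influence lives on in the deployed weights, where no file deletion can reach it.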
As noted by the UK Information Commissioner's Office, the complexity of AI training systems means that even if an organisation deletes the original data, its influence may remain deeply embedded.
Invisible Infrastructure explores how these systems rely on vast, opaque backend processes that evade accountability. The data pipelines that feed AI are so intricate that tracing any single data point’s effect is nearly impossible.
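Part of the problem is that pipelines rarely keep per-record provenance in the first place. As a purely illustrative sketch (no real pipeline or standard format is implied), minimal record-keeping could be as simple as hashing each ingested record, so a later deletion request could at least be checked against what a given training run consumed:

```python
import hashlib
import json
from datetime import datetime, timezone

def manifest_entry(record: bytes, source_url: str) -> dict:
    """One provenance entry per ingested training record (hypothetical format)."""
    return {
        "sha256": hashlib.sha256(record).hexdigest(),
        "source": source_url,
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }

# Hypothetical scraped records; the URLs are placeholders.
records = [
    (b"user comment scraped from a forum", "https://example.com/thread/42"),
    (b"caption from a public photo post", "https://example.com/photo/7"),
]

manifest = [manifest_entry(data, url) for data, url in records]
print(json.dumps(manifest, indent=2))
```

Little of this exists in practice today, which is why tracing a single data point’s journey remains so hard.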
Legal Expectations vs Technical Limitations
A 2024 OECD report (PDF) warns that the mismatch between data protection rights and AI design could lead to a systemic erosion of privacy. In other words, our rights are only as good as the systems designed to uphold them—and many AI systems were never designed with deletion in mind.
This is particularly troubling for people who seek digital anonymity due to past trauma, political persecution, or criminal rehabilitation. The Silent Bias showed how AI can entrench historical data, further marginalising vulnerable groups. If erasure is functionally impossible, what hope is there for a fresh start?
Algorithms That Remember Everything
Modern AI tools are being deployed in everything from hiring systems to predictive policing. The European Data Protection Board’s 2019 guidelines (PDF) set out criteria for erasure in search engine cases, but AI models often sidestep these mechanisms entirely.
Even when platforms comply with deletion requests on the surface, the same data may live on in third-party datasets, archived servers, or derivative models. In Faith, Fraud and Face Filters, we examined how AI reshapes identity. But if identity is reshaped by data you can’t delete, autonomy itself is in question.
And what about consent? A user might have agreed to share their data in one context, never anticipating that it would later help train a model producing outputs worldwide.
Legal Grey Zones and Global Disparity
Globally, protections are wildly inconsistent. The EU’s GDPR is comparatively robust, but in many countries the right to be forgotten does not exist at all, and where laws are on the books, enforcement is often weak. Even in places like the UK, interpretation varies from case to case.
The site GDPR.eu explains that erasure is not absolute. Requests can be denied when data serves the public interest, legal claims, or archival purposes. But who defines public interest when AI can hallucinate reputational harm?
Toward Enforceable Forgetting
Technical solutions are emerging. Machine unlearning—a process of selectively removing training data’s influence—is gaining traction. But it’s still nascent and difficult to apply at scale. Meanwhile, advocacy groups argue for greater transparency in dataset construction, mandatory audit trails, and searchable model provenance.
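One line of research, sometimes called sharded or "SISA-style" training, makes forgetting tractable by ensuring each record influences exactly one sub-model. The sketch below is a hypothetical toy version (synthetic data, the same toy classifier as above); production unlearning is far more involved:

```python
import numpy as np

rng = np.random.default_rng(1)

N_SHARDS = 4
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.0, -1.0, 2.0]) > 0).astype(float)

# Partition the data so each record lands in exactly one shard.
shards = [[X[i::N_SHARDS], y[i::N_SHARDS]] for i in range(N_SHARDS)]

def train(X, y, steps=1000, lr=0.1):
    """Toy logistic regression, one model per shard."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

models = [train(Xs, ys) for Xs, ys in shards]

def predict(x):
    """Majority vote across the per-shard sub-models."""
    votes = [(1.0 / (1.0 + np.exp(-(x @ w)))) > 0.5 for w in models]
    return sum(votes) > len(models) / 2

def forget(shard_idx, record_idx):
    """Honour an erasure request by retraining only the affected shard."""
    Xs, ys = shards[shard_idx]
    shards[shard_idx] = [np.delete(Xs, record_idx, axis=0),
                         np.delete(ys, record_idx)]
    models[shard_idx] = train(*shards[shard_idx])

forget(shard_idx=2, record_idx=0)  # cost: retrain one shard, not everything
```

The catch is that this property has to be designed in before training begins. Most large models are trained monolithically, which is why retroactive erasure remains so difficult at scale.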
In Ghost Writers of the Courtroom, we saw how the opacity of AI-generated legal text could shift the dynamics of responsibility. The same opacity plagues the systems governing erasure. If we can’t prove where data went, how can we prove it’s gone?
The debate is no longer theoretical. The right to be forgotten is not just a legal safeguard—it’s a test of whether our digital futures remain under human control.
About the Author
Stuart Kerr is the Technology Correspondent at LiveAIWire. He writes about AI’s impact on infrastructure, governance, creativity, and power.
📩 Contact: liveaiwire@gmail.com | 📣 @LiveAIWire