By Stuart Kerr, Technology Correspondent
Published: 07/08/2025
Last Updated: 07/08/2025
Contact: liveaiwire@gmail.com | Twitter: @LiveAIWire
Google has officially signed the European Union’s Code of Practice for General-Purpose AI (GPAI), a landmark voluntary framework that sets out commitments on transparency, copyright compliance, and safety for frontier AI models. The decision, confirmed on July 30th, arrives amid mounting pressure from EU regulators and growing public concern over how large AI systems are developed, deployed, and monetised.
According to Reuters, Google’s signature came with caveats, as the company continues to lobby for flexibility in how transparency measures are defined. Nonetheless, the commitment is significant: the Code of Practice acts as a precursor to mandatory requirements under the EU AI Act, expected to be enforced in 2026.
The Code is built on three pillars: transparency, copyright, and safety and security. For Google and other GPAI providers, this means disclosing training data summaries, outlining how outputs are generated, and demonstrating that mechanisms are in place to avoid systemic bias or harm, topics previously discussed in our article on AI Guardrails.
Perhaps the most contentious issue is copyright. The EU now requires GPAI developers to document and, where feasible, license the content used to train their models. This has profound implications for platforms like Gemini and OpenAI’s GPT-series, which are built on vast corpora of scraped online data. In our report, The AI Identity Crisis, we explored how generative AI often blurs the line between learned data and original content. Under the new Code, that distinction could soon become a legal battleground.
EU officials argue that this is a necessary step to safeguard creators, publishers, and consumers. "Transparency isn’t optional anymore. It’s foundational," said one Commission spokesperson during the rollout. And according to the official EU documentation, all GPAI signatories must now publish “plain-language documentation” explaining what their models do and how they interact with human users.
The Code also includes a proposed commitment to watermarking AI-generated content—a solution long pushed by creative industry bodies to preserve trust in digital media. However, enforcement remains a grey area. While the Code sets a framework, it is currently non-binding, and Google has previously expressed concerns about how its provisions might limit innovation.
Critics argue the tech giant is playing a double game: signing on publicly while lobbying to soften the rules, in the hope of heading off stricter sanctions later. But EU lawmakers appear undeterred. As detailed in AI Regulation Crossroads, the EU sees voluntary adoption as a stepping stone toward enforceable governance, a familiar Brussels pattern in which soft-law groundwork precedes binding regulation.
Outside the regulatory chambers, the implications are equally vast. For developers, documentation and copyright accountability could slow release cycles. For journalists and educators, it could create new benchmarks of trust. And for Big Tech, the message is clear: regulation is no longer a distant threat—it's here.
A summary PDF published by Culture Action Europe lays out the full range of commitments expected from providers, including responsible deployment disclosures and alignment with fundamental rights.
This shift comes at a critical time, as AI systems increasingly influence everything from elections to healthcare to education. The Code of Practice may not be perfect, but it marks a turning point. The age of unchecked innovation is ending—and a new era of regulated transparency is beginning.
About the Author
Stuart Kerr is the Technology Correspondent for LiveAIWire. He writes about artificial intelligence, ethics, and how technology is reshaping everyday life.