By Stuart Kerr, Technology Correspondent
Published: 07/08/2025
Last Updated: 07/08/2025
Contact: liveaiwire@gmail.com | Twitter: @LiveAIWire
Meta has publicly refused to sign the European Union’s Code of Practice for General-Purpose AI (GPAI), breaking ranks with other major tech firms such as Microsoft and Google. The move has reignited debate over how AI should be regulated across borders and whether voluntary frameworks can meaningfully shape the behaviour of companies operating frontier AI models.
According to The Verge, Meta claims that the EU’s Code introduces obligations that could “stifle innovation” and expose proprietary data to unnecessary scrutiny. The company, through policy executive Joel Kaplan, argues that while it supports safety and ethics, the current version of the Code is too vague and could lead to overlapping or inconsistent compliance demands.
The Code, published by the European Commission, sets expectations in three key areas: transparency, copyright compliance, and model safety. It requires general-purpose AI developers to provide plain-language documentation about how their models function and to take steps to prevent bias and misinformation. It also includes commitments around licensing training data and watermarking synthetic content—principles that Microsoft has already endorsed, as noted in PPC Land’s coverage.
For many observers, Meta’s rejection highlights a broader rift within the tech industry. Some firms are racing to self-regulate in advance of the EU’s AI Act, while others, like Meta, appear to be resisting the shift toward preemptive accountability. This contrast is stark when viewed through the lens of our earlier report on AI Regulation Crossroads, where we explored how regulatory clarity is becoming a strategic differentiator in global markets.
Transparency is perhaps the most contested pillar of the Code. As discussed in our feature on AI Guardrails, developers are increasingly expected to disclose training inputs, fine-tuning protocols, and ethical constraints. Meta’s position is that such disclosure could endanger competitive advantage and oversimplify how AI decision-making actually works.
There are also concerns around copyright. Under the Code, AI companies must document whether copyrighted materials were used in training and, where applicable, demonstrate licensing agreements. This echoes long-standing critiques of generative AI voiced in The AI Identity Crisis, which questioned the boundary between public data use and creative exploitation.
A joint review by CADE Project underscores these tensions. While Meta frames its rejection as a defence of innovation, regulators and civil society groups worry the decision signals a broader reluctance to accept guardrails for AI development.
Meanwhile, Microsoft has embraced the EU framework. A Microsoft compliance PDF published earlier this year outlines the prohibited AI practices the company has pledged to avoid—including mass surveillance, manipulation, and discrimination—many of which align closely with the Code’s ethics clauses.
The divide may ultimately reflect divergent business models. Meta’s core platforms rely heavily on personalisation and data-driven content delivery, while Microsoft’s enterprise model offers more room for regulation-aligned infrastructure.
A broader view, summarised in TechRepublic, suggests that companies signing the Code will enjoy greater alignment with the forthcoming EU AI Act. Those opting out, like Meta, risk reputational harm and even exclusion from key public procurement opportunities once the Act takes effect.
The debate now extends far beyond Brussels. It signals the beginning of an era in which tech companies must choose between embracing enforceable transparency and risking regulatory collision. The outcome may define not just the next generation of AI, but the ethical foundations upon which it is built.
About the Author
Stuart Kerr is the Technology Correspondent for LiveAIWire. He writes about artificial intelligence, ethics, and how technology is reshaping everyday life.