The New Feudal Web: Will AI Reinforce Digital Power or Break It?

By Stuart Kerr, Technology Correspondent

🗓️ Published: 12 July 2025 | 🔄 Last updated: 12 July 2025
📩 Contact: liveaiwire@gmail.com | 📣 Follow @LiveAIWire
🔗 Author Bio: https://www.liveaiwire.com/p/to-liveaiwire-where-artificial.html


The Digital Lords of a Fragmented Realm

As artificial intelligence continues its relentless expansion, a new kind of power structure is quietly taking shape—one many are calling the New Feudal Web. Unlike the decentralised, democratic vision of the early internet, today’s digital landscape is increasingly governed by platform overlords who own the infrastructure, write the rules, and mine the data. AI, once hailed as a levelling force, may be reinforcing the walls rather than tearing them down.

At the centre of this transformation are a handful of corporations that control the APIs, algorithms, and access layers underpinning modern life. These gatekeepers do more than serve content: they curate experience, dictate monetisation and now, thanks to AI, tailor reality itself to each user's profile. What once required mass communication now happens one person at a time, through synthetic nudges no one voted for.


From Platform to Sovereignty

The shift from open protocols to proprietary ecosystems isn’t new. But AI is accelerating that shift by embedding automation into every tier of engagement—from search results and social feeds to hiring platforms and financial products. And with generative tools now powering content creation, moderation, and prediction, questions of governance become questions of sovereignty.

The EU's Digital Markets Act attempts to rein in dominant platforms, designating them as "gatekeepers" obliged to share data and remain interoperable. The AI Act, similarly, establishes risk-based rules for AI deployment. But enforcement lags globally: outside Europe, regulation remains patchy, and tech giants often operate beyond meaningful jurisdiction.

According to a World Economic Forum white paper, we’re entering a “blended reality” where AI-generated information is indistinguishable from fact—and where trust in institutions risks collapse without transparent governance structures.


The Data Rent Economy

What we’re seeing is the rise of a digital rentier class. Platforms no longer just monetise access—they monetise prediction. They collect behavioural data, train proprietary models on it, then sell access to those models. You don’t own your data. You don’t even know what it’s training. In this system, AI is not the liberator—it’s the landlord.

This was explored in Invisible Infrastructure, which showed how seemingly neutral AI systems rely on deeply centralised inputs. The code may be open-source, but the infrastructure—cloud networks, GPU clusters, compliance APIs—remains closely held.

The OECD has raised concerns that such concentrated infrastructure creates a bottleneck for innovation, leading to fewer choices and greater dependencies for governments, SMEs, and civil society.


Algorithmic Patronage

In medieval times, vassals swore loyalty to lords in exchange for protection and land. Today, creators, coders, and even governments align with digital platforms in return for access to users and tools. Recommendation engines act as gatekeepers of visibility. Search engines become tollbooths. And now, AI agents broker who sees what, when, and how.

But who audits the algorithmic lords? In The Silent Bias, we explored how machine learning systems can encode and reproduce inequality at scale. These systems don’t just reflect society—they shape it. When bias becomes scalable, so does power.

Meanwhile, attempts to decentralise the web often fail due to lack of incentives. Web3 promised a rebalanced internet, but progress has stalled. The challenge isn’t technological—it’s institutional.


Legal Tech and Loyalty Contracts

As AI encroaches on legal interpretation, policy enforcement, and civic discourse, new risks emerge. In Ghost Writers of the Courtroom, we saw how automated systems are drafting legal documents, even influencing case outcomes. But when laws are interpreted by code, whose code counts?

A legal guide by Bird & Bird (PDF) outlines the EU's AI Act, raising concerns about "opaque AI systems" in critical decision-making sectors.


Faith in Filters

In Faith, Fraud and Face Filters, we saw how AI mediates religious spaces, online identity, and even belief. This is not just governance of data—it’s governance of meaning. And when those meanings are shaped by models trained on biased, incomplete, or opaque data, the danger is not just technical—it’s cultural.

What happens when the digital lords define not only the rules, but the myths?


The Choice Ahead

The internet was supposed to be flat. AI was supposed to be fair. But the tools we build reflect the power we tolerate. If AI systems are developed, trained, and deployed by a handful of actors, then we must ask not only what the technology does—but who it does it for.

There is still time to resist a feudal web. But it will require more than ethics pledges and interoperability clauses. It will require public infrastructure, enforceable regulation, and, most importantly, a cultural shift in how we understand autonomy in the age of algorithms.

Until then, the web's new vassals will keep paying their rent in code.


About the Author

Stuart Kerr is the Technology Correspondent at LiveAIWire. He writes about AI’s impact on infrastructure, governance, creativity, and power.
📩 Contact: liveaiwire@gmail.com | 📣 @LiveAIWire
