“Whoever controls information controls the future. Whoever controls access to information controls the present.”
— George Orwell, 1984 (reimagined for the age of AI)
Alexander’s Morning Problem
Picture an average Russian entrepreneur — Alexander, 38, owner of a small digital agency in Moscow. He has clients, his clients have data, his data has value. In March 2026, Alexander decides to deploy AI agents in his agency: specialized systems to manage media campaigns, analyze user behavior, and make budget allocation decisions. It seemed logical. Modern. A competitive edge.
On the surface, the choice looked simple: Google Vertex AI? Amazon AWS? Some local cloud? Perplexity Spaces? He picks one, uploads his clients’ data — names, purchase histories, click patterns, product recommendations — and the system starts working.
What Alexander didn’t notice is what actually happened.
He didn’t hand over control of technology. He handed over control of access to information — his data, his clients’ data, and, indirectly, the strategic decisions of his company. All of it now lives on Google’s or Amazon’s servers. Not because these companies want to steal it. Simply because the architecture of the modern AI agent market is designed so they control access. This isn’t a bug. It’s a feature. It’s politics dressed in technology’s clothing.
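In code, the handover is almost invisible. Here is a minimal sketch in Python of what “deploying an agent” amounts to for Alexander; the cloud client and its submit_task method are hypothetical stand-ins for any hyperscaler SDK, not a real vendor API. The point is the single call after which the data no longer sits on his servers.

```python
# Minimal sketch. The "cloud" object and submit_task() are hypothetical,
# standing in for any hyperscaler's agent SDK.
from dataclasses import dataclass

@dataclass
class ClientRecord:
    name: str
    purchase_history: list[str]
    click_pattern: list[str]

def run_campaign_agent(cloud, records: list[ClientRecord]) -> dict:
    """Send client data to a provider-hosted agent and get budget decisions back."""
    payload = [r.__dict__ for r in records]   # the full client records, serialized
    # From this call onward, the data and the decision logic live on the
    # provider's servers, under the provider's access policies and jurisdiction.
    return cloud.submit_task(
        agent="media-campaign-optimizer",
        instructions="Allocate next month's budget across channels.",
        data=payload,
    )
```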
Today, nobody talks about politics. Everyone talks about AI.
The Three-Tier Architecture of Power
When you look at the AI agent ecosystem in 2026, you notice something interesting. McKinsey and Gartner analysts call it “infrastructure”, a neutral word concealing something important. The ecosystem has three tiers, and the structure is anything but neutral. It resembles a medieval hierarchy where each level serves the one above it.
The first tier: tech giants as feudal lords. Google, Amazon, Microsoft, OpenAI own the foundation: computing power, base language models, data infrastructure, cloud services. This isn’t a market choice. It’s pure economics of scale. Training and maintaining a frontier AI model in 2026 requires billions of dollars, something only a handful of companies on earth can afford.
But the money isn’t the point. These companies control not just the technology, but access to the tools for building technology. To create an AI agent, you practically must use their cloud infrastructure. Build on your own servers? Theoretically yes. But it will cost ten times more and run a hundred times slower. This is technological imperialism — a situation where opting out means opting out of competitiveness.
This isn’t a conspiracy. It’s simply the logic of capitalism reaching its natural limit.
Alexander can’t choose “neutral” technology. He’s choosing his level of subordination.
The second tier: integrators as sovereign vassals. These are companies that take the giants’ infrastructure and embed it into specific systems. Salesforce embeds AI agents into CRM. HubSpot embeds them into marketing platforms. Useful? Yes. Competitive? On the surface. But power stays at the top. These companies are franchisees. They lease the appearance of independence.
Alexander chooses Salesforce, thinking he’s escaping Google. In reality, he’s just gotten a nicer interface on top of Google Vertex AI. His clients’ data still flows to Mountain View servers — just via Salesforce first.
Every intermediate layer adds value. Every layer also adds a control point. And control means power.
The third tier: startups as serfs farming someone else’s land. These are the “agent-native” startups — companies building something new, entirely on AI agents. They look innovative, revolutionary, independent. Some genuinely are. But most are just a new layer of dependency.
Gartner projects that by the end of 2026, 40% of enterprise applications will include AI agents, up from less than 5% in 2025. Much of that growth will be repackaged chatbots under a new name. Researchers have coined the term “agent-washing” for companies labeling anything an “agent” just to sound current.
True innovators — those building something genuinely new — still depend on data access. Where do they get it? From the same triangle: tech giants provide compute, integrators provide channels, startups provide… ideas. Borrowed labor. The hope that someday a big company will acquire them.
This isn’t an ecosystem. It’s a food chain.
The Invisible Boundary: It’s Not About Technology
This is where we need to stop. Because all these companies talk about technology — algorithms, neural networks, parallel systems. But the real war isn’t there.
The real war is over data and access to it. AI agents are systems that need access to as much information as possible to make better decisions. That’s their core advantage over legacy systems. A legacy CRM stored data in a database. A new AI agent needs access to your data, your clients’ data, social graphs, purchase histories, even your internal chats, in order to interpret all of it.
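The difference in access surface is easy to show. A schematic sketch: the legacy system answers one scoped query against a database you operate, while the agent platform asks for standing grants over entire data sources. The scope names below are illustrative, not any specific vendor’s permissions model.

```python
import sqlite3

# Legacy CRM: access is one scoped query against a database you run yourself.
def legacy_lookup(db_path: str, customer_id: int) -> tuple | None:
    conn = sqlite3.connect(db_path)
    row = conn.execute(
        "SELECT name, email FROM customers WHERE id = ?", (customer_id,)
    ).fetchone()
    conn.close()
    return row

# Agent platform: access is a standing grant of whole data sources.
# The scope names are invented for illustration; the point is their breadth.
AGENT_DATA_GRANTS = [
    "crm.customers.read_all",
    "analytics.events.read_all",       # click patterns, sessions
    "billing.purchases.read_all",      # purchase histories
    "messaging.internal_chats.read",   # even internal conversations
]
```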
Which raises the question: who decides what data agents can access? Who codes the access policy?
The answer: whoever controls the infrastructure.
Alexander uploaded his clients’ data. It now lives inside Google’s, Amazon’s, or Microsoft’s ecosystem. Yes, it’s encrypted. Yes, there are confidentiality agreements. But control isn’t encryption. Control is the right to decide who gets access. And that right belongs to the tech giants.
Who Holds the Lever
McKinsey’s “State of AI 2025” report notes that 62% of organizations are already using or testing AI agents, but only 23% have scaled them across even one business function. Coordination between agents in such systems almost always requires a central management platform. And that platform belongs to the same tech giant that owns the infrastructure.
Imagine a company running several agents — one managing marketing, one sales, one analytics. They need to interact, share information, coordinate. Who organizes that coordination? A central orchestrator sitting on Google’s, Amazon’s, or Microsoft’s servers.
This is no longer just data storage. This is management of business logic. The company thinks it’s making decisions. In reality, decisions are being made inside someone else’s servers. Hyperscalers don’t impose decisions directly. They simply… constrain options. If you want to build an agent that does X, and their platform only supports X, Y, and Z — you’re in their box.
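A toy version of such an orchestrator makes the point concrete. This is a minimal sketch, not any vendor’s product: the routing logic, the shared memory, and the catalogue of registered agents all live in one place, and in production that place is the platform operator’s servers.

```python
# Toy multi-agent orchestrator. Real platforms are far more elaborate, but the
# shape is the same: routing, shared memory, and the agent catalogue sit together.
from typing import Callable

class Orchestrator:
    def __init__(self):
        self.agents: dict[str, Callable[[dict], dict]] = {}
        self.shared_memory: list[dict] = []   # every agent's inputs and outputs pass through here

    def register(self, name: str, agent: Callable[[dict], dict]) -> None:
        self.agents[name] = agent

    def route(self, task: dict) -> dict:
        # The platform, not the customer, decides which agent handles what.
        name = "marketing" if task["kind"] == "campaign" else "analytics"
        result = self.agents[name](task)
        self.shared_memory.append({"task": task, "result": result})
        return result

orch = Orchestrator()
orch.register("marketing", lambda t: {"budget_split": {"search": 0.6, "social": 0.4}})
orch.register("analytics", lambda t: {"report": "weekly summary"})
print(orch.route({"kind": "campaign", "client": "acme"}))
```

Whoever writes route() decides which agent sees which data and which actions are possible at all.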
Political scientists call this “architectural power.” No directives, no orders, no threats — just geometry you’re inscribed within.
The Cost of Access: When Pricing Becomes Policy
There’s another layer: pricing.
In 2024–2025, the tech giants kept access to AI agents relatively cheap. It was an investment in market capture. But the economics of scale cut both ways: as more companies adopt agents, compute gets more expensive. And the tech giants start raising prices.
In late 2025, Google raised Vertex AI prices by 40%. Amazon raised AWS prices in some regions by 35%. Microsoft raised Azure prices by 25%. This isn’t coincidence — it’s coordination. Not direct; there are no meetings at the top. Just the shared understanding that they now have leverage.
When a company depends on one platform, price becomes policy. Amazon can say “prices are rising because compute is more expensive.” Alexander can either pay more or switch platforms. But switching is impossible — he needs the same data, the same history, the same integrations. He’s trapped. Not because someone designed a trap for him. Simply because of the architecture.
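The arithmetic of the trap is easy to run. A back-of-the-envelope sketch: only the 40% figure comes from above; the monthly spend and the migration cost are invented numbers for illustration.

```python
# Illustrative lock-in arithmetic; all figures except the 40% rise are made up.
monthly_spend = 4_000          # USD paid to the current platform per month
price_increase = 0.40          # the 40% rise mentioned above
migration_cost = 60_000        # one-off cost to re-integrate data, history, workflows

extra_per_year = monthly_spend * price_increase * 12
years_to_break_even = migration_cost / extra_per_year

print(f"Extra cost per year after the increase: ${extra_per_year:,.0f}")
print(f"Years before switching pays for itself: {years_to_break_even:.1f}")
# With these numbers: about $19,200 extra per year and roughly 3.1 years to
# break even -- long enough that most small agencies simply pay the new price.
```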
The Political Dimension: Whose Law Governs the Cloud
Here’s where it gets genuinely interesting — and genuinely troubling.
When your data lives on Google’s servers in Ohio or Amazon’s servers in Ireland, it’s subject to the laws of those jurisdictions. This isn’t just about privacy. In the US, lawful intercept means the government can compel a company to hand over data without notifying the user. In Europe, GDPR gives users the right to demand deletion. But if data has been used to train AI models, it has already been copied, processed, embedded into neural network weights. Deletion, in any practical sense, is impossible.
Now imagine Russia. The Russian government wants access to citizens’ data. Google declines (theoretically). But if you’ve built your business on Google, you’ve already chosen a jurisdiction of power. And that power is in the United States.
This isn’t news. It’s been known for a long time. But AI agents make it more explicit, more consequential for business. It’s no longer just about data storage. It’s about the fact that autonomous systems making your company’s decisions are under another government’s jurisdiction.
Stanford HAI political scientist Khaymi, studying cloud computing’s impact on sovereignty, called this “digital colonization.” He’s not entirely right: colonization at least preserved the appearance of independence for the colonies. Here the level of dependency is explicit, mathematical, unavoidable.
Why Local Alternatives Don’t Work
Maybe the solution is sovereign alternatives? Build your own agent on your own servers, with your own data, under your own control?
Technically possible. Yandex Cloud or Sber Cloud in Russia, Alibaba Cloud in China, local cloud services in many countries — companies are trying.
But here the scaling law kicks in. An AI agent trained on billions of examples from a global, multilingual dataset will, all else being equal, outperform one trained on millions of examples in a single language. Data is AI’s fuel. And the tech giants have access to more fuel.
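There is a quantitative intuition behind this. Empirical scaling-law studies find that model loss falls roughly as a power law in dataset size; the sketch below uses that functional form with invented constants, purely to show the shape of the curve.

```python
# Illustrative power-law scaling of loss with dataset size: L(D) = E + B / D**beta.
# The constants E, B, beta are invented for the example; real values are model-specific.
E, B, beta = 1.7, 400.0, 0.28

def loss(dataset_tokens: float) -> float:
    return E + B / dataset_tokens**beta

local_corpus = 5e9        # a single-language, single-market corpus
global_corpus = 5e12      # a hyperscaler-sized multilingual corpus

print(f"loss(local)  = {loss(local_corpus):.3f}")
print(f"loss(global) = {loss(global_corpus):.3f}")
# Whatever the exact constants, the curve bends only one way:
# more data, lower loss -- and the giants have more data.
```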
A Russian company wanting to build a competitive AI agent needs data from millions of users. Where to get it? Either collect it illegally (impossible at scale) or buy it — which means submitting to the company that owns it.
A closed loop of architectural dependency.
Calling It What It Is
Let’s be direct.
We’re at a moment when the internet’s architecture is transitioning from a “user-service” model (Web 1.0, Web 2.0) to an “agent-infrastructure” model — Web 3.0 not in the blockchain sense, but in the autonomy sense.
In the old model, control was at least visible. I use Facebook — Facebook owns my data. I use Gmail — Google owns my emails. Clear.
In the new model, control is visible only to insiders. Alexander’s company uses Salesforce. Salesforce uses Google Vertex AI. Vertex AI uses Alexander’s data to train a model; the model makes decisions that affect his business; those decisions generate new data that flows back to Google. It’s a cycle.
And in this cycle, power isn’t concentrated at one point. Power is concentrated in the architecture. In the fact that there are no other options. No choice. Only the illusion of choice — between Google, Amazon, and Microsoft.
What Happens Next
Gartner, recall, projects that by the end of 2026, 40% of enterprise applications will include AI agents. That means 40% of corporate capability will operate inside ecosystems controlled by three or four companies.
Can a politician who wants to ban a service actually ban AI agents on their territory? Yes. But a company can simply move its agents to servers in another country.
Can a company demand local storage to protect its data? Yes. But AI agents draw their power from training on global data. Local storage means your agent will be dumber, slower, and more expensive.
This isn’t conspiracy theory. It’s simply the result of the architecture we chose.
From Awareness to Action
I wrote this article not to frighten you. And not to convince you to abandon AI agents — that would be naive and counterproductive.
I wrote it so you understand: AI agents aren’t just technology. They’re an architecture of power. And like any architecture of power, it can be reimagined, redesigned, rebuilt.
But first you need to see it for what it is. Not as inevitability, not as a law of nature, but as a choice that was made by people and can be reconsidered by other people.
If you find yourself in Alexander’s situation, you can:
- First, understand which data is critical for your business and keep it local. Not all data needs to go to agents. Some data can remain an instrument, not an object (a minimal sketch of such a filter follows after this list).
- Second, demand transparency. Where is your data stored? Who has access? Which laws govern it? If a tech giant can’t answer clearly, that’s a red flag.
- Third, invest in alternatives. It may be expensive, it may be slower — but it’s an investment in independence. And independence, as we’ve seen, is politics. It means choosing who has power over you.
- Fourth, form coalitions. One company can’t compete with Google. But thousands of companies can. They can demand standards, portability, local alternatives — just as those demands became the norm for software in the 1990s with Linux and open source.
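For the first recommendation, the discipline can be made mechanical. Here is a minimal sketch of a local-first filter; the field names are examples, and the classification itself is a decision only you can make.

```python
# Minimal "local-first" filter: strip fields you have classified as local-only
# before anything is sent to an externally hosted agent. Field names are examples.
LOCAL_ONLY_FIELDS = {"full_name", "phone", "purchase_history", "contract_terms"}

def redact_for_agent(record: dict) -> dict:
    """Return a copy of the record with local-only fields removed."""
    return {k: v for k, v in record.items() if k not in LOCAL_ONLY_FIELDS}

client = {
    "client_id": "a-1042",
    "full_name": "…",                 # stays on your servers
    "segment": "small-retail",
    "purchase_history": ["…"],        # stays on your servers
    "monthly_budget": 2_500,
}
print(redact_for_agent(client))   # only client_id, segment, monthly_budget leave the building
```

The filter itself is trivial; the political act is deciding, and maintaining, the list.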
This isn’t utopia. It’s simply understanding that the architecture of power established today can be contested tomorrow.
And the most important takeaway: when we’re told a decision is technological, we should ask ourselves — isn’t it political?
Because in 2026, the boundary between technology and politics has been erased. AI agents are just a new name for an old game: whoever controls access controls the future.
Sources and Further Reading:
- Gartner (2025). “Predicts 2026: AI Agents Will Reshape Infrastructure & Operations”
- McKinsey (2025). “Seizing the Agentic AI Advantage”
- McKinsey (2025). “The State of AI: Global Survey 2025”
- Stanford HAI (2025). “AI Sovereignty’s Definitional Dilemma”
- European Commission (2025). “EU AI Act: Regulatory Framework for Artificial Intelligence”
- European Commission (2025). “Data Act”