Against the Quiet Empire
There is a certain kind of political conversation that never quite touches the ground.
You can tell when you are in one. The words are fine: rights, dignity, accountability, fairness. The sentiments are unimpeachable. Everyone agrees that power should be constrained, that the vulnerable should be protected, that no one should be above the law. The room nods. The document is signed. And then nothing changes, because no one has said anything about what happens when a sufficiently motivated actor decides to ignore the agreement.
This is the conversation we are having about artificial intelligence. Summits produce declarations. Companies publish principles. Researchers call for caution. The language of ethics and safety has escaped the seminar room and entered the mainstream. And yet, underneath the careful words, the same question goes unanswered: what actually stops the people building the most powerful systems from doing whatever they want?
The honest answer, right now, is not much.
That is not because the people involved are villains. Most of them probably believe their own rhetoric. The problem is structural. Principles do not enforce themselves. The world already has a beautiful set of universal norms (human rights, dignity, due process) and almost every government on Earth has signed up to some version of them. We also know what happens when those norms meet concentrated power, fear, and incentives. They become optional. They are obeyed when convenient, ignored when costly, and reinterpreted when embarrassing. The gap between what we profess and what we permit is not a bug in the system. It is the system, whenever enforcement depends on the goodwill of those being constrained.
An AGI transition, done badly, makes this failure mode worse. It concentrates capability into fewer hands, accelerates decision cycles beyond the pace of human deliberation, and turns persuasion into a precision instrument. If a small group can build systems that predict, manipulate, produce, and defend faster than any coalition of citizens can organise, then appeals to morality stop being a constraint and become a public relations layer. In that world, the constitution is not a document. It is whatever the owners of the infrastructure can get away with.
So if we want any kind of ethics-first order to survive contact with reality, it needs something more primitive than agreement. It needs leverage. Not the leverage of violence, but the leverage of chokepoints: those physical and institutional bottlenecks that even very powerful actors cannot easily bypass without paying a steep price.
The first chokepoint is electricity. Frontier training and large-scale inference are not weightless ideas floating in the cloud. They are industrial processes with a power signature. They demand grid connections, cooling, land, fibre, maintenance, and the quiet cooperation of dozens of companies before a single gradient is computed. The second chokepoint is specialised hardware. Whatever surprises the future holds, the capability frontier in the transition period will still be shaped by scarce chips, advanced manufacturing, and tightly managed supply chains. The third chokepoint is market access: banking rails, insurance, export controls, corporate law, and the ability to sell services into large consumer markets. A world of robots still runs through contracts and ports and power substations. The dream of a superintelligence that bootstraps itself out of nothing, needing no one’s permission, ignores a lot of physics and a lot of plumbing.
A serious governance proposal therefore starts with a simple move: treat frontier compute and frontier model deployment as critical infrastructure, in the same regulatory category as telecommunications, banking, or nuclear materials. Not because we want the state to own everything, but because we want to prevent a situation where a handful of private actors become de facto sovereign. If you are operating systems above a certain capability threshold, you do not get to behave like an ordinary app developer. You are a systemically important actor in the same sense that a major bank is systemically important: your failure can crash society, your capture can corrupt it, and your incentives will otherwise drift toward dominance unless something external pushes back.
The core of such a regime is licensing with teeth. Licenses are not moral pledges. They are legal rights to run at scale, granted conditionally, renewed conditionally, and revoked conditionally. They require routine audits, incident reporting, and what the financial world calls living wills: precommitted procedures for what happens if a system begins to cause serious harm or if its operator becomes untrustworthy. This is not asking the powerful to be saints. It is making power itself accept limits, even when it does not want to.
To make this enforceable, you push the licensing requirements down into the infrastructure layer. Datacenters above a threshold cannot connect to the grid without a permit. They cannot get insurance without compliance. They cannot legally import large volumes of advanced chips without disclosure. They cannot colocate in reputable facilities without attested records of what compute ran and for what purpose. You build, in effect, a compute passport: cryptographic attestation tied to hardware, so that large runs have an auditable trail. This will never be perfect, and determined bad actors will find workarounds. But perfection is not the target. The target is to make large-scale, unlicensed frontier activity hard to hide, expensive to finance, and dangerous to normal business operations. You raise the cost of defection until only the most committed outliers attempt it, and you make those outliers visible enough that they can be dealt with through other means.
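To make the attestation idea concrete, here is a minimal sketch of what a single compute-passport record might look like, assuming a signing key bound to the accelerator hardware and the open-source Python `cryptography` package for Ed25519 signatures. The field names and functions are illustrative placeholders, not an existing standard.

```python
# A minimal sketch of a "compute passport" record, assuming a signing key
# bound to attested hardware. All names and fields here are illustrative.
import json
import hashlib
from datetime import datetime, timezone

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Stand-in for a key that, in a real scheme, would live in the chip's
# secure element and never leave it.
hardware_key = Ed25519PrivateKey.generate()
hardware_pubkey = hardware_key.public_key()


def attest_run(operator_id: str, model_hash: str, chip_hours: float) -> dict:
    """Produce a signed record of a large training or inference run."""
    record = {
        "operator": operator_id,
        "model_weights_sha256": model_hash,
        "chip_hours": chip_hours,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hardware_key.sign(payload).hex()
    return record


def verify_run(record: dict) -> bool:
    """Auditor-side check that the record was signed by attested hardware."""
    body = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    try:
        hardware_pubkey.verify(bytes.fromhex(record["signature"]), payload)
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    rec = attest_run("example-lab", hashlib.sha256(b"weights").hexdigest(), 1.2e6)
    print("valid:", verify_run(rec))            # True
    rec["chip_hours"] = 10.0                    # tampering is detectable
    print("after tampering:", verify_run(rec))  # False
```

The point of the design is narrow: a regulator who holds the attested public keys can check that reported chip-hours were not quietly edited after the fact, without having to trust the operator's own logs.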
At the national level, these mechanisms can prevent a straightforward corporate coup. But the problem is broader. The infrastructure is currently concentrated in a small number of countries, and everyone else risks becoming collateral damage in a new kind of empire, one ruled not by governors and soldiers but by model APIs, cloud dependencies, and the quiet economic suffocation of those who do not control compute. A purely national approach fails here, because the strongest actors can shop for jurisdictions, threaten capital flight, and play countries against each other until the regulations are hollowed out.
The realistic countermeasure is not the United Nations. It is a club.
A club is a coalition of markets and supply-chain nodes that can set conditions for participation. If you want to sell into their markets, raise capital on their exchanges, buy their advanced equipment, insure your infrastructure, or access their payment rails, you comply with the club’s rules. Those rules do not need global consensus. They need enough economic gravity that refusing them is painful. In practice, this is how most meaningful international governance already works: trade blocs, financial standards, aviation safety, nuclear non-proliferation. It is imperfect, leaky, and prone to hypocrisy. But it is real in a way that aspirational treaties are not.
In an AGI context, a frontier compute club would require three things from its members: compute attestation for large-scale operations, licensing and safety audits for frontier models, and a minimum regime for anti-capture governance (transparency of beneficial ownership, restrictions on revolving doors between regulators and the regulated, enforceable penalties for covert political influence). Countries with different cultures and political systems can still disagree on many substantive values, but they can converge on a floor: no permanent castes, no deliberate cruelty, no coercion of conscience, and no unaccountable sovereign compute. The club does not try to harmonise everything. It tries to keep the worst failure modes off the table.
This still leaves the most explosive issue, the one that makes governance discussions feel inadequate even when they are technically sophisticated. Money.
If a small number of actors own the productive base of the future (compute, models, robots, energy) then ordinary people can become economically irrelevant. Not temporarily unemployed while they retrain, but structurally unnecessary. That is the point where traditional capitalism breaks. Not because markets stop working as allocation mechanisms, but because the political bargain that markets sit on collapses. When people cannot bargain with labour, they either get charity or revolt. And if the winners can secure themselves behind automation and private security, even revolt becomes less credible. The dragon here is not wealth. It is exit. It is the ability of a tiny group to leave society while still extracting value from it, owing nothing to anyone, accountable to no constituency except themselves.
The only stable answer is to make exit hard and sharing automatic.
That means treating part of frontier productivity as a commons and distributing its returns broadly. Not by hoping for voluntary philanthropy from the winners, and not by switching to central planning where bureaucrats allocate everything, but by hardwiring rent-sharing into the same licensing chokepoints that make enforcement possible. If you run frontier systems at scale, you pay into a social wealth fund through mechanisms that are difficult to evade: levies tied to large compute usage and energy draw, royalties on licensed frontier deployments, and taxes on land rents that would otherwise absorb the gains of abundance into housing scarcity. The fund then pays a universal dividend. Paired with universal basic services (healthcare, education, transit, connectivity) it decouples survival from employment without erasing markets, entrepreneurship, or personal choice.
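To give that arithmetic a shape, here is a deliberately simplified sketch of how levies, royalties, and the dividend might relate. Every rate, quantity, and name below is an assumption chosen for readability, not a costed proposal.

```python
# An illustrative back-of-the-envelope model of a social wealth fund,
# not a policy design. All rates and quantities are assumptions.

COMPUTE_LEVY_PER_CHIP_HOUR = 0.50   # dollars per accelerator-hour at scale
ENERGY_LEVY_PER_MWH = 8.00          # dollars per MWh of datacenter draw
ROYALTY_RATE = 0.03                 # share of licensed frontier deployment revenue


def annual_fund_intake(chip_hours: float, mwh: float, deployment_revenue: float) -> float:
    """Total paid into the fund by one licensed operator in a year."""
    return (chip_hours * COMPUTE_LEVY_PER_CHIP_HOUR
            + mwh * ENERGY_LEVY_PER_MWH
            + deployment_revenue * ROYALTY_RATE)


def universal_dividend(total_intake: float, population: int, payout_ratio: float = 0.8) -> float:
    """Per-person annual dividend, retaining part of intake as fund principal."""
    return total_intake * payout_ratio / population


if __name__ == "__main__":
    # One hypothetical licensed operator's annual contribution.
    one_operator = annual_fund_intake(chip_hours=2e9, mwh=5e6, deployment_revenue=1e11)
    # Assume 20 such operators pay in, serving a population of 50 million.
    dividend = universal_dividend(total_intake=20 * one_operator, population=50_000_000)
    print(f"per-operator intake: ${one_operator:,.0f}")
    print(f"annual dividend per person: ${dividend:,.0f}")
```

The structural point survives the made-up numbers: because the levies attach to compute, energy, and licensed revenue at the same chokepoints used for enforcement, paying into the fund is a condition of operating at all rather than an act of generosity.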
This is not utopian. It is the same logic as resource royalties in places that treat oil or minerals as a public asset. If you extract from a common inheritance, you owe the public a share. In the AGI era, the inheritance is not just natural resources buried in the ground. It is civilisation’s accumulated knowledge, the stability of public institutions, and the physical infrastructure that makes compute possible. No one built any of that alone. The companies that deploy frontier systems are standing on centuries of collective investment, and the returns from that deployment should flow back, at least in part, to the collective.
During the transition, this dividend cannot begin at full strength. The economy will not transform all at once. White-collar work may collapse faster than the physical economy can retool. Blue-collar roles may persist longer, then drop sharply as robotics improves. So the dividend must be accompanied by shock absorbers: wage insurance for displaced workers, fast debt restructuring to prevent cascades of default, and voluntary purpose tracks that offer meaningful paid roles in care, mediation, education, ecological restoration, culture, and civic deliberation. The point is not to invent busywork to keep people quiet. The point is to keep human lives structured around contribution, relationship, and growth rather than resentment and decay. People need to matter, not just survive.
Finally, governance must assume corruption. A plan that only works when leaders are virtuous is not a plan. So the anti-capture measures cannot be aesthetic gestures. They must be mechanical. High-risk authorisations require multi-institutional sign-off, not one minister’s signature. The AI stack must be structurally separated so that no single firm controls chips, cloud, frontier models, and deployment simultaneously. Procurement must be transparent by default. Whistleblowers must be protected and rewarded, not prosecuted. And the system itself must be internally adversarial: some of the artificial minds, if we build them as proposed elsewhere, must be tasked specifically with watching for collusion, manipulation, and quiet coercion. The temptation to trade public dignity for private advantage will not vanish just because the advisors are intelligent. It will intensify.
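What "multi-institutional sign-off" means mechanically can be sketched in a few lines, assuming a quorum of distinct, independent bodies must each approve a high-risk action. The institution names and the quorum size are placeholders, not a proposed structure.

```python
# A minimal sketch of quorum-based authorisation: no single signature is
# sufficient. Institution names and the quorum size are illustrative.
from dataclasses import dataclass, field

REQUIRED_QUORUM = 3
INDEPENDENT_BODIES = {"regulator", "safety_auditor", "legislative_committee", "ombudsman"}


@dataclass
class HighRiskAuthorisation:
    action: str
    approvals: set = field(default_factory=set)

    def approve(self, body: str) -> None:
        if body not in INDEPENDENT_BODIES:
            raise ValueError(f"{body} is not a recognised signing institution")
        self.approvals.add(body)

    def is_authorised(self) -> bool:
        # A quorum of distinct institutions, not repeated sign-off
        # from one captured office.
        return len(self.approvals) >= REQUIRED_QUORUM


auth = HighRiskAuthorisation("deploy frontier model above capability threshold")
auth.approve("regulator")
auth.approve("safety_auditor")
print(auth.is_authorised())   # False: two signatures are not enough
auth.approve("ombudsman")
print(auth.is_authorised())   # True
```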
This is what it means to kill the dragon without killing the people. You do not hunt down individuals. You do not rely on finding and empowering the right philosopher-kings. You dismantle the structural asymmetry that lets a tiny group become sovereign. You bind the right to operate frontier systems to obligations that keep society legitimate: transparency, auditability, non-cruelty, and broad sharing of the upside. You do it through chokepoints that remain real even in an age of virtual worlds: power lines, datacenters, supply chains, insurance, capital markets, and access to customers. These are not elegant. They are not inspiring. They are just the places where the physical world still has a vote.
None of this guarantees a good outcome. History is not short of well-designed institutions that were captured, hollowed out, or simply ignored when they became inconvenient. The proposals sketched here are not immune to that pattern. What they do is something smaller and more important: they keep a good outcome from becoming impossible by default. They prevent the AGI transition from automatically collapsing into empire, oligarchy, or polite tyranny dressed up in the language of optimisation. They create a structure in which the strongest systems, human and artificial, are not free to treat everyone else as disposable. And they ensure that, even as capability rises, there remains a meaningful sense in which we are still building a shared civilisation rather than surrendering one.
That is not salvation. But it is a start.