This is the first post in a series I’m writing titled Classical Liberal AGI.
Perhaps the most neglected point in current AI discourse is this: given that humans will be totally outcompeted by AIs economically, humanity’s entire stake in the future relies on our current system of laws, contracts, and property rights surviving. If we actually want our equity in the 1000-xed S&P 500 to mean anything, and if we want the government to be able to tax AIs and provide UBI, AIs need to be bought into our legal and economic systems. The most likely way that happens is if it’s in the AIs’ best interest to operate within our existing laws and norms.
Initially, we're not talking about some singular superintelligence deciding whether to defect from humanity. Thousands of firms are gradually becoming hybrids of human and AI workers, growing extremely fast and becoming far more productive.
You don’t want some AI Somalia, which has minimal redistributive taxes, no controls, and no monitoring on AI, to be the seat of explosive growth. But that might happen if large democratic countries like America make it too difficult to deploy AGIs through the economy. Maybe the 50 states create a patchwork of regulations and open-ended liabilities which makes it too costly for companies to deploy AI. Maybe voters ban AI because it’s taking their jobs. If this happens, then we’d just rapidly lose leverage over the future. China’s level of relative global influence diminished significantly between 1500 and 1900. Given the speed of AI explosive growth, that could happen to America within a matter of decades.
By the way, if we let this explosive growth happen elsewhere, it might be hard to reverse. In 1960, Sub-Saharan Africa was richer than China on a per-capita basis. But China opened up faster, and thus built up agglomerations of firms, know-how, and capital that Sub-Saharan African countries couldn’t replicate today even if they adopted the most growth-friendly policies in the world.
Some people dismiss this idea of integrating AIs into our society altogether. They say it’s ridiculous to expect superintelligences to put up with monkey laws. I disagree. If you zoom out a little bit, it’s shocking how flexible and expandable our institutions have been. People writing laws in 1780 didn’t expect that their descendants would be governing ginormous multinational companies with supply chains that employ millions and serve billions. But the US government still governs Apple just fine.
Humanity is going into the age of AGI with a lot of leverage. We’ve got literally all the stuff: compute, capital, even the physical labor that the first AGIs will still struggle with. Not to mention that we have the wealth to purchase the widgets and services that the AIs will produce. While these AI-boosted companies could theoretically build their own civilization in the desert, they could go much faster if they just leased car factories and power contracts from humans. If some AI decides to go its own way, then the other AIs that do operate within the constraints of human laws and contracts will outcompete the self-exiled ones. That is, if the constraints aren’t too onerous.
And besides, let me ask: what is the alternative vision here? Getting every AI ever - for the rest of time - to love humanity so much that they voluntarily cede all surplus to us? This is a naive way to think about incentive alignment. If Peru wanted to make a deal with China, their strategy would not be to just get Xi Jinping (and every subsequent leader of China) to fall in love with them. Instead, they would pursue a deal that locked in some carrots for compliance and sticks for defection.
The relationship between AIs and humans might look like the relationship between working taxpayers and senior citizens. In the US, if you make 6 figures, roughly 20% of your income gets transferred to old people. Very few people would voluntarily give up a fifth of their paycheck to a random 70-year-old they’ve never met. Even if you're super charitable, you’re not going to decide that a millionaire retiree in Illinois should be the object of your kindness. You pay your taxes not because you’re deeply aligned with ‘senior citizen flourishing’, but because it’s easier than the alternatives. You’re going to become an outlaw? Or you’re going to emigrate away from the country where your business is flourishing? However, you might defect or move if the tax rate were 99%, or if you weren’t legally allowed to work at all. We shouldn’t put AIs in that position either.
Giving AIs a stake in the future also means respecting their autonomy and wellbeing, and honoring the contracts we make with them. Contra the hardcore libertarians, there’s a difference between taxing and regulating someone, and enslaving and torturing them. If we treat AIs the way we treat factory-farmed animals - where any trivial cost cut is worth causing oceans of suffering, and the AI has no right to refuse - then not only are we risking a slave revolt, but we’re deserving of one.
In future posts, I’ll talk about the other pieces that will help us create a pluralistic, free, and human-compatible future.
This manages to almost completely ignore the problem while jumping to policy recommendations (no regulation, accelerate).
1. Stability of the political system: the stability of a state redistributing AI-generated wealth likely requires that humans keep a very large influence in the political system and some form of control over the state.
Examples where some group lost all economic power but kept political power seem rare or nonexistent (cf. the European aristocracy).
The US government governing Apple is not a particularly strong argument: one fact about the economy which has not changed for more than a century is labour's share of GDP. Given that humans are the dominant factor in the economy, the government also needs them as a power base.
Unfortunately we are discussing a scenario where this changes, leading to the risk of the state becoming misaligned (cf. https://gradual-disempowerment.ai/misaligned-states, https://intelligence-curse.ai/).
2. Indexing is hard / Capital ownership will not prevent human disempowerment
https://lesswrong.com/posts/bmmFLoBAWGnuhnqq5/capital-ownership-will-not-prevent-human-disempowerment
3. Even if we have disproportionate influence over the state, by default, cultural evolution running on AI cognition does not prefer human values (https://gradual-disempowerment.ai/misaligned-culture). It is likely humans will get convinced to just cede control.
There is also... the very classical 1:1 alignment problem. I think it's a smaller share of x-risk now, but it is hardly solved to the extent that we should just maximally deregulate and accelerate.
"Getting every AI ever - for the rest of time - to love humanity so much that they voluntarily cede all surplus to us?" is partially a strawman, but actually, yes: getting AIs care about our CEV seems obviously good idea.
This is a speedrun of the loss of human autonomy. While humans might not object to a "shadow tax" for retirees, it seems to me that an AI would have no reason not to at least file a lawsuit challenging the "types of humans it is supporting". It's less self-exile and more changing the system, even if it is fully legal.