Sidenote: how much would it cost someone to make a video this long using Veo 2?
I’m wondering how you chose octopi to represent the superintelligent AI: is that the closest biological counterpart that came to mind, or is it Cowenesque writing-for-the-AIs, who might themselves be quite happy to be portrayed as these intelligent animals?
Raising a few questions here:
1. Before we have AI CEOs, which company will be the first to offer a fully automated "drop-in remote worker" that comes with a significant liability policy if these "employees" don't perform as expected?
2. To what degree is the government likely to defend the rights of AI systems to own property? This seems like a potentially unpopular proposition.
Why do companies even need AI CEOs and AI employees? Without employees, there are no consumers. Without consumers, there are no companies.
Income flows to capital + redistribution, presumably
I’m a bit skeptical of the idea that for-profit companies would willingly redistribute their profits. But I’d love to be proven wrong.
It'd come via the government, which would have to reconfigure the tax system to make up the loss in income tax revenues. Maybe a land value tax? I think it's a hard but tractable policy problem.
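To make the revenue-replacement idea concrete, here is a back-of-envelope sketch of what annual land value tax rate would be needed to offset lost income tax receipts. All figures are hypothetical placeholders for illustration, not actual statistics.

```python
# Back-of-envelope: the flat annual LVT rate needed so that
# (rate * total land value) equals the income tax revenue being replaced.
# Both input figures below are purely illustrative assumptions.

def required_lvt_rate(lost_income_tax_revenue: float, total_land_value: float) -> float:
    """Annual land value tax rate that raises the lost revenue."""
    return lost_income_tax_revenue / total_land_value

# Hypothetical economy: $2 trillion in lost income tax revenue,
# $25 trillion in total assessed land value.
rate = required_lvt_rate(2.0e12, 25.0e12)
print(f"Required annual LVT rate: {rate:.1%}")  # prints "Required annual LVT rate: 8.0%"
```

The point of the sketch is only that the required rate scales directly with how much revenue vanishes and inversely with the assessed land base; whether any particular rate is politically tractable is a separate question.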
Wouldn't knowledge be more Hayekian than this? Being able to gather and process more data is insufficient. There is friction/compute required for the entire ecosystem to create “data”; there are broader active processes that the CEO doesn’t control.
Strikes me there are a lot of non-legible things that make it hard to copy a corporation, e.g. supplier agreements and brand.
Not sure the replicated successful firm takes over. Successful strategy differs across markets and over time: the degree of vertical integration, and so on. See ‘The Modern Firm’ by John Roberts. And if a firm could take over, it would lose the external discipline of competition, unless contestability was enough (?). If so, would our AI future be an efficient equilibrium path?