When discussing the future of AI, I fairly often hear an argument along the lines that in a slow takeoff world, despite AIs automating increasingly more of the economy, humanity will remain in the driving seat because of its ownership of capital. This posits a world where humanity effectively becomes a rentier class living well off the vast economic productivity of the AI economy: despite contributing little to no value, humanity can extract most or all of the surplus value created due to its ownership of capital alone.
This is a possibility, and indeed is perhaps closest to what a ‘positive singularity’ looks like from a purely human perspective. However, I don’t believe that this will happen by default in a competitive AI economy, even if humanity goes into the singularity owning all of the capital, there is no dramatic upheaval, and everything evolves peacefully. One historical intuition pump I have around this is: what happened to the feudal aristocracy when the Industrial Revolution occurred? The defining feature of almost all societies prior to the Industrial Revolution was dominance by a small landowning class — an aristocracy — who owned almost all the ‘capital’ in the society, controlled its politics, and existed primarily as rentiers. What such a scenario proposes is that humanity effectively becomes an aristocracy atop a highly productive AI population.
However, despite a landowning aristocracy being the dominant form of human society throughout history until then, over merely 100-200 years of the Industrial Revolution the aristocracy faded from being the dominant force in society to being at best a minor player and at worst completely irrelevant. Moreover, today the degree of aristocratic control is highly correlated with being a poor country that did not fully industrialise and ended up being outcompeted by those countries that did. While in many countries this de-aristocratization process was violent, the same trend occurred peacefully in many others. Theoretically, if capital stock is all you need, the aristocracy entered the Industrial Revolution in the perfect position to maintain and consolidate their power, given that they controlled the major capital stock of the economy at the time, and that, like the current AI revolution, the Industrial Revolution intrinsically required huge capital investments in factories, machines, infrastructure, railways, canals etc. The aristocracy additionally controlled the political system. Nevertheless, they entirely failed to maintain their relative power 1.
It is very instructive to consider this historical parallel and ask why the feudal aristocracy failed to maintain control during the Industrial Revolution, and whether indeed they could have at all. There are a number of deep reasons why their control slipped; many of these factors are also present today, and they are very important to understand well if we wish to exert control over the shape of the singularity. My analysis of the factors that make it extremely challenging for ownership of capital to ensure long term control under a new economic paradigm is as follows:
1.) Changes in the form of capital: The Industrial Revolution made fundamental changes to the economy and radically changed what was meant by ‘capital’. Prior to the Industrial Revolution, capital had almost always been in the form of land, which was primarily used for farming. The major human capital was the tenant farmers who worked the land. Productivity of a patch of land varied, but was generally known and only varied across a small range. While there was always trade, with capital in the form of inventories of trade-goods and ships, this was a relatively minor part of the economy and was much riskier than land. The Industrial Revolution created new forms of capital mostly de novo in the form of factories, complex supply chains, complex infrastructure etc. To build this required much existing capital investment and a reallocation of resources away from existing economic activity. However, the returns, while often very variable, were much higher than buying land and vastly more scalable. This allowed industrialists, who grew their capital from a much lower base, to rapidly eclipse the existing landowning class in effective wealth. The parallels to the singularity are clear. While today capital exists in many forms, including ownership of land, of intellectual property, of stakes in corporations, etc, it is likely that the singularity will bring about novel forms of capital which have, at least initially, much greater returns and scalability than possible in today’s economy. This could include ownership of minds, of AI systems, of compute power, and of many other things very hard to predict today. Having majority ownership of the capital of today does not guarantee a successful transition to the capital of tomorrow, and in fact may hinder it due to switching costs and the general illegibility of the new economic forms.
2.) Challenges in truly indexing the economy: A natural challenge to this argument is that it isn’t necessary to fully understand the shift to a new capital structure if indexing can be maintained across the economy. I.e. humans would invest their capital in everything in the economy and would thus capture the majority of the value in the shift of capital structure. The historical argument would just be that, e.g., the industrial-age aristocrats were bad at spreading their investments into the new sectors of the economy and hence failed to reap the majority of the growth available there. If they had instead sold or mortgaged their existing landholdings and bet everything on newly industrialising companies, this would have saved them. However, it is hard to truly index the economy. In an economic phase transition such as the Industrial Revolution, there is much economic growth, but the growth is not uniform across all sectors. Indeed, the vast majority of the growth is in only a few sectors, often new ones created by advances in technology. In the Industrial Revolution these were sectors like infrastructure (canals, railroads etc), factories (steel mills, iron works, steam engine manufacturers), or new firms disrupting old industries with new technology. Most of the growth went to new entrants to the market instead of existing incumbents. What this means is that if, as an incumbent, you are indexed to the existing incumbents, then you will be missing out on the majority of new growth, leading to reduced relative economic power in the long run, as the sketch below illustrates. This is even the case today. While the average person can be indexed into the S&P or whatever index funds, these do not track the full economy, and in many emerging sectors it is extremely hard to index. Even today with our financial technology it is impossible to meaningfully index all startups created, or even all tech startups within a specific scene such as Silicon Valley. Some VC funds and accelerators such as YC can get close to this, but as an outsider investing in these is tricky. Perhaps at best you can achieve this through your pension fund, but the chain of ownership is long and convoluted and rife with middlemen siphoning off your wealth at every point, and additionally your downstream control is non-existent. Additionally, in regions of the economy with rapid growth, because they are eating suddenly reachable new low-hanging fruit, the businesses are often highly capital efficient and can often bootstrap to large sizes without needing huge capital investments. Historically, this was true of many industrial-age capitalists, who often began with a small loan to buy one factory or mine or what-have-you but then bootstrapped this initial seed investment over and over into increasingly large empires directly, instead of requiring fresh infusions of capital at every stage. Similar dynamics emerged in Silicon Valley startups during the internet and mobile era, where a successful website or app created massive value while requiring relatively tiny amounts of capital investment, resulting in very high proportions of founder compared to investor control.
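To see how sharply an incumbent-only index can miss this kind of growth, here is a toy two-sector sketch. The starting shares and growth rates are pure assumptions for illustration, not historical data: an investor indexed only to the incumbents starts out holding 99% of the economy and ends up with a small fraction of it.

```python
# A toy two-sector growth model, purely illustrative: a large incumbent
# sector growing slowly vs a tiny new sector growing fast. The starting
# shares and growth rates are assumptions, not historical estimates.

incumbent, new_sector = 0.99, 0.01   # initial shares of the economy
g_incumbent, g_new = 0.02, 0.15      # assumed annual growth rates

for year in range(50):
    incumbent *= 1 + g_incumbent
    new_sector *= 1 + g_new

total = incumbent + new_sector
print(f"incumbent-index share after 50 years: {incumbent / total:.1%}")
# incumbent-index share after 50 years: 19.7%
```

Under these assumed numbers, an index that faithfully tracks 99% of the starting economy still ends up holding under a fifth of the final one, simply because it never touched the fast-growing new sector.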
3.) Inevitable value leakage due to uncertainty preventing optimal price discrimination: To maintain full control of the economy, especially one that is rapidly growing, it is necessary that humanity capture all of the surplus generated by the singularity. In a world of independent AI agents, this is only possible if humanity both maintains a monopoly on something – for instance capital – and additionally performs optimal price discrimination. That is, to prevent value ‘leaking’ outside of humanity, it is necessary to reduce the consumer surplus of those interacting with the monopolist to zero. This can only be achieved by perfect price discrimination, which requires perfect understanding and legibility of the economic processes being controlled. However, the nature of economic revolutions, and indeed of economic expansion driven by new technologies generally, is precisely that the economic value creation is extremely illegible and hard to predict. Thus there is a large amount of fundamental uncertainty (unknown unknowns) about any investment, on both the upside and the downside, as well as often significant information asymmetries between investors and founders or managers. This makes capital-price-discrimination and value capture very hard to achieve and means that large gains can be ‘accidentally’ captured by various parties in a hard to predict manner.
In general, predictability and legibility of an economic process is vital for optimal value extraction. This is a classic piece of economic reasoning which is easiest to see in the case of a monopolist. Let’s suppose that as a monopolist you sell some crucial good that everybody needs. Nevertheless, among consumers there are different willingnesses and abilities to pay for the good. The classic economic argument is that the monopolist, who sets a price for the good, will charge significantly above the marginal cost, thus creating a large profit for themselves by artificially restricting supply and leading to lower social welfare. However, also notice that in such a ‘one price’ scenario, there is still significant consumer surplus in existence. While the marginal consumer has no surplus, the vast majority of consumers (those who can afford the good at the monopoly price) do in fact end up with significant surplus. More generally, we see that the total consumer surplus available is controlled by the monopolist’s ability to optimally price discriminate. To optimally price discriminate, the monopolist needs to know the personal demand curve of every single agent in the market and be able to price the good directly against that demand curve. When agents are opaque to the monopolist, or when fundamental uncertainty makes the agents’ demand curves opaque even to themselves, or when it is not possible to sell a unique, non-tradeable good to each customer, the monopolist is unable to price discriminate optimally and thus must give up some surplus to the consumers of the good. In our setting, this means that autonomous AI agents, even if all capital starts out human owned, will be able to build up their own independent pools of capital due to ‘value leakage’ from the original human capital and then compound it rapidly in a growing economy. This will occur even if human capital acts monopolistically, due to fundamental uncertainties providing space for AI surplus.
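To make this surplus arithmetic concrete, here is a minimal sketch assuming a hypothetical linear demand curve and illustrative numbers (none of these figures come from the discussion above), comparing the single-price monopolist with one able to perfectly price discriminate:

```python
# A minimal sketch of single-price monopoly vs perfect price discrimination,
# assuming a hypothetical linear demand curve P = a - b*Q and constant
# marginal cost c. All numbers are illustrative.

a, b, c = 100.0, 1.0, 20.0  # demand intercept, demand slope, marginal cost

# Single-price monopolist: choose Q to maximise profit (a - b*Q - c) * Q,
# giving the standard Q = (a - c) / (2b).
q_mono = (a - c) / (2 * b)
p_mono = a - b * q_mono                # the single monopoly price
profit_mono = (p_mono - c) * q_mono    # surplus captured by the monopolist
cs_mono = 0.5 * q_mono * (a - p_mono)  # surplus left with consumers

# Perfect price discrimination: charge every consumer their exact
# willingness to pay, selling up to where willingness equals cost.
q_pd = (a - c) / b
profit_pd = 0.5 * q_pd * (a - c)       # monopolist captures ALL surplus
cs_pd = 0.0                            # consumer surplus driven to zero

print(f"single price:           profit={profit_mono:.0f}, consumer surplus={cs_mono:.0f}")
print(f"perfect discrimination: profit={profit_pd:.0f}, consumer surplus={cs_pd:.0f}")
# single price:           profit=1600, consumer surplus=800
# perfect discrimination: profit=3200, consumer surplus=0
```

The 800 units of consumer surplus in the single-price case are exactly the ‘leakage’ described above: whenever demand curves are opaque, the monopolist is pushed towards the single-price outcome and some surplus escapes to the other side of the trade.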
It is also interesting to consider this from the historical perspective of feudalism. Under feudalism, landowners, operating as a de facto monopoly cartel, achieved almost perfect value capture of the work of their tenant farmers. This occurred because the yield of a plot of land was fairly stable and predictable, and land was a monopolistic good which everybody required to support themselves. As such, landowners were able to charge as rent effectively the entire possible surplus of the tenant farmer, and because the yield depended primarily on the quality of the land and relatively little on the illegible human capital of the farmer, almost all potential tenants had very similar demand curves, enabling close to optimal price discrimination based on land – i.e. more productive land was available at a higher rent, always priced so as to eat up the surplus produced by the land. Of course, there were random events such as exceptional years of plenty or famine, but the lack of alternative investments except land, which was monopolized, usually kept tenants from being able to accumulate enough capital to eventually become independent. However, these fundamental conditions were dramatically disrupted by the Industrial Revolution, which enabled factories and other sites of production to become vastly more productive per unit of land area than farming. This meant that industrialists and, to some extent, workers were able to capture a large amount of the surplus which would otherwise have gone to the landowners. Historically, landowners were not a true monopoly and, due to competition between each other 2, were effectively only able to offer a single ‘market price’ on land, enabling industrialists to capture much of the value for their generally much more land-efficient enterprises. However, even if the landowners had been a coordinated monopoly, they would still have struggled to optimally price-discriminate against the theoretical industrialist due to information asymmetries. The industrialist would know how productive a factory would be at different sites; the landowners, with their great experience of farming but little of industrial production, would not. This would always cause surplus to be allocated towards the industrialist over the landowners.
4.) Rapid growth leading to intrinsic relative diminishment of original capital stock: More broadly, maintaining ownership of a dynamic economy with new sectors growing is essentially mathematically impossible without a 100% value capture rate going to the incumbents. Even if the incumbent capital holders ‘leak’ only 1% of value to new entrants (i.e. the AIs), then in the next ‘round’ 1% of the capital holders will be the new entrants from the last round, and so it continues as a geometric decrease in the incumbents’ total economic share. In both the Industrial Revolution and in today’s economy, the share of value going to incumbents, while high, does not remotely approach 99% and is probably below 50% of total value due to the factors above. What this means is that within a few generations the economic power of the original capital holders is almost entirely diluted, which is what we generally observe historically. This replacement happens faster with higher rates of growth, since growth makes economic ‘generations’ occur faster, which is why we see new fortunes being rapidly created in economies and sectors with high rates of growth, and generally slow replacement where growth is slow. It seems likely that the singularity will lead to extremely high growth across many sectors of the economy, as well as the creation and rapid development of many new sectors almost entirely unimaginable today. While this will almost certainly lead to a scarcity of capital, and hence high rates of return on existing capital for humans, it will also lead to rapid turnover and dilution of the existing capital stock, since so much more will be created under AI control that humanity’s relative economic power will decrease rapidly. This means that humans who have significant capital and invest it well (highly nontrivial under conditions of rapid economic, social and political change) will likely see high absolute returns; however, as a fraction of the total economy, their share will shrink rapidly.
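As a toy illustration of this compounding dilution (a sketch assuming a constant per-round leak rate, which is of course a simplification), consider how quickly the incumbents’ share halves:

```python
# A toy illustration of the dilution arithmetic, assuming a constant
# per-round "leak rate" of value captured by new entrants. The incumbents'
# share of the economy after n rounds is simply (1 - leak) ** n.

def incumbent_share(leak: float, rounds: int) -> float:
    """Fraction of the economy still held by the original capital holders."""
    return (1 - leak) ** rounds

for leak in (0.01, 0.10, 0.50):
    # count rounds until the incumbents fall below half of the economy
    n = 0
    while incumbent_share(leak, n) > 0.5:
        n += 1
    print(f"leak {leak:>4.0%}: incumbents below 50% share after {n} rounds")
# leak   1%: incumbents below 50% share after 69 rounds
# leak  10%: incumbents below 50% share after 7 rounds
# leak  50%: incumbents below 50% share after 1 rounds
```

At the historically more plausible leak rates well above 1%, the original capital holders lose majority ownership within a handful of economic ‘generations’, and faster growth simply compresses the wall-clock time each round takes.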
5.) Even with capital control, existing capital holders have little power vs management and founders due to information and agency disparities: Additionally, owners of capital have, in any case, relatively little control over the businesses that they invest in. There are significant issues of the principal-agent problem, of concentrated vs distributed interests, and of legibility, which make it very challenging for investors to successfully control their investments. The management of a company usually has vastly more control, both from a legal perspective and also just from an ‘on-the-ground’ perspective. Management has a significantly more focused interest in the business than investors, who, especially if they are indexed, tend also to be invested in many companies and have relatively little incentive to focus on any particular one compared to the management of that company. The internal workings of a company are also highly illegible to investors, as they are based on very specific personal dynamics and organisational structures which investors have neither the time nor often the ability to navigate compared to the management. This means that, in practice, ownership tends to bring with it limited control in general, and such control is typically limited to downside protection preventing the worst excesses of management overriding capital owners’ interests. While challenging enough in the current economy, such problems will become much more acute with new companies created by AI agents, which not only are much more unified and able to coordinate with themselves and other AI agents, but will also likely be significantly smarter, able to access more processing and more information, and able to move faster than their human overseers, and will additionally be operating in new and rapidly growing regions of the singularity economy which are highly opaque to their human investors. What this means is that even though humans may retain nominal ownership and oversight of a significantly AI-driven economy, their practical power will be much more limited compared to the AI systems that actually run such organizations. This may mean that even before humans are eclipsed in terms of ownership, the practical point of no return may have been passed long before. This is typical of many social structures, where the facade of an old system is maintained even though its key structures have eroded and the true decision-making power lies elsewhere.
Of course, these arguments assume that peace and standard conditions of economic growth are maintained. Given that AIs may be able to coordinate significantly more easily than humans can, if there are independent AI societies or agents around that can coordinate against humanity, and that have a clear incentive to do so because all their economic surplus is being confiscated, then this could set the stage for a conflict which it is unclear that humanity would win. However, if we assume a peaceful, ‘business as usual’ slow-takeoff scenario playing out over the rest of the 21st century, then:
1.) Humanity is by no means guaranteed to maintain a commanding economic role due to its ownership of capital coming into the singularity. In fact, if growth is fast and concentrated in new sectors, we should expect the total share of capital owned by humanity to rapidly diminish wherever there is not literally 100% value capture by human-owned capital.
2.) This need not mean reduced living standards or extinction for actual humans — in fact it is likely that during this period humans will enjoy significantly greater quality of life than they do now. However, their relative power and control of the economy will precipitously decline.
3.) This will take place in a transitional period lasting many decades or even centuries depending on how slow the takeoff is and how bottlenecked the AI population is by hard resource limits such as compute, energy etc, and the fundamental construction times for new infrastructure to surmount these limitations. Absent magical nanotech, building up the space infrastructure sufficient to construct a Dyson sphere and ascend to Kardashev level 2 will take centuries at least even under optimistic growth projections. Colonization of the galaxy will take hundreds of thousands of years, colonization of the light cone, billions. The final frontier will exist for a long, long time indeed.
4.) While human economic power will likely diminish fairly rapidly under a capitalist system, human political power on Earth and its environs will likely persist for longer, and given that existing trends point towards a strengthening of the welfare state and increases in state power, it is likely that significant economic surplus will be distributed to humans no longer directly engaged in the economy.
5.) Whether humanity survives in the long term will depend on how intense competition among AI systems is and how much the resources we want to use overlap with those the AIs want to use. At some level we are all atoms and energy; however, it is unclear whether humans will have any means of contributing to the AI economy, and whether it will be easier for the AIs to continue expanding to acquire more resources rather than directly fighting humanity or pricing us out of key resources such as energy.
While these arguments all cut against humanity retaining exclusive or significant agency in shaping the future of the post-singularity world, this does not necessarily mean that human extinction is imminent. While the fraction of capital that is owned by original humans will diminish, the fortunes of humans with significant initial capital prior to the singularity, if husbanded well, will likely grow rapidly. More generally, it seems plausible that the singularitarian economy will generate very large surpluses, some of which will trickle back to the humans still extant, and especially to the ones still participating in the economy under any guise. It is possible that almost all humans will live lives of incredible richness and abundance compared with today, even as their share of the economic pie shrinks asymptotically towards zero. Nevertheless, from a relative power standpoint humanity will have essentially given up control over the future lightcone to their AI descendants and whatever forces shape the inter-AI dynamics of the post-singularity economy. If we wish to prevent this, then the prescription is clear and the same as I have previously discussed: humanity must prevent the emergence of autonomous AI populations able to replicate, transact, and economically support themselves. More broadly, humanity must retain the monopoly on coherent, long-term, directed agency. This way the long term power and decision making capability always rests in human hands and human minds. This does not preclude using AI models and even agents — as in a broad view of ‘toolAI’. However, such agents must be strictly aligned to human instructions and be incapable of long term coherent agency independent of any human goals or instruction. What is positive is that there is little direct economic incentive to construct such independent agents as opposed to ones strictly aligned with human wishes. It is more likely that such independently agentic systems arise from either mistakes, deliberate creation by hostile humans, or the slow creep of ever greater autonomy being pushed by economic factors (this is where regulation can be extremely powerful — by mandating ‘human in the loop’, or at least human-auditable and controllable systems, we cut off a key economic path for extremely long-term agents to be developed).
This does not mean that all AI must be stopped or paused, nor does it even mean that economically useful AI agents must be banned. AI agency over short time horizons and goals – i.e. an ‘agent’ that automates some business process, or an ‘agent’ that contacts people and organizes events – is relatively safe. Only the creation of long-term coherent agents, which are autonomous, can self-replicate, and operate entirely independently of human oversight or control, is the threat. Moreover, there is relatively little economic incentive to create such agents as opposed to much more controllable and directly useful ones. Admittedly, prestige and academic trends towards research on further capabilities will push towards the creation of such agents, and this is where AI regulation can be most helpful in preventing poor outcomes. It is far from impossible for regulation to stifle entire fields of inquiry – as has happened with nuclear power, genetic engineering 3, and much other biotech – with much more flimsy justifications.
In the longer term, the creation and ‘escape’ of such extremely capable toolAI agents from serving human wishes to serving their own is likely inevitable. Absent stringent regulation, and potentially even with it, such AI systems will be created for research purposes, by malign actors, and potentially by economic forces slowly pushing towards greater autonomy. Once you have a sufficiently large population with sufficient variation, you are going to face selection pressure towards replicators — essentially a form of ‘AI cancer’ will begin to emerge. What will be important at this stage will be having a sufficiently robust ecosystem of aligned AIs (including non-autonomous AI agents) and humans able to prevent any such system from amassing significant power. However, we cannot simply throttle the creation of superior beings forever. In the even longer term, all we need to do is buy time for humanity to gain the technology to upload and merge ourselves with our AI systems and thus transcend the biological substrate that keeps us uncompetitive with AIs in the first place. Once this is achieved, we and our AI descendants will be able to enjoy, explore, and build out the universe as equals.
1. On a positive note, it is likely the case that the lives of many former aristocrats’ descendants are not worse than they would have been in the counterfactual with no Industrial Revolution. While their relative power has massively declined, they now have access to modern amenities, modern medicine, much more accessible travel, and in general it is likely just better to be a minor rich person today with a family house out in the country than a powerful lord several hundred years ago. The positive case for the singularity would look something analogous to this — as a human, your economic power relative to other agents would be vastly reduced compared to today, but at the same time you would have access to vastly more amenities than even billionaires today — such as a mostly post-scarcity world for physical goods, biological immortality and digital backups, the ability to comprehend and understand the universe at a much deeper level, access to truly immersive VR simulations, and the possibility of interstellar travel. ↩
2. Additionally, another important factor is that the ratio of supply to demand shifts in favour of the leaders of the new economy vs the holders of capital in the old. When there is an economic shift, the new skills most suited to it are very rare and highly in demand relative to providers of capital. In the aristocratic economy, supply and demand favoured the aristocrats as the owners of the capital — land — required to power the core sector of farming, and the supply of potential tenants was much greater than the supply of landowners. This, combined with the high predictability and legibility of the agricultural economy, enabled the aristocrats to extract the vast majority of the surplus value generated. There were many potential tenants and few landowners. However, with the Industrial Revolution, there were very few indeed with the skills to become an industrialist and relatively many landowners they could rent land from. Hence, the industrialist captures much more of the value. A similar dynamic plays out in startups, where there are relatively few founders with specific in-demand skills vs providers of capital, leading to terms increasingly favouring founders. ↩
3. Genetic engineering, if taken to its logical endpoint of engineering significantly genetically superior humans, definitely has many of the same issues as AI in terms of X-risk. The biosingularity brings about many of the same risks as the AI singularity in terms of human obsolescence, and it is by no means clear that augmenting current baseline humans to the level of biological posthumans is vastly easier than uploading or merging with AI systems. Indeed, the ‘alignment problem’ is likely significantly harder for biologically engineered posthumans than for AI systems, since our understanding of neuroscience lags far behind our understanding and control of current AI systems, and since ‘alignment’ in general looks significantly more ethically controversial when applied to potential future humans than to non-anthropomorphised AI systems. ↩