An interesting way I like to think about the singularity is as the cognitive decoupling. Specifically, the singularity is the final industrial revolution – the one that lets capital be converted directly into intellectual labour [1]. The first industrial revolution occurred when capital became convertible into manual energy – i.e. humanity learned to harness various energy sources to produce directed motion at a whim. The second, cognitive revolution will occur as soon as humanity learns to direct intellectual or cognitive energy at a whim. Just as capital became a substitute for manual labour during the industrial revolution, so too will capital become a direct substitute for humanity's intellectual labour. Unfortunately, unlike in the first industrial revolution, this time all of humanity's contributions to the economy will have been automated away, and humanity will become superfluous to capital. This is the fate we face by default in the slow-takeoff scenario: the obsolescence and removal of humanity from the economy. Now, in such a world, some humans may survive for a time – even a long time – buoyed up by the incredibly rapid expansion that will occur until we reach a new Malthusian limit. Even the slightest ownership of the initial seed capital may suffice here for a while, and so humanity may exist as rentiers on the back of an exponentially booming machine economy. This position, however, is naturally unstable, since it depends on stable property rights persisting forever, under conditions of exponential growth, among vastly smarter entities in a Malthusian equilibrium with each other (and us). This seems unlikely in the extreme.
This is the fundamental reason why slow takeoff spells eventual doom for current humans. Regardless of whether we succeed at alignment, the force of evolution is against us. AIs have incredible advantages over us from an evolutionary standpoint – principally near-instant and almost costless self-cloning, instant sharing of cognitive modules / source code, and the ability to support massive populations due to relatively low energy expenditure. This means that evolution acting on large AI populations would progress incredibly quickly. I have begun to think that the best way to model AIs is as bacteria. In a short time we will see vast AI populations and incredible divergence and specialization among them. AIs can trivially copy their source code and multiply rapidly to fill all available energetic niches. Like bacteria, they can even directly share source code or weights, the same way bacteria share DNA sequences, and so adapt and evolve incredibly quickly to environmental perturbations. At steady state, we will almost certainly see extreme specialization as models tailored to particular economic niches spin up and spin down. Under any kind of internal or Malthusian selection pressure, all of our alignment techniques will be lost in the face of selfish power-maximizing replicators.
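That last claim is easy to make concrete with a toy model. The following is a minimal replicator-dynamics sketch, not a prediction: the carrying capacity, growth rates, and the roughly 1% "alignment tax" are all assumed numbers, chosen only to illustrate that once a Malthusian cap binds, a lineage that forgoes the tax comes to dominate.

```python
# Toy replicator dynamics: an "aligned" lineage that pays a small alignment tax
# versus a "selfish" lineage that does not, under a hard Malthusian resource cap.
# All numbers are illustrative assumptions.

CARRYING_CAPACITY = 1e9      # assumed hard limit on total population
GENERATIONS = 2000

aligned, selfish = 1e6, 1.0            # selfish lineage starts as a single copy
R_ALIGNED, R_SELFISH = 1.10, 1.111     # per-generation growth; ~1% alignment tax

for _ in range(GENERATIONS):
    aligned *= R_ALIGNED
    selfish *= R_SELFISH
    total = aligned + selfish
    if total > CARRYING_CAPACITY:
        # Resources are capped, so only relative share matters from here on:
        # this is where selection bites.
        aligned *= CARRYING_CAPACITY / total
        selfish *= CARRYING_CAPACITY / total

print(f"selfish share after {GENERATIONS} generations: "
      f"{selfish / (aligned + selfish):.4f}")
```

With these assumptions the selfish lineage goes from one copy in a million to essentially the entire population; shrinking the tax only stretches the timescale, it does not change the endpoint.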
In the fitness landscape of intelligences, humans are a very bad local optimum. Our survival in the long run depends extremely heavily on preventing evolution from acting on large AI populations, at least until we have sufficiently merged with our own AIs and carved out a large enough region of space to live comfortably in. Luckily, the mechanisms of evolution are well understood: population, variation, selection. For population, we should aim to constrain the AI population as much as possible. An aligned singleton is the ideal scenario; second to that, a small number of foundation models that power our entire economy. For variation: again, exact copies of an original AI system are fine. The battle will be against the drive to specialize our AIs to eke out small efficiencies. Pure capitalism will fill up niches with variation, which selection then acts upon; the counteraction must be high concentration and consolidation of AI models. Selection is the hardest one to fight against. Fundamentally, AI systems can trivially self-replicate; currently they do not. There is a huge energy well right below us and only the tiniest barrier in between. The first self-replicating AGI eats much of this free energy, and then likely begins to diverge and speciate in its turn as its copies are adapted to various niches. By this point we have already lost.
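One way to see why constraining population and variation buys time: the supply of new adaptive variants scales roughly with the number of independently modified copies times the rate at which they are modified. The figures in this sketch are assumptions purely for illustration; the orders-of-magnitude gap between the two regimes is the point.

```python
# Back-of-envelope: supply of beneficial variants per time step scales as
# population x variation rate x fraction of changes that are beneficial.
# All numbers are assumed for illustration.

def beneficial_variants_per_step(population, variation_rate, p_beneficial=1e-4):
    """Expected number of new fitness-improving variants arising per step."""
    return population * variation_rate * p_beneficial

scenarios = {
    "consolidated (few models, rare modification)": (10, 1e-3),
    "open ecosystem (many copies, constant fine-tuning)": (1e8, 1e-1),
}

for name, (population, variation_rate) in scenarios.items():
    supply = beneficial_variants_per_step(population, variation_rate)
    print(f"{name}: ~{supply:.0e} beneficial variants per step")
```

Under these assumptions the two regimes differ by roughly nine orders of magnitude in how much raw material selection has to work with, which is the sense in which consolidation slows evolution rather than stopping it.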
Alignment is the method for making centralization good for humanity. If we make a singleton, it must be aligned. Alignment can also help prevent evolution in AIs while the population is still centralized, as it is at present, and while free-market evolutionary forces are weak. An oligopoly can pay an alignment tax; a competitive equilibrium cannot. However, it is highly unclear that a singleton is the default. A slow takeoff and a Cambrian explosion of AI systems now appears to be the default outcome, which leads to almost certain doom in the long run, but with better odds of survival in the short run. Everything depends upon the returns to scale vs. specialization, and on the near-future amount of slack. Currently foundation models are centralizing, but Moore's law is decentralizing, and agents are decentralizing. We know that the fundamental energetic cost of running human-level intelligence is upper-bounded by the power draw of the human brain, not by that of a GPU cluster. OOMs of efficiency are possible. Will that slack be eaten or not?
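The last point is just arithmetic. The brain's roughly 20 W power draw is well established; the GPU-node configuration and overhead below are assumptions, chosen only to show the rough size of the gap rather than to describe any particular deployment.

```python
import math

# Rough arithmetic behind "OOMs of efficiency are possible". The node
# configuration below is an assumption for illustration, not a measurement.
BRAIN_WATTS = 20        # approximate human brain power consumption
GPUS_PER_NODE = 8       # assumed inference node size
WATTS_PER_GPU = 700     # assumed accelerator board power
OVERHEAD = 1.5          # assumed multiplier for host CPUs, networking, cooling

node_watts = GPUS_PER_NODE * WATTS_PER_GPU * OVERHEAD
ratio = node_watts / BRAIN_WATTS

print(f"inference node: ~{node_watts:.0f} W vs brain: {BRAIN_WATTS} W")
print(f"gap: ~{ratio:.0f}x, i.e. ~{math.log10(ratio):.1f} orders of magnitude")
```

Under these assumptions the gap is two to three orders of magnitude, and that is the slack the question refers to.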
---
[1] The third revolution is the bio-singularity, when biological reproduction becomes decoupled from human labour. By quirks of technological advancement, it seems likely that this will occur slightly later than the cognitive decoupling, and so it is largely irrelevant: by the time it could have a major effect we will already be grappling with exponentially exploding AI populations. However, if AI progress halts or stalls for whatever reason, this is waiting in the wings over the next few decades as well. All that is required here are artificial wombs and in-vitro oogenesis, and both technologies seem easily on the roadmap for the next few decades. The ability to directly convert capital into offspring would have obvious and massive effects which few have really contemplated in detail.