Epistemic note: This is the beginning of a planned series of posts trying to think about what a highly multi-polar post-AGI world would look like and to what extent humanity or human values could survive in such a world depending on our degree of alignment success. This is all highly...
[Read More]
Two Mechanisms of Decadence
Epistemic status: Obviously speculative sociology. Probably pretty obvious to some, but I’m just trying to crystallize these ideas from my mind onto paper.
[Read More]
Intellectual Progress in 2025
It is now 2026 and we are halfway through the decade of the 2020s. If we think back to the halcyon days of January 2020, certainly a lot has happened, especially in AI. The first half of the decade has essentially been the discovery and then incredible exploitation of...
[Read More]
Initial Quick Thoughts on Singular Learning Theory
Epistemic Status: Just some quick thoughts written without a super deep knowledge of SLT so caveat emptor.
[Read More]
The Biosingularity Alignment Problem Seems Harder than AI Alignment
One alternative to the AI-driven singularity that is sometimes proposed is effectively the biosingularity, specifically focused on human intelligence augmentation. The idea here is that we first create what is effectively a successor species of highly enhanced humans and then these transhumans are better placed to solve the alignment problem....
[Read More]