The alignment problem is not new. We have been grappling with the core of alignment – making an agent optimize for the beliefs and values of another – for the entirety of human history. Any time anyone tries to get multiple people to work together in a coherent way...
[Read More]
How to evolve a brain
Epistemic status: This is mostly pure speculation, although grounded in many years of studying neuroscience and AI. Almost certainly, much of this picture will be wrong in the details, although hopefully roughly correct ‘in spirit’.
[Read More]
The Scale of the Brain vs Machine Learning
Epistemic status: pretty uncertain. There is a lot of fairly unreliable data in the literature and I make some crude assumptions. Nevertheless, I would be surprised if my conclusions are off by more than 1-2 OOMs.
[Read More]
Understanding Overparametrized Generalization
This is a successor to my previous post, Grokking Grokking. Here we present a heuristic argument for why overparametrized neural networks appear to generalize in practice, and why this requires a substantial amount of overparametrization – i.e. the ability to easily memorize (sometimes called interpolate) the training...
[Read More]
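The core claim in that excerpt – that a model can exactly memorize (interpolate) its training set and still generalize – is easy to see in a toy model. The sketch below is my own minimal illustration, not code from the post: the names and numbers in it are assumptions, and the minimum-norm interpolating solution of a random-features model stands in for the implicit bias the post discusses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression task: y = sin(x) plus a little noise.
n_train, n_feats = 20, 2000            # 100x more parameters than data points
x_train = rng.uniform(-3, 3, size=(n_train, 1))
y_train = np.sin(x_train).ravel() + 0.05 * rng.standard_normal(n_train)

# Fixed random ReLU features: phi(x) = max(x @ W + b, 0).
W = rng.standard_normal((1, n_feats))
b = rng.standard_normal(n_feats)

def features(x):
    return np.maximum(x @ W + b, 0.0)

# Minimum-norm solution that exactly interpolates the training set.
theta = np.linalg.pinv(features(x_train)) @ y_train

# Exact memorization on train; error on held-out points stays modest.
x_test = np.linspace(-3, 3, 200).reshape(-1, 1)
train_mse = np.mean((features(x_train) @ theta - y_train) ** 2)
test_mse = np.mean((features(x_test) @ theta - np.sin(x_test).ravel()) ** 2)
print(f"train MSE: {train_mse:.2e}")   # ~0: the model has memorized
print(f"test  MSE: {test_mse:.2e}")    # typically small despite interpolation
```

Whether gradient descent actually finds such minimum-norm interpolators, and why that bias should help, is the kind of question the post's heuristic argument addresses.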
Grokking 'grokking'
Epistemic Status: This is not my speciality within ML, and I present mostly speculative intuitions rather than experimentally verified facts or mathematically rigorous conjectures. Nevertheless, it captures my current thinking and intuitions about the phenomenon of ‘grokking’ in neural networks, and about generalization in overparametrized networks more generally.
[Read More]