Linear Attention as Iterated Hopfield Networks

In this short note, we present an equivalence and interpretation of the recurrent form of linear attention as implementing a continually updated Hopfield network. Specifically, as the recurrent transformer generates each token, it simply adds a new continuous ‘memory’ via Hebbian plasticity to a classical continuous Hopfield network... [Read More]
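For concreteness, here is a minimal sketch of that correspondence, assuming the standard recurrent form of linear attention (a running sum of key–value outer products plus a normaliser). The feature map `phi`, the dimensions, and the random inputs are illustrative choices of mine, not taken from the post.

```python
import numpy as np

def phi(x):
    # positive feature map; an illustrative choice, the note need not fix one
    return np.maximum(x, 0.0) + 1e-6

d = 8                       # key/query/value dimension (illustrative)
S = np.zeros((d, d))        # associative memory: sum of Hebbian outer products
z = np.zeros(d)             # running normaliser

rng = np.random.default_rng(0)
for t in range(16):
    k, v, q = rng.normal(size=(3, d))   # stand-ins for per-token projections
    # Hebbian write: store the (featurised) key-value pair as an outer product.
    # This is exactly the state update of recurrent linear attention.
    S += np.outer(phi(k), v)
    z += phi(k)
    # Hopfield-style read: query the memory and normalise --
    # the linear-attention output for this token.
    o = S.T @ phi(q) / (z @ phi(q))

print(o.shape)  # (8,)
```

Each token's write is a rank-one Hebbian update to `S`, and each read is a content-addressed retrieval from that memory, which is the sense in which the recurrent transformer maintains a continually updated Hopfield network.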

Intellectual Progress in 2023

2023 has also been an interesting year. The first half of the year was spent at Conjecture, with a brief stint cofounding Apollo, followed by cofounding a soon-to-be-revealed (with any luck) startup, about which I shall have to remain fairly quiet for now. There has been lots of change and personal... [Read More]

Open source AI has been vital for alignment

Epistemic Status: My opinion has been slowly shifting towards this view over the course of the year. My opinion is contingent upon the current situation being approximately maintained – i.e. that open source models trail the capabilities of the leading labs by a significant margin. [Read More]

Addendum to Grokking Grokking

In my original Grokking Grokking post, I argued that Grokking could be caused simply by diffusive dynamics on the optimal manifold. That is, during pretraining to zero loss in an overparametrized network, the weight dynamics minimize the loss until they hit an optimal manifold of solutions... [Read More]
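As a toy illustration of this picture (my own construction, not from the post): in overparametrized linear regression the zero-loss solutions form an affine subspace, and noisy gradient descent started on that subspace diffuses along it while the loss stays pinned near zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 20                               # 5 samples, 20 parameters: overparametrized
X = rng.normal(size=(n, d))
y = rng.normal(size=n)

# Start exactly on the zero-loss manifold (min-norm interpolating solution).
w = np.linalg.lstsq(X, y, rcond=None)[0]

for step in range(2000):
    grad = X.T @ (X @ w - y) / n           # MSE gradient; zero on the manifold
    noise = 0.01 * rng.normal(size=d)      # crude stand-in for SGD noise
    w += -0.1 * grad + noise
    # Noise components in the row space of X are pulled back by the gradient;
    # components in the null space are never corrected, so w random-walks
    # *along* the optimal manifold while the loss stays near zero.

print(f"final loss {np.mean((X @ w - y) ** 2):.1e}, weight norm {np.linalg.norm(w):.2f}")
```

The printed loss remains tiny while the weight norm drifts, which is the diffusive motion on the optimal manifold that the post appeals to.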