Epistemic status: Shower thoughts
One big recent trend in AI is the construction of ‘scaffolded LLM’ systems, which wrap an LLM in an agentic loop and impose various kinds of forced ‘thought scaffolds’ to make the LLM better at agentic tasks. While agent systems have been hyped for over a year now, they still seem relatively far from general production use, bedevilled by reliability issues that were visible from the beginning. Despite this, increasingly complex cognitive architectures have been constructed, utilising external memory, explicit thinking/feedback loops, multimodal perceptual inputs, and specifically trained reasoning and action chains. Yet despite the increasing sophistication of the scaffolding available, driven by its relatively straightforward implementation (it requires only standard software engineering around an LLM core), such agents still appear relatively far from AGI, and they seem bottlenecked primarily by the capabilities of the core LLM in a way that different or better scaffolding systems are largely unable to ameliorate.
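To make concrete what this kind of scaffolding typically looks like, here is a minimal sketch (in Python) of a perceive/think/act loop with a toy external memory. `call_llm`, `execute_in_environment`, and the prompts are hypothetical placeholders for illustration, not any particular framework's API.

```python
# Minimal sketch of a scaffolded LLM agent loop with external memory.
# `call_llm` and `execute_in_environment` are hypothetical stand-ins,
# not any particular framework's API.

def call_llm(prompt: str) -> str:
    """Stand-in for a call to whatever LLM API you are using."""
    raise NotImplementedError

def execute_in_environment(action: str) -> str:
    """Stand-in for running a tool/browser/shell action and returning its result."""
    raise NotImplementedError

class Memory:
    """Toy external memory: append past steps, retrieve the most recent few."""
    def __init__(self) -> None:
        self.entries: list[str] = []

    def add(self, text: str) -> None:
        self.entries.append(text)

    def retrieve(self, k: int = 5) -> str:
        return "\n".join(self.entries[-k:])

def run_agent(task: str, max_steps: int = 10) -> str:
    memory = Memory()
    observation = f"Task: {task}"
    for _ in range(max_steps):
        # THINK: the scaffold forces an explicit reasoning step.
        thought = call_llm(
            f"Memory:\n{memory.retrieve()}\n"
            f"Observation: {observation}\n"
            "Think step by step about what to do next."
        )
        # ACT: ask for a single action, which is executed outside the model.
        action = call_llm(
            f"Thought: {thought}\nReply with a single action, or 'FINISH: <answer>'."
        )
        if action.startswith("FINISH:"):
            return action[len("FINISH:"):].strip()
        # PERCEIVE: the environment's response becomes the next observation.
        observation = execute_in_environment(action)
        memory.add(f"Thought: {thought}\nAction: {action}\nResult: {observation}")
    return "No answer produced within the step budget."
```

The point is that everything here is ordinary software engineering wrapped around the `call_llm` core; the quality of the agent is still dominated by what that core can do.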
This has led me to think about how much scaffolding as such actually helps humans learn to reason 1. Now, the core additional components, such as long-term memory and planning ability, are undoubtedly crucial, since humans have explicit brain regions dedicated to them which LLMs currently lack. Whether their current instantiation in standard scaffolded LLM systems is sufficient is unclear.
However, when designing agent loops, it is unclear that humans actually think via a rigid loop such as PERCEIVE, THINK, ACT, or that the key advantages of human cognition come from better ‘scaffolding’ algorithms rather than from, e.g., better world knowledge or better input data. In fact, I think it is largely the opposite. While culture and education have clearly been extremely important to human development across time, and can certainly force people to learn facts, I also notice that most education has very little impact on people’s personality, their fundamental reasoning aptitude, and the general way they approach problems. Instead, people seem to have a mostly innate method of thinking, deeply tied to their personality or general cognitive style, that rarely changes. They can learn new facts and have new experiences which enhance their capabilities, but the cognitive core rarely seems to change.
There is a long history of trying to teach people how to think in order to improve their cognition, of which the latest incarnation is the rationalist movement. While its intentions, like all the others’, are pure, I think the explicit teaching of rationalism has made relatively little impact beyond selection effects, a point made in more detail here. Specific rationalist ideas are certainly useful, and generally attempting to be more even-keeled and probabilistic in one’s thinking helps (except where it doesn’t), but trying to explicitly perform Bayesian reasoning on pretty much any big problem you face that isn’t trivially easy leads you immediately into intractable uncertainty and modelling questions, and rarely results (at least for me) in any non-obvious insight 2. The same applies to many other fields such as the self-help literature, attempts to inculcate ‘critical thinking’ in schools, and the like. Most of these interventions appear to me to have essentially negligible effects. This would make sense if the core cognitive capabilities underlying our performance derive not from explicit system-2-style reasoning chains but rather from the implicit system-1 heuristics that guide the selection and choice of approach.
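To illustrate why explicit Bayesian reasoning tends to stall on real problems, here is a toy update for a made-up everyday decision. The arithmetic is trivial; the sticking point is that every input number below is an invented guess, which is exactly where the real difficulty lies.

```python
# Toy Bayesian update for an everyday decision.
# The mechanics are one line of arithmetic; the hard part is that the
# prior and likelihoods below are pure guesses with no principled source.

prior_success = 0.3            # guess: how likely is the project to succeed at all?
p_signal_given_success = 0.7   # guess: chance of a positive early review if it will succeed
p_signal_given_failure = 0.4   # guess: chance of the same review even if it will fail

# Posterior after observing one positive early review (Bayes' rule).
numerator = p_signal_given_success * prior_success
evidence = numerator + p_signal_given_failure * (1 - prior_success)
posterior_success = numerator / evidence

print(f"P(success | positive review) = {posterior_success:.2f}")  # ~0.43
```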
While people definitely can and do learn different ways of thinking by interacting with a new domain, it is extremely rarely productive to simply learn to follow a rigid system. Instead, truly expert knowledge and capabilities are highly subconscious and implicit, since success comes from adaptively confronting a changing reality piece by piece rather than blindly following some predetermined system. Unlike previous AI systems, LLMs are clearly capable of this kind of implicit reasoning. However, it is unclear to what extent scaffolding helps them, and I would argue that it does not. Apart from a very general structure, correct action requires being adaptable and flexible depending on the situation instead of following a rigid cognitive loop. In humans, expert performance is characterised by having a number of well-rehearsed ‘tricks’ available which can be deployed flexibly as the situation demands, as well as strong mastery of the fundamentals of the field of expertise 3.
This is where LLMs can have an advantage: they can gather and train upon significantly more data than any human can, across many domains simultaneously. This kind of direct environmental learning, which embeds and crystallises successful behaviours, seems more promising to me than trying to explicitly hash out the optimal cognitive loop architecture. But the obstacle here is the same one that makes human learning in some domains challenging: the lack of strong, reliable, and fast feedback signals to learn from.
In general, though, I am interested in hearing from anybody with contrary experience. If you have found that learning some explicit meta-method of ‘how to think’ has been helpful for you, please reach out and tell me what the method was.
1. On a related note, it is also interesting to ponder how much prompting helps humans. There is definitely some effect. Many people (including myself) find talking to oneself in an inner monologue helpful for formulating thoughts and solving problems. Additionally, there is a long tradition of mantras in both religious and self-help traditions. However, it seems likely to me that the effect, if it exists, is small. You cannot dramatically improve your own performance by telling yourself to pretend to be an amazing expert at whatever it is. ↩
2. The lack of people discussing this problem has always worried me, and it effectively shows how rarely people actually sit down and try to do a Bayesian calculation themselves. If they did, they would realise that outside of textbook situations the key issue is not so much reasoning or number crunching but how to deal with both extreme fundamental uncertainty and confusion about what you ultimately value, i.e. how to distill your desires into a coherent ‘utility function’. ↩
3. One thing that has always confused me is how people have deep intrinsic aptitudes for one field or another, aptitudes which are much more subtle and specialised than anything described by ‘g’. While ‘g’ certainly is a common factor, it is rarely fully general in any individual: it is extremely rare for a single person to be extremely good at everything. Instead, we have a distribution of talents in a way that seems innate. This makes little sense, however, because the talents are often for things that have no deep ancestral analogue, such as mathematicians vs physicists, poets vs memoirists, or painters vs sculptors. LLMs and our other AI models seem much more open to learning any kind of data than we are. They appear to follow a pretty clear g factor based on scale and amount of training data: a bigger, better-trained LLM tends to be better at everything, with perhaps slight strengths or weaknesses derived from the training data mix. Finetunes introduce specialisation again, but in a data-dependent way. This is strongly unlike humans, who appear to have strong inductive biases towards learning specific kinds of skills and information, and it conflicts with the general view of the cortex as a general-purpose unsupervised learner. In general, the translation from low-level neuroscience to high-level phenotypic traits like personality and aptitude is deeply mysterious to me. ↩