Three systems: a mechanism for mental and social narrative

Alex Rosenberg argues here that we are instinctively driven by stories, narrative and theory of mind - a very useful instinct on the small scale, though one that can be misleading on the larger scale of history and politics. His book making this claim is also out, though I haven't read it yet.

It seems uncontroversial that the idea of narrative has a powerful hold on how we think. There are thousands of discussions of storytelling as a way for us to bond with other people, and of the biases that come from our desire to see a natural story behind events. I don't think many would disagree that stories are compelling to most people, and that we naturally like to see the world through narrative.

I've been exploring how the mind might implement this, and what the consequences might be.

Readers may recall the System 3 theory from earlier posts. Here is how I think narratives fit into it, and how the process might work in the mind:


  • People (and other animals) are very good at learning cause-effect relationships. These relationships are binary pairs, A->B, and encode the knowledge that when you see A, B is likely to follow soon after.

    This could be expressed as "A causes B", "A predicts B" or "A implies B" - there are important logical distinctions between the three, but I suspect the intuitive mind isn't very interested in those differences - it just learns the relationship. This way of encoding knowledge about the world underpins psychological basics like behavioural conditioning and associative learning.

    Indeed, this kind of knowledge is necessary for any organism to function in the world. Whether it's a single-celled amoeba floating towards water with more concentrated nutrients, or a flower turning to the sun, there is a simple relationship between stimulus and action. Simpler organisms have these relationships coded into their genes, while more complex ones can learn new relationships from their environment.

    These relationships make up what is typically called System 1: the automatic, instinctive reactions - touch a hot cooker and you jump back; see a car's brake lights and you hit your own brakes. (The first sketch after this list shows how little machinery this kind of pair-learning needs.)
  • The big leap comes when an organism is able to assemble these relationships, these binary pairs, into chains. If A->B and B->C, then A->B->C - and, cutting out the middleman, A->C. If a sparrow in the sky implies that a worm is in danger of being eaten, and daylight leads to sparrows in the sky, the worm might learn that daylight implies getting eaten. The early worm, as they say, gets caught by the bird. This longer chain of implications enables the organism to plan, act earlier, and gain an advantage.

    The implication chains used by humans are usually much longer and more complex, of course. Getting up this morning -> being able to go to work -> not letting down my clients and colleagues -> getting paid -> having enough money to buy food and pay rent -> not starving. So, getting out of bed becomes rewarding despite the discomfort involved.

    Human chains also include branching and uncertainty. I might get paid despite not turning up at work, or I might not. I may have enough money to feed myself next month anyway. There may even be cycles where the chain points back to itself: being fed next month makes me more likely to feel like getting out of bed again. The chains in these cases branch out and become trees, or more complex graphs - the second sketch after this list works through a small example.

    These graphs - I referred to them as causal graphs previously, but I now think implication graph is a better name - are the territory on which the human imagination plays out. When we plan the future, or imagine the outcomes of a decision, we are navigating the implication graph. Our brains are highly tuned for this: they can navigate it quickly and evaluate the outcomes efficiently. If navigating a particular section of the graph feels good, the events it represents would probably feel good (be rewarding) in reality too. This is what I propose to call System 3.
  • Something that, as far as we know, we can do but worms can't is think about the implication graphs of other people. When interacting socially with other humans, it is important to be able to predict what they will do in different situations. This means we have to have an idea of what implication graphs might be inside their heads. It seems natural that the very efficient brain mechanism for evaluating our own implication graph would be reused to evaluate other people's. Why evolve two separate functions when one will do? (There is also good neuroscience evidence that this is exactly what happens.)

    As Rosenberg points out, this capability is incredibly useful on the level of individual short-term interactions, and rarely leads us too far astray. Whether I'm exchanging my fruit for your bread, cooperating with you to build a barn, or taking care of a child together, my basic understanding of your goals, behaviours and incentives does very well at helping me act appropriately. There is plenty of opportunity to get feedback and adjust my assumptions if they're wrong, and I probably know enough about you to have a reasonably accurate copy of your map in the first place.

    Notably, if my brain is missing any pieces of your implication graph, it can fill them in with copies of my own. If you've never explicitly told me that you love your children and worry about their safety, I can probably assume you're similar to me in this respect. You might never have mentioned enjoying cake, but if I do, you probably do too. This 'copying' heuristic is likely to work a lot of the time, although it's easy to imagine cases where it goes wrong - the third sketch after this list shows both the shortcut and its failure mode.

    Rosenberg argues that this tendency to imagine other people's reasons for doing things (their implication graphs) is dangerous and can lead us astray when we apply it to historical or large-scale problems. A fair point. There are indeed other tools we can use for these situations, including statistical analysis, logical reasoning, and setting up empirical tests. These are the domain of System 2 - the reasoning we use when we don't want to rely on what feels good.
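
To make this concrete, here are three small Python sketches of the mechanism described above. None of them is meant as a model of the brain's actual algorithm - the event names, numbers and data structures are all invented for illustration. First, the binary pairs of System 1, learned as simple frequency counts:

```python
from collections import defaultdict

class PairLearner:
    """System 1 as a store of binary pairs: having seen A, expect B."""

    def __init__(self):
        self.followed = defaultdict(lambda: defaultdict(int))  # times B followed A
        self.seen = defaultdict(int)                           # times A occurred

    def observe(self, a, b):
        """Record one occasion on which b followed a."""
        self.followed[a][b] += 1
        self.seen[a] += 1

    def expect(self, a, b):
        """Learned estimate of how likely b is to follow a."""
        return self.followed[a][b] / self.seen[a] if self.seen[a] else 0.0

learner = PairLearner()
for _ in range(20):
    learner.observe("brake lights ahead", "car in front slows")
learner.observe("brake lights ahead", "nothing happens")
print(learner.expect("brake lights ahead", "car in front slows"))  # ~0.95
```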
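
Second, chaining those pairs into an implication graph and navigating it to evaluate outcomes, as System 3 does. The probabilities and rewards are invented, and the depth limit and visited set are just my way of handling the branching and cycles mentioned above:

```python
# Edges: state -> list of (next state, learned probability). Values invented.
EDGES = {
    "get out of bed": [("go to work", 0.95)],
    "stay in bed":    [("go to work", 0.2)],   # branching: I might still make it in
    "go to work":     [("get paid", 0.9)],
    "get paid":       [("buy food and pay rent", 0.99)],
}

# How good or bad each state feels on its own. Again, invented numbers.
REWARD = {
    "get out of bed": -0.2,        # the discomfort involved
    "buy food and pay rent": 1.5,  # not starving
}

def evaluate(state, depth=6, visited=frozenset()):
    """Navigate the graph, summing probability-weighted rewards downstream.
    The visited set and depth limit stop cycles from looping forever."""
    if depth == 0 or state in visited:
        return 0.0
    value = REWARD.get(state, 0.0)
    for nxt, p in EDGES.get(state, []):
        value += p * evaluate(nxt, depth - 1, visited | {state})
    return value

print(evaluate("get out of bed"))  # about 1.07: getting up wins...
print(evaluate("stay in bed"))     # ...over about 0.27 for staying put
```

Getting out of bed scores higher despite its immediate discomfort, which is exactly the point: the reward lives several hops down the chain.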
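
Third, the 'copying' heuristic for theory of mind: build a model of your implication graph by starting from a copy of my own and overriding it with whatever I have actually learned about you. The dict-merge here is an illustration of the heuristic, not a claim about neural wiring:

```python
def model_of_other(my_graph, known_about_them):
    """Model someone else's implication graph: copy my own,
    then override with what I have actually observed about them."""
    model = dict(my_graph)
    model.update(known_about_them)
    return model

my_graph = {
    "own child near road": ["pull child back"],
    "offered cake":        ["eat cake"],
}
known_about_you = {
    "offered cake": ["decline cake"],  # you once mentioned you're dieting
}
your_graph = model_of_other(my_graph, known_about_you)
# I assume you'd protect your child (copied from me) but decline the cake
# (observed) - and the copied parts are exactly where I can be badly wrong.
print(your_graph)
```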


I am a bit more positive than Rosenberg about our reliance on narratives and theory of mind, partly because I don't believe there is any viable alternative. Maybe when analysing major historical or economic events, we can marshal the resources to apply System 2 and measure things statistically. For a limited number of oft-repeated scenarios (e.g. doing a cashflow forecast for a business) we have invested the time to build reusable tools and data sources that give a better answer than the feelings-driven System 3.

But for novel situations where fast decisions are needed, and for the everyday interactions that make up at least 95% of life for most of us, System 3 is faster and more accurate than System 2, requires less data-gathering, and is more adaptable and powerful than System 1. Stories are our best organising metaphor for the world, and if we discard them we won't be left with reliable truth - we'll be left adrift, with nothing to guide us at all.
