Monday, 28 March 2016

Discussion 2 of 3: No spooky action at a distance - a theory of reward

One of the most powerful ideas in physics is the principle of locality. This principle insists that objects can only be influenced by other objects that touch them. Two items separated by a distance cannot directly exert any force or influence on each other, but must communicate via some medium which physically transmits the force from one to the other.

Albert Einstein described this principle as "no spooky action at a distance" and it applies to his theory of gravity as well as to all the other physical forces (it gets more complicated when we consider quantum mechanics, but that would take a whole other article). The Scottish physicist James Clerk Maxwell also relied on it in developing his theory of electromagnetism.

Think of two magnets that seem to attract or repel each other across empty space. Instead of the magnets directly pushing or pulling each other, each magnet creates an electromagnetic field in the space around it (changes in that field travel outward as electromagnetic waves). When another magnet passes through the field, it is affected, because the field has now reached the same point in space as that magnet. The two objects are not directly attracting each other; the force is mediated by the electromagnetic field.


A more familiar example is a chain of dominoes. The last domino doesn't fall directly as a result of the first one being knocked over. It falls because the second-to-last one topples into it. That in turn happens because of the one before it. The first and last domino can only affect each other because of all the other dominoes in between.

A different way to think about this principle is in time instead of space: the ultimate result of an event can never be a cause of how the event happens. The first domino doesn't fall down because the last one is going to; indeed, if you ever played Domino Rally as a kid, you will know that when the first one falls, it's often entirely unpredictable whether the last one will also go. Each domino only has a local field of influence: just it and the ones that can physically touch it.

What if we apply this principle of locality to economic decision-making?

We are used to talking about decisions based on "future reward" or "future returns". I don't eat the marshmallow now, so I can have two of them later; I save money now because I will receive interest next year; I exercise today so I can enjoy being healthier in later life.

But can a future event actually cause an event in the present? Can my future health cause me to exercise today? Surely not – that would be time travel.

The principle of locality says that a decision I make today can only be based on stimuli and causes right here, right now, in the moment of the decision. Future events cannot influence it. (Past events, for that matter, can only influence it through the traces they have left in my present state; and costs and benefits that take place outside of my direct experience cannot reach it at all.) My brain and the physical atoms that make it up only know about immediate, present influences. Thus, my decision can only be affected by feelings, rewards and costs that I experience at the time of making it.

In practice, however, we do regularly make decisions to defer gratification. We appear to take into account outcomes that happen later, or outside of the decision context (just as the magnets really do attract each other, even though they are not touching each other). How is this paradox resolved?

The key here is to understand how the outcomes are mediated – how those future benefits can indirectly influence the present. The future reward does not directly cause my decision today. My 65-year-old self does not reach into his past and make a pension contribution. Instead, it's my current feelings and beliefs about the future reward that matter. I can only take that future reward into account if I get some kind of immediate payoff for doing so.

That payoff might be the feeling of security that comes from knowing that my retirement is being provided for. It could be the positive feeling of going along with socially acceptable behaviour. Conversely, the guilt associated with eating a doughnut may stop me eating one. All of these are feelings experienced now, by my present self, even though they are based on what might happen in the future. Whatever it is, I need to get something now to make me act now.

Even though those feelings and beliefs are related to the future, they still cannot directly be caused by future benefits. My feeling of security isn't actually a result of my future comfortable retirement. It's a result of me imagining now what my retirement might be like. My brain has to be able to predict the future, and somehow take an action now based on imagining something good in the future.

This leads to an important conclusion. The brain must have a mechanism for forecasting future outcomes. Having made those forecasts, it must be able to produce a present value for each outcome – converting it into some immediate force that can influence current decisions. It makes sense to believe that whichever outcome produces the highest immediate force will be chosen by the decision maker.

So, instead of picking options based on which one brings the highest predicted reward, the brain chooses whichever makes the highest immediate impact at the time the decision is made. The size of this impact is certainly related in some way to anticipated reward, but is not the same thing. It is calculated by some mental mechanism that predicts decision outcomes. An obvious research question which follows from this is: how does this brain function translate one quantity (anticipated reward) into another (immediate influence on decisions)?
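
To make this concrete, here is a purely illustrative sketch in Python. The hyperbolic conversion function and every number below are my own assumptions for the example, not claims about how the brain actually performs the translation:

```python
# Toy sketch (not a claim about the actual neural mechanism): each option's
# anticipated future reward is converted into an immediate "impact" value,
# and the option exerting the greatest immediate impact wins.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    anticipated_reward: float   # forecast of future reward (arbitrary units)
    delay: float                # how far in the future the reward arrives

def immediate_impact(option: Option, k: float = 0.5) -> float:
    """Convert an anticipated, delayed reward into a present 'force'.

    The hyperbolic form is an illustrative assumption; the post claims
    only that *some* conversion of this kind must happen."""
    return option.anticipated_reward / (1.0 + k * option.delay)

def choose(options: list[Option]) -> Option:
    # The decision goes to whichever option exerts the largest immediate impact.
    return max(options, key=immediate_impact)

options = [
    Option("eat the doughnut now", anticipated_reward=10.0, delay=0.0),
    Option("skip it, feel healthier later", anticipated_reward=25.0, delay=30.0),
]
print(choose(options).name)
```

Under these invented numbers the larger, distant reward loses; change the conversion function or its parameters and it wins. How that conversion actually works is exactly the research question.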

My last post suggests a possible mechanism by which this could happen. The mind contains an associative network that makes a model of the world, tests out actions and their consequences, and estimates the amount of reward that is likely to be generated. That's just one hypothesis of how this process could work; whatever the mechanism in reality, there must be some process that can estimate which of two anticipated outcomes is better.

That insight leads to a very important question. We know the mind has the ability to experience pleasure from receiving certain sensations. But does it have two separate mechanisms: one for experiencing actual pleasure, and another for weighing up anticipated pleasure in order to choose between two options? If the pleasure I gain from actually eating a doughnut is measured in (for example) micrograms of dopamine, in what units do we measure the anticipated pleasure when I imagine eating the doughnut?

Neuroscience (e.g. this 2014 paper by Linnet) and Occam's razor both suggest an answer with far-reaching consequences. The simplest explanation, and the one that requires the least neural machinery in the brain, is to assume that there is a single quantity in the decision process that does both duties: evaluating immediate sensations and evaluating anticipated outcomes. In other words, we get exactly the same kind of reward from thinking about future pleasure, as we do from experiencing pleasure right now.

This poses an interesting scenario: the question of trading off current reward against imagined reward in a single decision (one marshmallow now versus two marshmallows in the future). For me to give up the actual pleasure of a marshmallow now for the imagined pleasure of two later, the immediate reward from imagining two marshmallows must be greater than the reward from eating one. In some situations that's the case, but in others it is not.
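
A toy numerical sketch of that tradeoff, with invented numbers (the "vividness" factor is my own assumption about how imagined reward might fall short of experienced reward):

```python
# Toy numbers, invented for illustration only: the same "reward currency"
# covers both the experienced pleasure of eating now and the imagined
# pleasure of waiting for two marshmallows later.

def imagined_reward(future_marshmallows: int, vividness: float) -> float:
    # Assumption: imagining a future treat yields some fraction ("vividness")
    # of the reward the treat itself would deliver.
    return future_marshmallows * 10.0 * vividness

reward_of_eating_now = 10.0          # one marshmallow, experienced directly

for vividness in (0.3, 0.7):
    imagined = imagined_reward(2, vividness)
    decision = "wait" if imagined > reward_of_eating_now else "eat now"
    print(f"vividness={vividness}: imagined={imagined:.1f} -> {decision}")
```

When the imagined future is vivid enough, waiting wins; when it is pale, the marshmallow in front of me does.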

There's a complication to this: if I get so much pleasure from imagining future marshmallows, why wouldn't I eat the marshmallow now and imagine the future ones? I could go around imagining marshmallows all the time and get unlimited pleasure from it. There are reasons, though, why this wouldn't work: to be discussed in a future post.

As a reward for getting to the end - for those who did have Domino Rally as a kid, take a look at all the add-ons we couldn't afford:

Saturday, 23 January 2016

Discussion 1 of 3: Where do goals come from?

Discussion number 1 in a series of 3: on goal-setting

Much of decision-making psychology (and by extension behavioural economics) explores the processes by which people solve a problem or achieve a goal. Usually the papers in this field contrast the rational, expected-utility way to solve these problems with the approaches people actually use in practice.

An important question they rarely address is "Why that goal?" How is it that people choose the particular problem they want to solve, the objective to work towards? In the psychology lab, the answer is easy: the person in a white coat gives it to them. In real life, that doesn't happen.

Answering this question is essential to developing a comprehensive theory to replace or challenge classical economics. Standard microeconomic theory has a clear, simple answer to this: we always have the same goal, maximising utility. Any other objective (finding the best job, working out how much money to save, picking what to eat, choosing a romantic partner, deciding whether to rob a convenience store - to pick at random from the typical subjects of economics papers) is a means to an end. According to the classical theorist, we choose between these different goals based on which we think will bring the highest marginal utility. Independent goals which don't conflict with each other are pursued more-or-less simultaneously: I seek a promotion at work during the day while trying to find the ideal spouse at night, choose the best mutual fund at lunchtime and weigh up the risk and reward of the convenience store holdup before bed.

Psychologists, while rightly challenging the claim that I can simultaneously optimise across all these different life goals, don't propose an alternative way to choose between them. My conscious problem-solving mind can only focus properly on one objective at a time, but which one?

The fast-and-frugal heuristics school gives an argument that simple heuristics are the best way to solve apparently complex problems like catching a baseball, allocating investment money or walking through a crowd, but doesn't tell me why I want to catch the baseball or invest money in the first place. The heuristics and biases approach tells me that I am anchored on a particular rate of return for my investments but not whether I will spend this afternoon trying to beat that rate or watching football.

You could easily ignore this question and assert that sooner or later I'll get round to dealing with most of the important problems in life, and that the real work of psychology should be focused on how I'll tackle them. But there are plenty of counterexamples. Many people never get around to thinking about investments or savings, or not until it's too late to do anything meaningful about them. Our success in achieving health objectives is strongly influenced by what we spend our time thinking about - unconscious eating and conscious exercise are in conflict. Status quo bias in the labour market and in consumption patterns is responsible for lots of apparently suboptimal behaviour and there's a strong argument that the cause is a (possibly rational) lack of attention to the goal.

Here is a candidate theory of how we select our goals.

I draw inspiration from a Glöckner and Betsch paper, Modeling option and strategy choices with connectionist networks. Although this paper is within the narrow paradigm I'm critiquing - how do people solve a problem that is exogenously given to them - it contains a model we can borrow to address the broader problem. They propose that the mind answers questions by collecting data and using it to populate a network of nodes representing a model of the problem it is working on. It tests how self-consistent this model is, and if it is highly consistent it is more likely to consider the problem solved. If it is not consistent (e.g. two different answers to the question can still be true within the mental model) the mind seeks out more information to try to increase consistency. As they say:
"One of the basic ideas of Gestalt psychology (e.g., Köhler, 1947) is that the cognitive system tends automatically to minimize inconsistency between given piece of information in order to make sense of the world and to form consistent mental representations"
The unconscious (automatic) mind determines that there are two or more inconsistent ideas simultaneously held, and prompts the conscious (deliberative) systems to gather more information with which to populate the model, in order to try to resolve the conflict between them. This process continues until the mental model reaches a certain level of consistency - in effect, when it stops changing and reaches a stable representation. This representation is taken as the answer to the question.
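
The outer loop of that process might be sketched like this. The consistency measure and the information-gathering step are placeholders of my own; the real Glöckner and Betsch model is a parallel constraint-satisfaction network, and this loop only captures its outer logic:

```python
# Rough sketch of the consistency-seeking loop described above.
# "consistency" and "gather_more_information" are stand-ins, not the
# actual mechanisms proposed in the paper.

import random

def consistency(beliefs: dict) -> float:
    # Placeholder: treat "consistency" as how far the evidence leans one way.
    return abs(beliefs["answer_A"] - beliefs["answer_B"])

def gather_more_information(beliefs: dict) -> None:
    # Placeholder for deliberate information search: each new piece of
    # evidence nudges one of the competing answers up or down.
    key = random.choice(["answer_A", "answer_B"])
    beliefs[key] += random.uniform(-0.1, 0.3)

def settle(beliefs: dict, threshold: float = 1.0, max_steps: int = 100) -> str:
    steps = 0
    while consistency(beliefs) < threshold and steps < max_steps:
        gather_more_information(beliefs)   # the deliberative system is called in
        steps += 1
    # The representation has stopped changing enough: take it as the answer.
    return max(beliefs, key=beliefs.get)

print(settle({"answer_A": 0.5, "answer_B": 0.5}))
```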

In a very different context, John Yorke writes:
"The facts change to fit the shape, hoping to capture a greater truth than the randomness of reality can provide."
I propose that the mind relies on a similar connectionist, associative network to choose which goals to focus on.

This network represents the actions that the decision maker could take, the consequences of those actions, and the reward that would accompany those outcomes. In any situation a person could take thousands of potential actions with tens of thousands of consequences, and it is unlikely the mind (even the highly parallel automatic system) can simultaneously evaluate all of them. Instead, a small subset of those potential actions and outcomes will be activated by sensory stimuli or familiarity: nodes representing regular, repeated actions are likely to remain active much of the time; nodes representing outcomes such as the satisfaction of hunger may be activated by biological need, and other less frequent actions or consequences may be activated by seeing or hearing messages which remind us of them.

Activation automatically spreads from node to node in this network. The network's connections link actions to their consequences - the action of eating food links to satisfaction of hunger; saving money is linked to a higher bank balance, which in turn links to an emotional payoff from feeling secure; smoking a cigarette links to the quieting of cravings and a feeling of relief. Thus, when an action is activated a consequence will become active; and vice versa, when an outcome is activated the actions which could lead to it will also become active. If only one action-node is activated, the decision maker will take that action. If only one outcome-node is active, the decision maker will choose to pursue that outcome. At this point a goal has been set, and the well-studied processes of decision making will take over.

If nodes representing more than one potential action or outcome have been activated, the automatic system needs to keep working until it can resolve which one to pursue. This work includes the further spreading of activation to other nodes in the network (the food-eating node could activate nodes that represent spending money and gaining weight, the hunger-satisfaction node could activate alternative actions that lead to the same outcome) which in turn may connect back to some of the same nodes, increasing their activation further. The network might "test out" particular outcomes by activating nodes that represent their second-order consequences. By this stage the activation has been diluted: the earlier nodes were strongly activated, while these later ones are weaker.

At this point a similar kind of consistency-testing to that proposed by Glöckner and Betsch comes into play. The activated, imagined actions and outcomes are tested against sensory input and knowledge about the outside world. Are these outcomes plausible? Can I actually take these actions? Are they consistent with what I believe about how the world works? If so, the activation, and the relationship between actions and outcomes, is reinforced. If not, the activation is reduced and the network keeps looking for a consistent, stable, combination of action and outcome.
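
Here is a rough sketch of the kind of process described in the last few paragraphs. The network, its link weights, the decay factor and the number of rounds are all invented for illustration; the claim is only that some process of this general shape exists:

```python
# Illustrative spreading-activation sketch. Every node, weight and parameter
# below is made up; a full model would also damp nodes that fail the
# plausibility (consistency) check before any node is allowed to dominate.

# Links between action/outcome nodes (weight = strength of association).
links = {
    "see doughnut":        [("eat doughnut", 0.9)],
    "eat doughnut":        [("hunger satisfied", 0.8), ("gain weight", 0.4)],
    "hunger satisfied":    [("eat doughnut", 0.8), ("cook dinner", 0.6)],
    "save money":          [("higher bank balance", 0.9)],
    "higher bank balance": [("feel secure", 0.7)],
}

def spread(activation: dict, decay: float = 0.5, rounds: int = 3) -> dict:
    """Spread activation along the links; each hop is diluted by `decay`."""
    for _ in range(rounds):
        new = dict(activation)
        for node, level in activation.items():
            for neighbour, weight in links.get(node, []):
                new[neighbour] = new.get(neighbour, 0.0) + level * weight * decay
        activation = new
    return activation

# Stimuli activate a couple of nodes; activation then spreads and accumulates.
state = spread({"see doughnut": 1.0, "save money": 0.3})

# The most strongly active node (here, an action paired with its outcome)
# becomes the candidate the conscious mind takes up as a goal.
print(sorted(state.items(), key=lambda kv: -kv[1]))
print("emergent focus:", max(state, key=state.get))
```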

Eventually, that stable set of active nodes will emerge; or perhaps two or three combinations will continue to compete for attention and plausibility. If so, again following the template of Glöckner and Betsch, the deliberative system comes into play and selects between them. The deliberative mind applies symbolic, logical or linguistic forms of reasoning and decision making instead of the connectionist, activation-driven process of the automatic mind. These symbolic processes (and the biases that can affect them) are the stuff of most decision theory, and I defer to the accumulated body of science to tell us how they work. My claim here is only about the automatic process that selects the options between which we deliberate.

The outcome emerging from this process becomes the goal we consciously seek. It will be paired with an initial action, though that action alone may not be enough to achieve the goal, in which case a planning process of some kind has to take place - again, thoroughly explored by existing decision theory.

This model tells us something about why certain goals or actions might be preferred to others. If an action is particularly salient or easy to imagine, we are more likely to focus on the outcomes that follow naturally from it. If an outcome is particularly consistent with our mental model of the world, we are more likely to take the actions that will cause that outcome. The availability heuristic, effects related to salience and attention, and confirmation bias are all natural outcomes of this emergent-goal process. For now it is a theoretical model, but it is not too hard to imagine empirical tests for it.

Of course, the outcome you choose should not only be consistent with your view of the world, but also be a rewarding one. I agree with both the classical economist and the behaviourist that reward drives us to choose outcomes, and therefore the actions that lead to them. But we clearly do not apply probabilistic, utilitarian calculations to estimate and respond to that anticipated reward, and simple behavioural conditioning is not enough to explain the rich, complex actions and plans we make. In the next post in this series I will suggest a more plausible way to think about reward, how it motivates us to act, and what this means for how we experience life.

Tuesday, 23 June 2015

My writing elsewhere

I haven't been very active here recently, but here are some links to my writing on other sites:


  1. An article for RW Connect about the UK election polls and how behavioural methods could make polling more accurate.
  2. A journal article in the International Journal of Market Research (subscribers only, sorry) about behavioural conjoint analysis methods.
  3. An article in the proceedings of the DCAI conference, "When can cognitive agents be modeled analytically versus computationally?"
If you don't have access to either of the latter articles, drop me an email and I can send you the proof versions.

Sunday, 23 March 2014

On the identity and methods of behavioural economics

The FT has a very good article from Tim Harford today, surveying behavioural economics and asking some important questions about it. People within a field can be so immersed in their unconscious assumptions and practices that it takes an outsider to point out some of the questions they are not asking.

Tim says:
The past decade has been a triumph for behavioural economics...[which] is one of the hottest ideas in public policy....Yet, as with any success story, the backlash has begun. Critics argue that the field is overhyped, trivial, unreliable, a smokescreen for bad policy, an intellectual dead-end – or possibly all of the above. Is behavioural economics doomed to reflect the limitations of its intellectual parents, psychology and economics? Or can it build on their strengths and offer a powerful set of tools for policy makers and academics alike?

Quite. That, of course, is a journalistic question - not one intended to be answered within the article, but designed to set up the prospect of a good ding-dong. But the substantive points come soon. Note that Tim, writing for a generalist FT-reading audience, chooses to address his article to public policy so it doesn't look like an abstruse argument between academics. But actually it's about the effectiveness of BE, and economics in general, as a tool at all. Public policy, private decisions, how businesses operate - all can be informed by whatever economic theory we believe in.

...there is something unnerving about a discipline in which our discoveries about the past do not easily generalise to the future...This patchwork of sometimes-fragile psychological results hardly invalidates the whole field but complicates the business of making practical policy.

Indeed - and it divides the field, into those who believe a (more) unified theory is available, and those who believe rational choice is still the main theory available and that behavioural results are only meaningful in relation to that.

The line between behavioural economics and psychology can get a little blurred. Behavioural economics is based on the traditional “neoclassical” model of human behaviour used by economists. This essentially mathematical model says human decisions can usefully be modelled as though our choices were the outcome of solving differential equations. Add psychology into the mix – for example, Kahneman’s insight (with the late Amos Tversky) that we treat the possibility of a loss differently from the way we treat the possibility of a gain – and the task of the behavioural economist is to incorporate such ideas without losing the mathematically-solvable nature of the model.
Consider the example of, say, improving energy efficiency. A psychologist might point out that consumers are impatient, poorly-informed and easily swayed by what their neighbours are doing. It’s the job of the behavioural economist to work out how energy markets might work under such conditions, and what effects we might expect if we introduced policies such as a tax on domestic heating or a subsidy for insulation.

And the problem today is that, without a clear theory, behavioural economists can't work that out. All they can do is suggest various effects that might happen, and design an experiment to test them. Nothing wrong with that, but it's a bit ad hoc.

The most well-known critique of behavioural economics comes from a psychologist, Gerd Gigerenzer of the Max Planck Institute for Human Development. Gigerenzer argues that it is pointless to keep adding frills to a mathematical account of human behaviour that, in the end, has nothing to do with real cognitive processes.
David Laibson, a behavioural economist at Harvard...concedes that Gigerenzer has a point but adds: “Gerd’s models of heuristic decision-making are great in the specific domains for which they are designed but they are not general models of behaviour.” In other words, you’re not going to be able to use them to figure out how people should, or do, budget for Christmas or nurse their credit card limit through a spell of joblessness.

We come back again to the need for a general theory, and one of behavioural economics' regular combatants agrees:

For some economists, though, behavioural economics has already conceded too much to the patchwork of psychology. David K Levine, an economist at Washington University in St Louis, and author of Is Behavioral Economics Doomed? (2012), says: “There is a tendency to propose some new theory to explain each new fact. The world doesn’t need a thousand different theories to explain a thousand different facts. At some point there needs to be a discipline of trying to explain many facts with one theory.”
The challenge for behavioural economics is to elaborate on the neoclassical model to deliver psychological realism without collapsing into a mess of special cases...The question is, how many special cases can behavioural economics sustain before it becomes arbitrary and unwieldy? Not more than one or two at a time, says Kahneman. 
Thaler says: "...if you want one unifying theory of economic behaviour, you won’t do better than the neoclassical model, which is not particularly good"

It seems that Kahneman and Thaler actually agree with Levine in a way; all three doubt that behavioural economics can crystallise into a single theory, though only Levine thinks this is a serious problem.

George Loewenstein and Peter Ubel wrote in The New York Times that “behavioural economics is being used as a political expedient, allowing policy makers to avoid painful but more effective solutions rooted in traditional economics.”

This point is different but important: if policymakers expect behavioural economics to be a substitute for regular economics they'll be disappointed. The two are complementary, and the most important policy contribution of BE may be to tell us which economic incentives will have the biggest impact, and which will have unwanted side-effects, rather than to obviate the need for traditional incentives altogether.

Should we be trying for something more ambitious than behavioural economics? “I don’t know if we know enough yet to be more ambitious,” says Kahneman.

That's a provocative point. Yet it acknowledges that whatever field eventually manages to incorporate both traditional and behavioural economics may have to be called something different.

Laibson says behavioural economics has only just begun to extend its influence over public policy. "The glass is only five per cent full but there's no reason to believe the glass isn't going to completely fill up."

I and many readers of this blog will probably be with Laibson on this point. But perhaps without a new approach, behavioural policy is going to run more and more often into the wall of adhockery - the lack of general theories making us redo things from the ground up in each new situation.

Tim isn't the only person to write about this recently. For a contrary word, try Chris Dillow's comment, which makes some good challenges from his usual half-libertarian, half-Marxist point of view.

Then, here are some links and thoughts from Diane Coyle, including "Is behavioural economics the past or the future" by Chris House. Diane narrows one of Tim's questions down to Kao and Velupillai's distinction between classical and modern behavioural economics: modern assumes people are (biased) optimisers, while classical assumes they are satisficers. This is the same distinction drawn by Gerd Gigerenzer, though his research looks at a broader range of decision-making heuristics, of which satisficing is just one. Diane asks, in effect: is the best mathematical approach to tweak the models of maximisation, or to try to build a new behavioural economics based on heuristics?

Chris House's post says:
...in 2007-2008 we were again told that behavioral economics would finally come into full bloom. It didn’t happen though. The wave of behavioralists never came.

While this isn't true in psychology or behavioural policy and marketing - all thriving and fast-growing fields - it is true of economics. My experience is that while many new economics undergraduates or entrants to economics PhD programs are intrigued by behavioural ideas, they are often guided by supervisors into more traditional areas where it is easier to define a research question that will produce safe, publishable papers. Barkley Rosser, commenting on House's post, mentions the new journal Review Of Behavioural Economics, which along with other emerging initiatives may help to change this.

Otherwise, Chris raises that same point:
Behavioral economics won’t get very far if it ends up being just a pile of “quirks.” Are these anomalies merely imperfections in a system which is largely characterized by rational self-interest or is there something deeper at play? ...if behavioral is to somehow fulfill its earlier promise then there has to be some transcendent principle or insight which comes from behavioral economics that we can use to understand the world.

Then there is the David Levine paper that Tim mentions, "Is Behavioural Economics Doomed?". In this, Levine says (among many other interesting things!):
For most decisions of interest to economists these external helpers [computers, paper and pencil etc] play a critical role – and no doubt lead to a higher level of rationality in decision making than if we had to make all decisions on the fly in our heads.

What a brave claim! Do we really rule out from the realm of economically interesting decisions all consumer purchases, the consumer's intuitive feelings about how safe they feel with a certain amount of savings in the bank, and all the decisions about cars, houses and jobs that - although someone might sit and think about them for a while - still involve a big chunk of emotion?

Actually, there is no need to throw out these kinds of decisions in order to meet Levine's key challenge of "trying to explain many facts with one theory." He asserts that mainstream economics is already successful at explaining many facts. But perhaps, when he discards all those "uninteresting" decisions it isn't so hard to explain what's left. Indeed, it's those "uninteresting" decisions which classical economics does struggle with, and only behavioural economics can illuminate. Contrary to Levine, I am convinced that these decisions actually make up the majority of important economic events. But I do recognise his critique - echoed by Tim and implicitly by Velupillai and Gigerenzer: that behavioural economics does not offer a full theory to replace that of mainstream economics. However, it has given us good empirical evidence which we could build a theory on.

As well as defining away a large portion of the economy as "not interesting", Levine also co-opts some of the parts that he does consider interesting, saying they are already handled by mainstream economics: notably the subject of learning. Non-behavioural economists have considered consumers' imperfect ability to learn the preferences of other consumers, or the rules of the "game" they are playing, as a factor in non-optimal decisions. But psychologists know much more about exactly how people learn than economists do - so a successful model of learning as part of economics can only be built with an openness to psychological research. Where Levine may be right is that behavioural economics will not replace mainstream economics, but instead the two fields will merge - with the behaviour of consumers predicted by a combination of objective economic, and subjective psychological, factors.

Anyway, arguments over the boundaries of disciplines are rarely productive: I don't really mind if Levine considers a model to be behavioural or not, as long as the model advances the cause of making successful predictions.

The real questions are: does standard economics fail to address some important problems? How good is behavioural economics at addressing them instead? And does behavioural economics need a unified approach in order to address them?

Most of the people mentioned above have different answers to those questions:

  • Levine wants a unified theory - but thinks we have to exclude many types of "uninteresting" decision in order to get one.
  • Kahneman and Thaler want different theories for several different areas - but those incompatible theories will not be able to deal with the many boundaries where different aspects of economics interact with each other.
  • The classical economists already have a unified theory - but there are many things it can't explain.
  • Gigerenzer has a philosophy - but no overall theory. And I'm not sure if he expects or really wants a unifying theory any more than Kahneman does (this may be one of the few things they agree on).
[Update: much of this debate was anticipated in this Werner Guth paper of 2007]

My view, which I think concurs with Laibson's: a single broader theory is possible. I think we've hit a theoretical dead end with the traditional maximising agent, so it will have to be built on more psychologically realistic foundations, such as those of Velupillai, Gigerenzer or Bettman, Payne & Johnson. To achieve this, we need to choose carefully which elements to build into our model of decision-making, so that it can make useful predictions of how those elements might operate. I have a paper coming out later this year which suggests one direction towards this.

Tuesday, 31 December 2013

Catching up on 2013

I didn't intend to stop posting on here when I started my tour. But things overtook me. Here's a summary of what some of them were:

  • My book, The Psychology of Price, came out. You should buy it!
  • I started a new business, The Irrational Agency, with a business partner. We've taken the ideas of behavioural economics into the market research and marketing worlds, and tried to go a bit deeper than some of the agencies who appear to have based their behavioural services on reading the first half of Predictably Irrational. We've developed a decision process model (based on some ideas regular readers might have seen on here last year) and been lucky enough to work with some quite cool clients to apply it.
  • I developed my theory of cognitive microfoundations a bit further. It's now primarily based on information processing and attention, informed by a range of empirical decision-making work and on some theoretical work from the likes of Payne, Bettman and Johnson and the adaptive toolbox of Gigerenzer and Todd. I've taken the ideas forward at two workshops – the Summer Institute on Decision Making at the Max Planck Institute, and the EADM JDM Young Researchers' Workshop.
  • There has been some development of similar ideas by other economists too: Xavier Gabaix and Michael Woodford for example (more on their work in a post from the AEA conference soon).
  • I've presented at a few academic conferences - ICP, ICT, SJDM, SPUDM...and some other places like the Professional Pricing Society and conversionsummit
  • A few ideas on intangible products have started to emerge - first into a pricing workshop and maybe into a new book next year.
  • I've visited India, Cuba, South Africa, Canada, the US, Spain, Germany, Switzerland, Estonia, Finland and Denmark to follow all these ideas through and meet a bunch of pretty exciting people.
  • I've been doing some writing for other places: as economics editor for the InDecision blog, behavioural blogger for RWConnect, and a contributor to Research Live.

But enough about me. I'm not sure if this break from the blog counts as rational inattention, but I'll get back into the habit of posting regularly in the new year.

Thursday, 7 June 2012

The Cognitive Microfoundations Project: a behavioural economics world tour

There has been much talk about microfoundations on the economics blogs in the last few months [Noahpinion, Mark Thoma, Simon Wren-Lewis twice, Andrew Gelman twice, Karl Smith, Paul Krugman twice, Robert Waldmann, Rajiv Sethi from 2009]. The idea of microfoundations is that a model of the overall economy should be consistent with how individual people act. The aggregate behaviour of variables like GDP, government deficits and unemployment should be derived by adding up the choices of individuals, not by treating the whole population as if it were a single entity.

(A microfounded model might start off like this: "Imagine N agents, each of which has income y_n, consumes c_n and saves s_n. Then y_n = c_n + s_n. For each agent, s_n varies with the interest rate r according to the following relation..." while a non-microfounded model is more likely to start: "Total spending in the economy is C and saving is S. C + S must sum to Y, total income. S varies with the interest rate r...")
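
To make the structural contrast concrete, here is a toy sketch; the linear saving rule and all the numbers are invented purely for illustration:

```python
# Toy contrast between a microfounded and an aggregate model of saving.
# The saving rule and parameters are made up for the example.

import random

N = 1000
r = 0.03                               # interest rate

# Microfounded: each agent n has income y_n and a saving rule s_n = f(y_n, r).
incomes = [random.uniform(20_000, 60_000) for _ in range(N)]

def saving(y: float, r: float) -> float:
    return y * (0.05 + 2.0 * r)        # assumed individual saving rule

S_micro = sum(saving(y, r) for y in incomes)
Y = sum(incomes)

# Aggregate: a single relation between total income Y and total saving S.
S_aggregate = Y * (0.05 + 2.0 * r)

print(round(S_micro), round(S_aggregate))
```

With a linear rule the two versions coincide exactly; the microfoundations debate is about what happens when individual behaviour is heterogeneous or non-linear and no longer adds up so neatly.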

But does the microfoundations approach really work? It seems a good idea in principle. It works well in some other fields like physics and chemistry (though less so in biology). Building things from the ground up protects us against falling into certain mathematical traps. Some concepts (like the idea of people trading different goods with each other) don't really even exist at the aggregate level, so are hard to talk about without microfoundations. The idea that we can understand things in this level of detail is an appealing one.

Unfortunately, the idea of microfoundations has come to be closely associated with rational agent theory. Most microfounded economic models are implementations of DSGE (dynamic stochastic general equilibrium), which assume a population of rational utility-maximising agents who are given certain preferences and resources and respond logically to those. Readers of this blog, or of any behavioural economics book, will be unsurprised to hear that real people do not maximise utility in the way DSGE models insist - as demonstrated in numerous psychology experiments. Economists usually respond to this objection in one of two ways, neither of them quite satisfactory.

Response one: to claim that rational utility maximisation is close enough to the truth to describe the economy reasonably well. Sure, there are exceptions: people might not always discount future earnings in a consistent way, and sometimes we buy things because they’re on sale and not because our utility from the product exceeds the price paid - but those are minor errors, they mostly cancel each other out, we learn to be more rational over time, and the limits imposed by our income force us to act fairly rationally. So, DSGE models, maybe with a couple of small tweaks, are still the best way to describe the economy and work out how to manage it. We can still make inferences about how tax rates will change the choices of individual workers, or how interest rates will affect investment and savings decisions, and draw conclusions from that about how the whole economy will evolve.

Response two: to agree that individual rational agent models are too far from the truth to be useful but then to give up. For many, the failures of economic forecasting in the leadup to the 2008 crisis prove this. There are better ways to describe individual decisions - behavioural economics gives us some hints - but these are mathematically too hard to build models with. Therefore we shouldn’t bother with microfoundations - instead, we should reason from aggregates, such as the total amount of money, production, employment and debt in the economy. It is possible to work out, for example, that if companies try to save more money (as we can see they currently are), individuals try to pay off their debts (as they are), and governments try to cut their deficits (as they say they are) something must give. The model may not tell you which one will fail, but it can tell you that something must. These models can’t describe all economic phenomena because the aggregates don’t always tell you enough, but maybe they are all we have.
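
The "something must give" step rests on an accounting identity: across the private, government and foreign sectors, net financial balances must sum to zero. A toy illustration, with invented numbers:

```python
# Sectoral balances identity: private + government + foreign net saving = 0.
# The figures below are made up purely to show the arithmetic.

def foreign_sector_balance(private_balance: float, government_balance: float) -> float:
    # Whatever the domestic sectors jointly save, the foreign sector must dis-save.
    return -(private_balance + government_balance)

# Firms and households together plan to net-save 150; the government plans to
# shrink its deficit to 50 (a balance of -50). The foreign sector must then run
# a balance of -100 against us; if it won't, incomes fall until somebody's plan
# is frustrated.
print(foreign_sector_balance(150, -50))   # -100
```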

The first response is wishful thinking. The second is fatalism.

What if there is another way? Maybe, by choosing the right models from cognitive psychology and behavioural economics, and aggregating them in the right way, we can develop an accurate representation of large-scale systems after all. Then perhaps we can get the benefits of a microfounded model - which lets us understand many different economic phenomena, and gives us confidence via experiments that its conclusions are sound - but with greater accuracy, predictive power and robustness than today’s DSGE models.

Such models, microfounded not on rational utility theory but on real cognitive processes, might focus on specific domains such as consumer product markets or labour markets. They might let us explore the effects of specific economic policies such as tax or interest rate decisions. Eventually, they might develop into a unified theory that can be used to investigate any aspect of the economy - the cognitively sound equivalent of Arrow-Debreu general equilibrium theory.

Can this be done? It’s too early to say for sure, but it’s one of the most important questions for the economics discipline to ask itself.

So this year I’m going on tour. I will travel to wherever I can meet researchers in different economic domains and work out with them how psychology can be incorporated into their models. Although it might be possible to work out cognitive microfoundations from first principles, I suspect it will be more practical to start asking what kind of foundations will illuminate each different economic domain.

My initial objective is to work with people in each of the following disciplines:
  • Consumer behaviour
  • Competition and market organisation
  • Labour economics
  • Trade and international economics
  • Fiscal policy
  • Development economics
  • Monetary theory
  • Industrial organisation
  • Personal finance
  • Financial markets and asset pricing
  • Environmental economics
  • Health economics

I have a few collaborations lined up already, but there’s no restriction to just one in each field. So if you work in one of those areas - or would like to propose another - get in touch and I can add your location to my itinerary.

So far I’ve been to Madrid, Barcelona, Marseille, Paris and Honolulu. From today, my immediate plans are:
  • Until 13th June: San Francisco and Berkeley.
  • 13th-19th June: Atlanta.
  • 19th-30th June: the northeastern US - DC, NYC and all points between.
  • July: the UK and South Africa.
If you’re near any of those locations why don’t we meet up? If we discover anything useful there’s a co-author credit in it for you.

Monday, 30 April 2012

Did he jump or was he pushed; is there a difference?


This New Yorker article about why so many Americans are single reminded me of the debate about unemployment prompted by Casey Mulligan. Here’s why:

From the New Yorker: "...do people live alone because they want to or because they have to?"

Paraphrasing Mulligan and his critics: “Are workers choosing to be unemployed or are they forced to be?”

[actual quotes from Mulligan: "there are sensible people...who will recognize that 2009 is not the time for them to...commute a long distance to work...[unemployment insurance has] dramatically reduced the costs to them of making this the year they coach junior's baseball team, or do some work on their house, or spend time with an ailing parent" "the market tends to create and allocate jobs for those people who are most interested in working" and "my research has been to examine...changes in the willingness and availability of people to work" versus Dean Baker's "this does not mean that less-educated workers could find jobs if they really want them"]

Both quotes reveal a simplistic view of the nature of choice. It’s as if our choices are fixed – and we will always make the same choice unless there is some barrier in the way. The New Yorker assumes that each of us either definitively wants to be single or wants to be married, and that we’ll get our way unless something thwarts us. The debate over Mulligan's claims, on the other hand, takes literally the fact that we have free will – so if someone laid off from a Detroit factory or a Texas high school has chosen not to take the minimum wage job at Walmart, their unemployment is voluntary.

Mulligan’s view is often mocked – Ryan Avent calls it “The Great Vacation” (did he coin the phrase?) – but it does at least have some internal consistency. People intuitively object to this story because it seems to imply people’s preferences have changed, and they have just decided they now want more leisure. But in fact this model assumes that preferences are exactly the same, and it’s the available options that are different. Simon Wren-Lewis writes here:
"In RBC models, all changes in unemployment are voluntary. If unemployment is rising, it is because more workers are choosing leisure rather than work. As a result, high unemployment in a recession is not a problem at all. It just so happens that (because of a temporary absence of new discoveries) real wages are relatively low, so workers choose to work less and enjoy more free time"
One defence of Mulligan is to read his claim more narrowly: that unemployment benefit reduces people's desire to work by a bit, increasing unemployment by an unknown amount - which seems plausible - and not that the whole recession arises from this cause.

Regardless, the right approach to both claims - that unemployment is voluntary or that Americans are forced to be single - is to recognise that people's actions are a result of both their individual wants and the environment they find themselves in. Our wants might indeed change over time – though this tends to be a slow process. Our choices change much more quickly, because the same person in a different environment will make a different choice from the same options. A man with £10 in McDonald’s may choose to eat, while a man with £10 in Gordon Ramsay’s may choose not to eat (even though technically he could buy something from the lunch menu). The man is the same, and his budget is the same, but his actions are different.

Even the idea of “the same options” is dubious. Has the man in Gordon Ramsay’s really been offered “the same options” as the man in McDonald’s? Is a worker turning down a cashier job at Walmart choosing from “the same options” as a worker taking a project management job at Boeing? Economics is partly about abstracting away the differences between different situations, but we must recognise when we’re abstracting too much.

Standard economics takes us up to about this point, and Noah Smith says as much in this post. Each person has preferences which determine the relative exchanges they’re willing to make. These preferences define a particular value for my time – £20/hour – so that if the wage offered (adjusted a little to take account of employment terms, location etc) is greater than £20/hour, I’ll take the job; otherwise I’ll stay at home. Similarly, my preferences may determine that the effort and sacrifice of being in a couple has a certain cost to me, and only if the benefits outweigh that cost will I enter a relationship. Thus, I may be more willing to go into a relationship with a person who I find more attractive (thus increasing the benefit of the relationship) or if housing prices rise (increasing the cost of staying single).
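
In code, the standard story is a one-line rule; the £20 threshold comes from the paragraph above, and the rest is an illustrative sketch:

```python
# The standard story: a fixed reservation wage set by my preferences.
RESERVATION_WAGE = 20.0   # £/hour

def take_job(offered_wage: float) -> bool:
    # Adjustments for employment terms, location etc. are ignored in this sketch.
    return offered_wage > RESERVATION_WAGE

print(take_job(18.0), take_job(21.0))   # False True
```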

The psychology of decision-making says things aren’t this simple. The factors that determine the cost and benefit of each option are not stable. My preferences fluctuate according to how I feel, and my perception of the options I’m choosing between will change according to what I’m thinking of, what I’ve been reminded of, and what I’m looking at. Some factors become more important because they are more salient, and others may be ignored altogether.

Some particular factors become important to me which, according to a rational utility model of choice, should not matter at all. For instance, the wage I was paid last month should not be relevant to whether I accept a job at Walmart this month. But you can be sure it will be. There are a whole range of possible reasons for this: I may have mortgages and bills to pay that require me to earn over a certain level; I may treat my last wage as a signal of what I’m likely to be able to earn if I hold out for a better job offer; or I might simply feel ashamed to accept a 50% cut in pay. Whatever the reason, either my preferences, or my beliefs about the context I’m in, or both, are now seen to be dynamic and not static.
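
By contrast, a sketch of the psychologically richer story might let last month's wage, and how long I have been out of work, construct the threshold on the spot. The 0.8 anchoring weight and the monthly erosion factor are invented for illustration:

```python
# A sketch of a context-dependent decision rule, replacing the fixed
# reservation wage above. The weights are made up for the example.

def take_job(offered_wage: float, last_wage: float, months_unemployed: int) -> bool:
    # Last month's pay anchors what feels acceptable...
    anchor = 0.8 * last_wage
    # ...but the anchor erodes the longer I have been out of work.
    anchor *= 0.95 ** months_unemployed
    return offered_wage > anchor

print(take_job(12.0, last_wage=20.0, months_unemployed=0))   # False: feels like an unacceptable cut
print(take_job(12.0, last_wage=20.0, months_unemployed=6))   # True: the same offer now clears the eroded anchor
```

The point is not this particular formula but that the "preference" doing the work is itself assembled from the context at the moment of choice.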

This means it is too simple to say “my preferences have changed” or even “the environment has changed”. Both are always changing. My choices are constructed in each moment out of the information available to me from inside and outside my mind.

There is not even a clear distinction between preferences and context. My preference to work at £21/hour instead of staying at home is in turn influenced by the context I live in, in particular the level of my mortgage payments or whether I think the economy is getting better. So the choice is in fact a tradeoff between one external factor (the job offer) and a series of others (mortgage, economy) with my mind as the calculating device that sits in between, weighing up the factors.

My mind of course is not perfect, and it can only roughly estimate the strength and future path of each factor. So it relies (I rely) on rules of thumb, heuristics, to save time and make it possible in practice to actually make any decisions at all. Those heuristics themselves can change over time, as I have new experiences which I learn from – and which may invalidate old heuristics or lead to new ones. Maybe the last time I took a low-paying job, in high school, my brother got a better one the following week, and laughed at me. The heuristic that I might learn from that is fairly clear, even if I’m not conscious of it when I make my decision now. If I turn down this job and don’t get another offer for three months, perhaps my heuristic will change.

Can we even distinguish between heuristics, preferences and environment? Not clearly. From the outside we cannot tell whether the man turning down the Walmart job is doing so because he has a clear, conscious preference for a £20/hour job, or whether he’s subconsciously applying a rule of thumb his brother taught him by teasing 20 years ago, or whether he simply cannot afford to work for less because it won’t pay the mortgage and he has to hope for a better offer next week. Even internally, the man himself probably does not clearly know the difference between these three causes. So is it meaningful to say that they are three distinct phenomena?

We haven’t even discussed the signalling and cultural implications of taking a job at Walmart, or the influence of the way in which the offer is communicated (“We’d really appreciate if you’d take this job, to help us to serve your community better” or “Head Office has approved your application for employment, and subject to security and identity checks you may arrive on Monday at 8am sharp.”). Language and culture too shape our interpretation of the choices we are offered and the factors that we take into account; this can be seen as part of the cognitive process or as part of the environment in which we choose.

We are left with two ways to think about choice. The first option is to declare the process of choosing too complex for simple interpretations like “unemployment is voluntary” or “Americans are single because they have no choice” to be entirely true (or entirely false). The second option is to find a new and more accurate abstraction to describe choice – I like to think of it as “a cognitive algorithm which translates external and internal signals into actions”. In this view, the ideas of voluntary unemployment or involuntary singledom simply lose their meaning. The very term “voluntary” becomes moot.

Then, did the unemployed man jump or was he pushed? All we can say is that a confluence of factors - physical and emotional - and his response to them caused his fall.

Where, then, has free will gone? Was it ever there in the first place? That must wait for another post.