### Decision and behavioural research from Peter Wakker

I attended a very interesting seminar this evening about how people evaluate (and reveal their estimates of) probabilities under conditions of ambiguity.

This idea of ambiguity (or Knightian uncertainty) is very present in the economics conversation in recent months - the idea that we can't evaluate the probability of an event if we don't have reliable models, or any data to measure its past frequency. In some cases people thought their models were reliable, but they turned out not to be. In others, the key decisions were made by people without any models, relying on intuitive estimates based on the recent past.

Some of the intriguing results that Peter Wakker presented today show that people respond in somewhat predictable ways to this situation.

First, people tend to act as if the probability of an event were a bit closer to 50% than its real value. So if the real probability is 75% (even if we know that it is 75%), we will behave as if it were, say, 69% or 61%, depending on the model. The amount of bias differs across models of decision-making: expected value (EV) is the theoretical rational benchmark; expected utility (EU) incorporates diminishing marginal utility of money; and non-EU models incorporate further factors such as disappointment aversion.

Under conditions of ambiguity, where we have no real way of evaluating the probability, Wakker uses a quantity called B ("belief") to represent our underlying (perhaps subconscious) assumption about how likely the event is. In a theoretical model he shows that people then make decisions in the same way as if they knew the probability, but with an even stronger bias towards the 50% mark.

We seem to have an 'attractor' at 50% which leads us to overweight the probability of unlikely events and underweight the probability of near-certain events.
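As a rough illustration (this is my own toy sketch, not Wakker's actual model), the 50% attractor can be captured by a simple convex mix of the stated probability and 0.5, with a stronger mixing weight under ambiguity. The bias values below are invented, chosen only to reproduce the 69%/61% figures mentioned above.

```python
def decision_weight(p, bias):
    """Pull a stated probability p toward the 0.5 'attractor'.

    bias = 0 gives the rational weight (w = p); bias = 1 means the
    agent behaves as if every event were a coin flip.
    """
    return (1 - bias) * p + bias * 0.5

# Known probability of 75%, modest bias (illustrative value):
print(decision_weight(0.75, 0.25))  # 0.6875, i.e. behaves like ~69%

# Under ambiguity, the same underlying belief B = 0.75 is pulled
# harder toward 50% (stronger bias, again an illustrative value):
print(decision_weight(0.75, 0.55))  # ~0.6125, i.e. behaves like ~61%
```

Note that the function leaves 50% itself fixed, overweights small probabilities and underweights large ones, which is exactly the attractor behaviour described above.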

Another related result is that people do not evaluate the probabilities of mutually exclusive events additively. Say we judge there to be a 20% chance that a Williams sister will win Wimbledon this year. If we then separately judge the probability of Serena winning and of Venus winning, the two judgements should add up to 20%. In practice they don't - and they are far more likely to add up to more than 20% than to less.
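The same toy 0.5-attractor weighting reproduces this: each sister's small probability gets overweighted separately, so the parts sum to more than the judged whole. The individual probabilities below are invented for illustration.

```python
def judged(p, bias=0.25):
    """Toy judged probability: stated p pulled toward 0.5 (illustrative bias)."""
    return (1 - bias) * p + bias * 0.5

p_serena, p_venus = 0.12, 0.08       # hypothetical true probabilities
p_either = p_serena + p_venus        # 0.20 - mutually exclusive, so additive

print(judged(p_either))                    # ~0.275: the union, judged once
print(judged(p_serena) + judged(p_venus))  # ~0.40: judged separately, sums higher
```

Because each separate judgement gets its own pull toward 50%, splitting an event into sub-events inflates the total - the non-additivity in the tennis example.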

I can see how this tendency could contribute to, say, stockmarket bubbles. We may know that the total profit opportunity of the Internet sector is, say, \$100bn a year. But when we evaluate the probable profits of 10,000 separate firms one by one - especially in isolation from each other - we will tend to overestimate each, and so ascribe a higher-than-rational value to each individual stock. Of course, probabilities aren't the same as profits, but I suspect similar results hold under uncertainty: you can transform one question into the other, since a judgement about a firm's profits can be restated as a judgement about the probability of each profit outcome.
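A back-of-envelope version of that aggregation argument, using the same toy weighting (all numbers invented; firms are treated as identical for simplicity):

```python
n_firms = 10_000
pool = 100e9          # $100bn total annual sector profit

# True chance each firm captures the pool (identical firms, so 1/n):
true_p = 1.0 / n_firms

# Even a tiny pull toward 0.5 matters when probabilities are this small
# (bias value is made up for illustration):
bias = 0.001
judged_p = (1 - bias) * true_p + bias * 0.5

# If each firm is valued in isolation at judged_p * pool, the implied
# sector total far exceeds the actual $100bn opportunity:
implied_total = n_firms * judged_p * pool
print(implied_total / 1e9)   # ~600 ($bn) - roughly 6x the real pool
```

The point is not the particular numbers but the mechanism: overweighting many small probabilities, each judged in isolation, yields an aggregate valuation well above what the sector as a whole can deliver.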

This is interesting work, and it provides a useful contribution towards an underlying model of behaviour that we can use to make tractable economic predictions. Peter is going to send me his presentation, so I may expand on these thoughts in a future posting.