Today's highlights for me were:
- Martin Hilbert of USC has built an information-processing model which can potentially explain seven different cognitive biases. In the model, external signals are stored in memory and later retrieved when needed to make a decision. Assume that the channels into and out of memory are subject to random noise, and place two constraints on that noise: first, there is less noise than signal (i.e. our beliefs are more likely to be close to reality than not); second, the noise is symmetrically distributed (unbiased). From these assumptions we can derive the Bayesian likelihood, placement, subadditivity, hard-easy, overconfidence and conservatism effects. Martin also predicts a seventh bias which has not yet been observed, which he calls the exaggerated-expectation bias; we look forward to seeing it experimentally tested.
My view: it's important to find underlying mechanisms which explain multiple biases. The noisy-channel model is interesting and plausible, but I am not convinced that this specific model is applicable in a wide range of decision scenarios. It seems quite specific to Bayesian-style probability scenarios. I think Martin has a more general version in mind, but I'm not sure what it is.
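One intuition behind this family of results is easy to see in a toy simulation (entirely my own sketch, not Hilbert's actual model): store a probability with symmetric Gaussian noise on the way into memory and again on the way out, truncating to the valid range [0, 1]. Even though the noise itself is unbiased, recalled extreme probabilities drift towards the middle, a conservatism-like effect. The noise level of 0.1 is an arbitrary choice for illustration.

```python
import random

def noisy(p, sd=0.1):
    """Add symmetric Gaussian noise, truncated to the valid probability range.
    Truncation at the boundaries makes the *observed* error asymmetric
    even though the underlying noise is unbiased."""
    return min(1.0, max(0.0, p + random.gauss(0, sd)))

def recall(p, sd=0.1):
    # Noise is applied twice: once when storing, once when retrieving.
    return noisy(noisy(p, sd), sd)

random.seed(1)
for true_p in (0.05, 0.5, 0.95):
    mean = sum(recall(true_p) for _ in range(100_000)) / 100_000
    print(f"true {true_p:.2f} -> mean recalled {mean:.3f}")
```

Running this, the recalled mean for 0.05 sits above 0.05 and the mean for 0.95 sits below 0.95, while 0.5 is recalled roughly faithfully: beliefs near the boundaries regress towards the centre.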
- David Tannenbaum looked at the ethical values of consumers. He measured experimentally the degree to which people hold "protected" values - that is, values which they are not willing to trade off against others. Typical unprotected values might be price, quantity or quality - the kind of things we trade off in our product purchases all the time. Protected values are typically based in ethics - for example, has the rainforest been cut down to make the desk I'm buying? His experiments suggest that consumers start with a lexicographic decision-making process, seeking a do-no-harm choice which makes no impact at all on their protected value; if no such choice is available, they revert to a traditional process in which they are willing to make trade-offs if the price or quality difference is sufficient.
My view: I am not sure if it is correct to model protected values as all-or-nothing, rather than just as very strong preferences. No matter how deeply I care about the rainforest, I might not be willing (or able!) to spend $1 million to buy a desk which does not damage it. However, to a first approximation this is probably a useful model. There are useful applications for brands here: one implication could be that a brand which offers a do-no-harm option (the no-rainforest choice) can take advantage of a high willingness-to-pay among the consumers who care about that value. An analysis of bundling shows that firms are likely to sell products which have high quality in multiple dimensions at once - so you can buy organic chocolate which also tastes good and has beautiful packaging, but nobody sells organic chocolate which tastes worse than a regular Cadbury's bar. I'm not sure if this bundling effect operates differently for protected values, but David's comment after the talk suggested that it might be.
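The two-stage rule can be sketched in a few lines of code (my own illustration of the process as I understood it from the talk; the field names and the trade-off weights are invented):

```python
def choose(options):
    """Two-stage choice: lexicographic on the protected value first,
    then an ordinary price/quality trade-off within the surviving set.
    Each option is a dict with 'harms_value' (bool), 'price', 'quality'."""
    no_harm = [o for o in options if not o["harms_value"]]
    pool = no_harm if no_harm else options  # stage 1: do-no-harm screen
    # Stage 2: conventional trade-off (the 0.1 weight is illustrative only).
    return max(pool, key=lambda o: o["quality"] - 0.1 * o["price"])

desks = [
    {"name": "rainforest", "harms_value": True,  "price": 100, "quality": 9},
    {"name": "certified",  "harms_value": False, "price": 150, "quality": 7},
]
print(choose(desks)["name"])  # the do-no-harm desk wins despite a worse trade-off
```

If every option harmed the protected value, the screen would pass everything through and the choice would collapse to the ordinary trade-off, which matches the reversion behaviour David described.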
- Dan Cavagnaro showed a useful methodology, called Adaptive Design Optimisation, for running experiments to distinguish between different theoretical models. Instead of using a fixed set of experimental tests designed in advance, the software automatically selects, in real time, the test which will best discriminate between the models under consideration, given previous responses.
My view: an extremely powerful approach, indeed one which I'm incorporating (in a different form) into our software offering. A simple example: if I'm testing willingness-to-pay for a product, I might start by testing consumers at a £50 price point, planning to test £30 and £70 afterwards. However, if I find that all of my initial test subjects are willing to pay £50, there is little point testing the £30 level. Instead, the test software should adjust to test an £80 price point and then perhaps £120. I'm not sure whether Dan's software will be precisely applicable to what I do, as it is about distinguishing between models rather than distinguishing between parameter values. But it's good to know someone else is applying a similar approach - and, it appears from his results, with striking success.
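My simpler parameter-search version of the idea amounts to a binary search on willingness-to-pay: always test the price that is most informative given the answers so far, i.e. the midpoint of the remaining interval. A minimal sketch (the bounds and the simulated consumer are made up; this is my approach, not Dan's ADO):

```python
def wtp_search(would_pay, lo=0.0, hi=200.0, rounds=6):
    """Narrow down a consumer's willingness-to-pay by repeatedly testing
    the midpoint of the interval still in play."""
    for _ in range(rounds):
        price = (lo + hi) / 2
        if would_pay(price):
            lo = price  # accepted: WTP is at least this price
        else:
            hi = price  # refused: WTP is below this price
    return lo, hi

# Simulated consumer whose true willingness-to-pay is £85:
lo, hi = wtp_search(lambda p: p <= 85)
print(f"WTP between £{lo:.2f} and £{hi:.2f}")
```

Six rounds shrink a £200 interval to about £3 of uncertainty; a fixed grid of six pre-planned price points could not come close to that resolution.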
- Sig Mejdal lightened the tone with a fun discussion of how he persuaded the St Louis Cardinals to hire him as a quantitative analyst, to help them select the best new players from each year's draft. It took him three years to convince them, despite Michael Lewis's bestselling book Moneyball, which showed how the Oakland A's did the same thing eight years ago with amazing success.
- Jennifer Trueblood described a "quantum" model of judgment, applied to how jurors decide the guilt of an accused person after hearing prosecution and defence arguments. I usually have a real problem with people making this kind of dodgy analogy from physics (especially quantum theory) to human behaviour. However, I read this as being merely an adaptation of quantum theory's mathematical techniques, rather than an attempt to claim Heisenberg's Uncertainty Principle actually applies to juries (which is the kind of thing you'll hear occasionally from social scientists).
My view: the mathematics was too dense to convey in a 20-minute talk - even for me, with training in quantum physics - but it could be a useful model. I do have doubts on two points: one, I don't see how the model would plausibly arise from our mental capabilities; and two, even if it does, it will be very hard for most researchers to apply this model in practice, due to the complexity of the mathematical concepts used. Still, I am hoping to find out more before the weekend finishes.
- Laurence Maloney proposed that essentially all probability biases can be derived by assuming that people remember the log of probability rather than the probability itself, and that a constant error term is applied to the log.
My view: this could be a really fundamental result, if further research bears it out. There is a plausible cognitive mechanism for it (sampling behaviour which is proportional to the log of the population size) and it can be easily applied. Worth watching.
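One consequence of the log-memory idea is easy to demonstrate (again a toy sketch of my own, with an arbitrary noise level, not Laurence's analysis): if memory stores log(p) with constant-variance symmetric noise, the recalled probability exp(log p + eps) is inflated by the same multiplicative factor at every p, since for Gaussian eps the expectation E[exp(eps)] = exp(sigma²/2) > 1.

```python
import math, random

def recall_prob(p, sigma=0.5):
    """Recall p via a noisy log-space memory: constant, symmetric noise
    on log(p), then exponentiate back to a probability."""
    return math.exp(math.log(p) + random.gauss(0, sigma))

random.seed(0)
factor = math.exp(0.5 ** 2 / 2)  # theoretical inflation, about 1.13
for p in (0.01, 0.1, 0.5):
    mean = sum(recall_prob(p) for _ in range(200_000)) / 200_000
    print(f"p={p}: mean recall {mean:.4f} (theory {p * factor:.4f})")
```

Note the sketch ignores the [0, 1] boundary: for large p the noisy recall can exceed 1, and clipping it back would reintroduce boundary effects like those in the noisy-channel model above.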
- Anuj Shah gave a very entertaining talk about some work he's done with Eldar Shafir and Sendhil Mullainathan on why people take out payday loans and otherwise behave against their long-term interests. He's a great speaker and someone to watch in the popular behavioural press in the future. The hypothesis is that if people are resource-constrained (poor) they have fewer resources available to help them make the right decision, meaning they are likely to overborrow and will end up even poorer in the long term.
My view: good, solid results, though I think there is a different potential explanation worth considering. It could be that when we have money available, we find it much easier to envisage the alternatives we could be enjoying, and to regret not having them. If I don't have any money - or the ability to borrow it - I won't much miss the restaurant meal or holiday that I can't have. But if I could put that vacation on my credit card, and am simply choosing not to, the potential enjoyment is much more salient and I'm quite likely to cave.
And it's nice to meet some people in real life whom I've previously encountered only in journal papers and the occasional TED talk.