Wednesday, 30 September 2009

Fame at last - watch me debating some libertarians

My panel on behavioural economics from earlier this year is now available at WorldBytes. Thanks to The Lantern Group for spotting this before I did.

If you look hard enough you may spot missmarketcrash in the audience.

Monday, 28 September 2009

The economics zeitgeist, 27 September 2009

This week's word cloud from the economics blogs. I generate a new cloud every Sunday, so please subscribe using the RSS or email box on the right and you'll get a message every week with the new cloud.

I summarise around four hundred blogs through their RSS feeds. Thanks in particular to the Palgrave Econolog who have an excellent database of economics blogs; I have also added a number of blogs that are not on their list. Contact me if you'd like to make sure yours is included too.

I use Wordle to generate the image, the ROME RSS reader to download the RSS feeds, and Java software from Inon to process the data.

You can also see the Java version in the Wordle gallery.

If anyone would like a copy of the underlying data used to generate these clouds, or if you would like to see a version with consistent colour and typeface to make week-to-week comparison easier, please get in touch.
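For the curious, the word-counting step at the heart of the cloud is conceptually very simple. Here is a minimal Python sketch of it - the real pipeline is in Java, the stopword list and sample texts below are made up, and the feed-downloading and Wordle steps are omitted entirely:

```python
import re
from collections import Counter

STOPWORDS = frozenset({"the", "a", "an", "of", "and", "to", "in", "is", "on"})

def word_frequencies(posts):
    """Count word occurrences across a collection of post texts,
    skipping a small stopword list."""
    counts = Counter()
    for text in posts:
        words = re.findall(r"[a-z']+", text.lower())
        counts.update(w for w in words if w not in STOPWORDS)
    return counts

# Toy stand-ins for the text of downloaded feed entries
posts = [
    "The crisis in the banking sector and the debate over bankers' pay",
    "Rational expectations and the crisis in macroeconomics",
]
top_words = word_frequencies(posts).most_common(3)
```

The frequency table is all Wordle needs: it sizes each word in proportion to its count.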

Friday, 25 September 2009

Supply and demand for bankers

Tyler at MR has been asking recently whether the structure of bankers' pay caused (or contributed to) the financial crisis. Matt Yglesias also has something to say about it (via the above link).

I agree with the general skepticism about this - it is a bit too easy as an explanation.

Limited liability on the other hand is definitely a contributor - shareholders' interests are actually almost the same as those of employees: take lots of risk as the upside is much higher than the downside. If a bank makes $100 billion, shareholders and employees get to share it out. If it loses $100 billion, shareholders lose their whole stake, employees lose a large part of theirs, but creditors are likely to lose many times more. Or if the creditors in question are insured depositors, the taxpayer loses out instead.
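The asymmetry shows up clearly in a toy payoff calculation. This is only a stylised sketch - the stakes, the 50/50 split of gains and the ordering of losses are invented numbers, not a model of any real bank's capital structure:

```python
def payoffs(bank_pnl, equity=10, employee_stake=5):
    """Stylised split of a bank's profit or loss (figures in $bn).
    Gains are shared by shareholders and employees; losses are capped at
    each group's stake, with the remainder falling on creditors (or the
    taxpayer, if the creditors are insured depositors)."""
    if bank_pnl >= 0:
        half = bank_pnl / 2
        return {"shareholders": half, "employees": half, "creditors": 0.0}
    loss = -bank_pnl
    sh = min(loss, equity)                 # shareholders lose at most their stake
    emp = min(loss - sh, employee_stake)   # employees lose at most theirs
    cred = loss - sh - emp                 # everything else hits creditors
    return {"shareholders": -sh, "employees": -emp, "creditors": -cred}
```

With these numbers, a $100bn gain pays shareholders and employees handsomely, while a $100bn loss costs them $10bn and $5bn respectively and leaves creditors with the remaining $85bn - exactly the convex payoff that rewards risk-taking.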

Ultimately, banks manage much more of their depositors' and other lenders' money than shareholders' money.

So this had some impact on risk-taking.

But I am starting to come round to the view that simple supply and demand accounts for the high rewards of bank staff, and indeed of bank shareholders. I suspect that the principal-agent problem has as much impact here as it does on risk-taking: the supply of new bank staff is deliberately restricted by insiders, because more competition would drive down salaries and, soon enough, overall returns to the finance sector.

How many new graduates does Goldman hire every year? How many hours a week do they work on average? How many people does that imply they could hire if they didn't limit the numbers?

And how many meaningful new entrants are there in the investment banking sector every year? Not as many as you'd think, given the returns available.

Thursday, 24 September 2009

Global Debt Clock - public AND private

The Economist has a Global Public Debt Clock on their site now, along similar lines to the American and Irish debt clocks but monitoring the total public debt of the whole world.

But lots of people are concerned by the buildup of private debt - consumer and corporate - as well as debt incurred by governments. So, inspired by previous conversations with Nick Rowe, I decided to extend it to show the net total of ALL debt in the world - public and private.

The clock will automatically update to the correct figures in real time.
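For anyone curious how a clock like this works under the hood: it simply interpolates a base estimate forward using an assumed annual growth rate. A sketch in Python - the figures below are placeholders, not the actual estimates behind the clock:

```python
import time

def debt_now(base_debt, annual_growth, base_time, now=None):
    """Linearly interpolate a debt figure from a base estimate.
    base_debt is the estimated total at base_time (seconds since epoch);
    annual_growth is the estimated increase per year. Both would come
    from published public and private debt statistics in a real clock."""
    now = time.time() if now is None else now
    seconds_per_year = 365.25 * 24 * 3600
    return base_debt + annual_growth * (now - base_time) / seconds_per_year
```

A page script just calls this every second or so and redraws the number, giving the illusion of a continuously ticking total.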

Here it is - enjoy:

Wednesday, 23 September 2009

Irrational expectations

Frydman and Goldberg (authors of Imperfect Knowledge Economics, which I still must get around to reading) are in the Economists' Forum with the sexily titled article "An economics of magical thinking". They believe that nobody can predict economic crises and that swings in asset prices are natural. Because all knowledge is intrinsically uncertain, and nobody has access to it all, the market cannot find a "true" price for assets.

So far I agree. But the next part of the argument is suspect:
Behavioural economists have uncovered much evidence that market participants do not act like conventional economists would predict “rational individuals” to act. But, instead of jettisoning the bogus standard of rationality underlying those predictions, behavioral economists have clung to it...

The behavioural view suggests that swings in asset prices serve no useful social function. If the state could somehow eliminate them through a large intervention, or ban irrational players by imposing strong regulatory measures, the “rational” players could reassert their control and markets would return to their normal state of setting prices at their “true” values.

This is implausible, because an exact model of rational decision-making is beyond the capacity of economists - or anyone else - to formulate.
Yet again Nudge seems to have made people think that behavioural economics is all about state intervention. Far from it. But even if we accept the argument above, see where they take it next:
...sometimes price swings become excessive, as recent experience painfully shows. Even accepting that officials must cope with ever-imperfect knowledge, they can implement measures - such as guidance ranges for asset prices and changes in capital and margin requirements that depend on whether these prices are too high or too low - to dampen excessive swings.
So behavioural economists are not allowed to guide the state to limit asset price swings...but officials must implement measures to dampen excessive swings?

Where is the sense in this argument?

The article also confusingly conflates three separate concepts: individual rationality, the efficient markets hypothesis and rational expectations. These are quite distinct, and invalidating one does not throw out the others.

Oddly enough, the whole article, though it thinks it is a critique of rational expectations, is actually a good argument for it. A more coherent argument comes from Arnold Kling:
Gilles Saint-Paul writes:
"...any macroeconomic theory that, in the midst of the housing bubble, would have predicted a financial crisis two years ahead with certainty would have triggered, by virtue of speculation, an immediate stock market crash and a spiral of de-leveraging and de-intermediation which would have depressed investment and consumption. In other words, the crisis would have happened immediately, not in two years, thus invalidating the theory."

David K. Levine writes:
"Do you believe that it could be widely believed that the stock market will drop by 10% next week? If I believed that I'd sell like mad, and I expect that you would as well. Of course as we all sold and the price dropped, everyone else would ask around and when they started to believe the stock market will drop by 10% next week - why it would drop by 10% right now."

[and Arnold says]
...if policymakers saw a crisis coming, then they would take steps to stop it, so that it would not happen. Thus, any crisis that does occur has to be one that was not forecast.
Indeed, a simple logical argument arises from these observations.

If market expectations in general (that is, the expectations of the median investor) differ from current prices, prices will immediately move to match what people think they are going to be. Thus, a market can only be stable if prices are exactly what people expect them to be - or, equivalently, if people's expectations are an exact prediction of the real outcome of the market.
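The convergence process can be written as a toy simulation - purely illustrative, with made-up prices:

```python
def adjust(price, expected_price, rounds=5):
    """If the median investor expects a different price, trading moves
    the market there immediately; the only stable state is one where
    the current price already equals the expectation."""
    history = [price]
    for _ in range(rounds):
        price = expected_price   # arbitrage jumps the price at once
        history.append(price)
    return history
```

Start the price anywhere other than the expected level and it jumps there in one step; start it at the expected level and nothing ever moves.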

And hey presto! Rational expectations theory.

And much as I hate to admit it, there might just be some truth to the idea of rational expectations. The market is a powerful mechanism for transmitting information - as long as it's liquid, moves immediately to a clearing price, short and long positions are available, and traders are not capital-constrained.

Equivalently, provided a given market has mechanisms enabling it to stabilise (at least in the short run), rational expectations can hold - and should keep prices at a sensible level relative to other things in the economy. Where those mechanisms are absent, turbulence, overshooting or full-scale bubbles can emerge.

Tuesday, 22 September 2009

What's wrong with cover pricing?

The OFT has just fined a bunch of building companies £129 million for "cover pricing", which is described as "the practice of submitting an artificially high bid for a contract which you do not intend to win".

But, I thought, companies do that all the time. If a client comes to me with a project that I don't want to do for £20k, I may well be willing to do it for £50k. So I put in £50k, fully expecting not to win, but if I do, then great.

Hearing the vague explanation on the BBC this morning, I figured there must be something more to it. This article from Contract Journal explains it better.

The issue is not actually the high prices as such. It's the fact that there's collusion between suppliers, ensuring there is no real competition for the tender. As CJ says:
What is cover pricing?
Cover pricing is when a contractor bids for a job with no intention of winning the tender.

For example, Company A has been invited to tender by a client, but for various reasons, it has no interest in winning this particular job. However, Company A wants to stay in favour with this client, and stay on its tender list.

So, Company A contacts Company B, which is also bidding for the job, and asks for a 'cover price'. Company B supplies Company A with a price roughly 5%-10% higher than its own bid. Company A supplies this price to the client as its own, safe in the knowledge it will not win. Company B is happy with the arrangement as it has not given away knowledge of its actual bid.

Why is cover pricing illegal?
The process of putting in an artificially high bid is not a breach of competition law - however, the brief conversation between two bidders which confirms it is sufficiently high not to win is an infringement.
So - collusion is anticompetitive and illegal - no quibbles on that.

But this still allows for some uncomfortable scenarios. What if Multicorp contacts Joe's Builders as a potential subcontractor on the job? Multicorp may genuinely wish to put the whole job out to Joe, and add a 5-10% margin for profit. It may not be aware that Joe is also bidding directly. Joe happily tells Multicorp his (objectively determined) price for the job, Multicorp adds a margin and bids, and Joe also bids to the end client at the same price he quoted Multicorp.

Now most tenders do contain a clause forbidding companies from both bidding directly and also being a subcontractor to another bidder, which is presumably designed to stop this happening. But it does rely on Joe and Multicorp being willing to share information on who the client is. In a more commoditised, less transparent market, Multicorp might well ask Joe for a quote without giving full details of the project or the client. In this case, the bidders might inadvertently be colluding - with exactly the same effect on competition and on the price paid by the client - but be completely unaware of it.

I believe that client procurement procedures are partly to blame for this practice. This doesn't excuse collusion, but the customer is partly in control of this situation.

A friend points out that one reason for clients to request a tender is because they don't know an accurate market price for the job. In this case, they are relying on suppliers giving an honest answer as part of their price discovery process.

While I can see the logic of this argument, it seems to show a naive faith in market liquidity. There are any number of reasons apart from collusion why this process might fail to produce a representative or indeed low price. Bidding costs, lack of understanding of the project, fluctuations in resource availability affecting the supply curve, uncertain risk (and the market for lemons), the incentive to underprice the core project and overcharge for variations - there are many reasons why competitive tenders may not work.

And why should clients expect suppliers to provide them with free market intelligence? Unfortunately many clients will use a new supplier as a stalking horse - to get leverage in a negotiation with an existing supplier, having no intention of switching. Suppliers know this, and are wary of spending a lot of time on a bid which they have little chance of winning. Therefore, a bidder is likely to include a high contingency element unless they have confidence that there is a reasonable chance of winning.

At the heart of this problem seems to be the dysfunctional practice of requiring suppliers to bid for every contract in order to stay on the list for the next opportunity. Surely if a supplier is not interested in a project they should be able to opt out without penalty?

And on the other side, surely suppliers who don't want a contract should just increase the price until they do want it. Never mind talking to your competitors - just give whatever figure will make it profitable and attractive for you. If that happens to be the lowest bid, then lucky you - you have won a project you didn't expect, at a price you're happy with.

If, on the other hand, you are sharing information with competitors because you don't trust your own ability to set the correct price - that is, if you're worried about winner's curse - then perhaps you shouldn't be in the industry in the first place.

Again, the illusion of efficient markets and rationality misleads market participants into suboptimal behaviour. To operate effectively, both clients and suppliers need to take into account the cost of acquiring information and the uncertain nature of the service to be delivered.

Monday, 21 September 2009

Cognitive/behavioural links and macroeconomic models

Everyone is looking for new macroeconomic models these days. Paul Krugman's recent article has prompted a new round of intense discussion on the matter.

It seems that there are two major classes of proposal emerging: those based on cognitive/behavioural insights, and those which incorporate financial firms as part of the model instead of just assuming they transparently pass demand and money around the economy.

Financial models include "New Models for a New Challenge", Cecchetti, Disyatat and Kohler's proposal (via Mark Thoma - though a number of the comments on his posting point back towards the behavioural option). Another is Kobayashi's, which I may have mentioned before.

I've explored the behavioural models more in past columns but I hadn't noticed this conference in Australia which looks to have had some interesting presentations. Krugman hints at behavioural explanations in his commentary but has not yet suggested a model incorporating them. George Akerlof's Nobel prize acceptance speech in 2001 introduced some ideas from behavioural research into macroeconomics but his recent book, "Animal Spirits", co-authored with Robert Shiller, has, disappointingly, not moved the discussion forward much. Arnold Kling has some useful comments here. I'm still waiting for a model that will provide a convincing behavioural explanation of major macroeconomic phenomena.

Some people such as Rob Killick have a different objection to the behavioural models - not an economic but a political one. He complains that the idea of cognitive biases puts the blame on people for the crisis and lets "the system" off the hook. I think this very much depends on how the insights are used - my advice would be that behavioural insights should absolutely be used to change the system, not to excuse it.

Some such as Paul Mason believe there will be no revolution - his language is reminiscent of some of the heterodox economics arguments from these guys (who also discuss Krugman's article here).

Outside of the quest for new models, there are interesting discussions going on about the existing models. Scott Sumner's ideas are interesting and a more immediately relevant (but more technical) aspect of Krugman's article is the discussion of Say's Law and Keynesianism, on which I'll have more to write soon.

Sunday, 20 September 2009

The Alchian-Allen theorem, and how we learn preferences

Apart from Alchian and Allen, Tyler Cowen is the only economist mentioned in the Wikipedia entry for the Alchian-Allen theorem. Is this because he is the pre-eminent commentator on that theorem in the contemporary economics world? Or just because he has a popular blog?

This post won't answer that question, but it is prompted by Tyler's thoughts on the theorem in his book, and in an interesting Econtalk podcast with Russ Roberts.

The theorem states that if a fixed unit cost is added to the prices of all products in a set, relative consumption will shift towards the more expensive ones. Or more simply: high shipping costs favour higher-quality goods.

The classic example is apples. In Somerset (arguably the apple capital of Britain, for readers unfamiliar with its many joys) let's imagine a juicy, hand-picked, top quality Pink Lady apple costs £0.10; while a tasteless, mass-produced Golden Delicious costs £0.02. That's a fifth of the price! Local consumers of apples will therefore be likely to eat a fair number of Golden Deliciouses.

But when the apples are transported to London, with a £1 shipping and packing cost, the Golden Delicious now costs £1.02 and the Pink Lady £1.10. The (proportional) difference is almost nothing, and therefore Londoners should eat far more Pink Ladies. Of course they won't consume - in absolute terms - more than Somersetters, but the balance between the two varieties will be tipped much more towards the more expensive type.

Now this simplistic explanation ignores many factors such as the likely volumes of each product shipped, the apportionment of fixed costs and the higher margins that will be obtained by growers and retailers for the more desirable product - which will counter part of this effect. But the basic microeconomic insight is sound.
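The arithmetic behind that insight takes only a couple of lines, using the (invented) apple prices above:

```python
def relative_price(cheap, premium, fixed_cost=0.0):
    """Ratio of the premium good's price to the cheap good's,
    before and after a fixed per-unit cost is added to both."""
    return (premium + fixed_cost) / (cheap + fixed_cost)

local = relative_price(0.02, 0.10)          # Somerset: Pink Lady is 5x the price
shipped = relative_price(0.02, 0.10, 1.00)  # London, after £1 shipping: about 1.08x
```

The fixed cost leaves the absolute £0.08 gap untouched but shrinks the relative gap from 5:1 to nearly 1:1, which is why the shipped market should tilt towards the better apple.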

So if the theorem is sound, why then - as Tyler points out - does it not appear to be true*? In reality people seem to eat much better apples in Somerset than in London. The best lobster is found in Maine, not Nebraska; and do people in Australia really drink better Californian wine than people in California?

Well, in the last case I suspect they might. Again not in absolute terms, but certainly in relative ones. The cheapest Californian wines probably don't ever find their way to Australia (but then, do the most expensive ones?)

In any case, for many other products the insight doesn't seem to hold. If in London you want the best quality steaks, they will always come from Suffolk or Scotland, not Ireland or France.

Tyler hints at the answer in the podcast - it's about how we learn what tastes good. People who eat apples all the time have a much better knowledge of what is a top quality apple; the farmer who eats her own steak every day will develop a different set of preferences - and much higher standards - than the city-dweller who only gets it once a month.

So perhaps the theorem is true when comparing two individuals with identical preferences. If that beef farmer moves to the city perhaps she will want to eat only the best steaks available. But once we let the clock run and allow preferences to evolve over time, the effects of high consumption may lead to different long-run results.

Wine, being a more globalised product, lends itself to stable preferences, which differ less by location, and thus the theorem is more likely to hold. Indeed the relatively low shipping and storage costs of wine may indicate that the theorem only works when it doesn't work too well.

Two final thoughts.

There's a famous behavioural result showing that people are much more sensitive to relative than absolute prices. More people are willing to travel across town to get a $25 saving on a $100 microwave than a $25 saving on a $20,000 car. Perhaps the A-A effect is somehow at play here?

Secondly, just because Tyler says the theorem isn't true, does that mean it isn't? He, after all, lives in a city and famously travels all over the world sampling the food when he gets there. No wonder he gets the best of everything when he arrives; it doesn't mean that he's eating what the locals eat. As for that Maine lobster: want to bet that the people who consume the most expensive specimens are those from Chicago who spent $1500 on their New England vacation?

Try this Google search if you want to see some empirical evidence. I'm left without any firm conclusions on whether the theorem really works, but with an extra insight at least into how preferences are created. Traditional microeconomics takes consumer preferences as an externally given, fixed set of utility functions; but to understand how the world really works we need to be able to incorporate changing preferences endogenously into our models.

* Note that Tyler says the theorem is true for cultural goods - he casts doubt on it only for food products, which are the traditional exemplar of the theorem.

The economics zeitgeist, 20 September 2009

This week's word cloud from the economics blogs. I generate a new cloud every Sunday, so please subscribe using the RSS or email box on the right and you'll get a message every week with the new cloud.

I summarise around four hundred blogs through their RSS feeds. Thanks in particular to the Palgrave Econolog who have an excellent database of economics blogs; I have also added a number of blogs that are not on their list. Contact me if you'd like to make sure yours is included too.

I use Wordle to generate the image, the ROME RSS reader to download the RSS feeds, and Java software from Inon to process the data.

You can also see the Java version in the Wordle gallery.

If anyone would like a copy of the underlying data used to generate these clouds, or if you would like to see a version with consistent colour and typeface to make week-to-week comparison easier, please get in touch.

Saturday, 19 September 2009

Pricing, utility and the four types of good

The Office of Fair Trading is conducting a market study on advertising and pricing.

This is of interest to me because one of my company's services is advising clients on how to structure their prices. Finding the right price structure is in the interests of both supplier and consumer; although pricing can look like a straight zero-sum game where any gain by the supplier is a loss to the consumer, this is not at all true in general. Thanks to the OFT for pointing out the study to me.

The authors have requested comments on what its scope should be; I have made the following submission:

Consumers' experienced utility of a good is not always predictable in advance, and pricing can be a key factor in several situations relating to this. Purchases can broadly be classified into four types:

In the first type, consumers have a good prior understanding of the utility they can expect to gain. This is the type of purchase dealt with by rational choice theory. Many of the pricing practices you propose to examine deal with this type of good. The key challenge for consumer protection in these cases is to ensure that consumers can clearly see the price, compare it with other offers in the market, and make the purchase that gives them maximum consumer surplus.

For example: a consumer planning a flight to Italy may know in advance that they are willing to pay up to £200. In this case the most important goal is to ensure competitive, fair comparisons between a flight costing £60 and another costing £80, so that the consumer has the ability to maximise their surplus. Another important consideration, though unlikely to arise in a competitive market, is to ensure the consumer does not unwittingly end up paying more than £200.

In the second type of good, consumer utility from the purchase is fixed in advance, but the consumer does not know what that utility is. In these cases, price is one of the most important signals on which buyers rely to predict their experience of the service. A restaurant meal priced at £59 is likely to be better than one priced at £19, and in the absence of other clear signals, consumers are likely to use that fact to help them make the best decision on which goods to purchase.

The third type of good comprises those for which consumers' preferences are not even fixed prior to the purchase. An example might be a buyer purchasing their first car; the use of the car itself is likely to create a brand loyalty in the mind of the consumer, affecting their preferences for future purchases. Price, once again, is a key influence on the shaping of consumer preference - people often use the price of the product as one of the determinants of their preferences. Some consumers derive pleasure from the very fact that they paid £100 for their trainers, £300 for a meal or £1.2 million for their house.

Finally, the fourth type of good (or more often, service) is where the exact nature of the good itself is not determined prior to purchase. Many business services fall into this category; and the agreed pricing structure is one of the factors that influences the nature of the service that will be designed and delivered. For instance, if two businesses agree a 'structured pricing' model in which the supplier's reward will be a percentage of the profit made by the customer, then the supplier will be incentivised to provide a different kind of service than if they are paid a fixed price or an hourly rate.

My suggestion is to acknowledge within the scope of the study the differences between these four types of good or service. In the first type, it's usually clear that the consumer's interest is in achieving the lowest possible price, and the producer's interest in achieving the highest price. In the second, third and fourth types, consumer and producer interests are less clear a priori, and so the same factors and remedies will not always apply.
I am not sure if these four types cover all categories of good; readers' suggestions are invited for any major scenario that is not captured by them.

Tangentially, a few more or less relevant pricing-related posts are:
  1. Are hourly rates justifiable? by Ron Baker
  2. Krugman on ketchup economics from Consumerology (particularly the last paragraph or two)
  3. The Grand Unified Theory on the economics of free at Techdirt
  4. The price of Dan Ariely's Kiss from Curious Capitalist
  5. Calculating consumer happiness at any price in the New York Times

Friday, 18 September 2009

Behavioural links and comments 2009-09-18

  1. Dan Ariely has done some really interesting experiments about how to induce honesty or dishonesty. He thinks the results are a sad fact about human nature, but I don't take that view. For me, it's just more insight into how humans behave - which helps give us the power to improve.

  2. The Wall Street Journal has a couple of examples of businesses using behavioural economics to influence their customers' behaviour. Revealingly, these examples are mostly of the 'Nudge' type - an electricity company helping customers to reduce energy use, or a pharmacy enrolling more customers in home-delivery services to help the customer save money. This isn't what's relevant to most businesses, though. The third example is much more apposite - an insurance firm using (cognitive) behavioural insights to upsell more supplementary services. Much harder for this company to pretend it's just trying to help its customers out - but much more honest for them to put their story in the WSJ! Actually, though, it wasn't the customer who placed the story but the consultancy, Diamond, that advised them to do it. Who will now, I'm sure, have generated plenty of enquiries from other firms wanting to do the same. Good for them.

  3. Some similar insights from the Harvard Business blog (also from Diamond, I have just noticed) - Jetblue had to suspend an 'all-you-can-eat' flights programme because - unlike homo economicus - real people's utility curves stop following nice smooth linear shapes when they get close to the zero bound. The article identifies a nice checklist for behavioural work: framing, aversion, social context and timing (FAST).

  4. A famous result in the economics community (of the "aren't people thick" type) is referenced in this article from Free Exchange. They quote Bryan Caplan reporting that when researchers:
    "...asked the public to name the two "largest areas of government spending" from a list of six areas (foreign aid, welfare, interest on the federal debt, defense, Social Security, and health)"
    most people picked foreign aid and welfare. Foreign aid, however, makes up less than 1% of federal spending and welfare is less than half the cost of either defense or social security.

    Now is that a sign of an ill-informed public? Sure. But does it show a specific prejudice about foreign aid (which is how the results are typically interpreted) or just a bias towards picking the first item in a list? It's well-known that people pay more attention to the first couple of items in a list than to the rest.

    The original research can be found here but unfortunately only in summary form, and without giving the order in which the list was presented to respondents. It's an important question with big implications for how people interact with information. It's very easy to draw simplistic conclusions from surveys. Sometimes those conclusions are justified, sometimes not.

Thanks to Marginal Revolution and Consumerology (links in right-hand column) for pointers.

Agency problems at Lloyds

Is Lloyds, as Robert Peston suggests, really acting on the instructions of its shareholders in trying to withdraw from GAPS (the Government Asset Protection Scheme)?

Or is it, as seems more likely, trying to give its management more power?

The EU and Treasury intervention that he refers to will restrict the freedom of Lloyds executives to hang onto the empire they've built; and incidentally, to set their own pay and credit policies.

Selling off the Halifax branches might well be in the interests of Lloyds shareholders, if not in the interests of the board.

What's more: if the government does wish to make more credit available to UK businesses and consumers, one way to do it is to make sure banks have lots of capital available to support lending. If Lloyds really can raise £20 billion of new capital, wouldn't it be better if that capital could support £100 billion of new lending, rather than just replacing the GAPS scheme without increasing loan capacity at all?

With the economy in the fragile state it's in, this cannot be a good move for the state and probably not for Lloyds shareholders either.

This sort of power structure and incentive, much more than the details of pay and bonuses, is the real problem with the governance of banks in the UK.

Thursday, 17 September 2009

Culture not constitution? The economic consequences of Mr Brown

A slightly misnamed book, as its subject matter is New Labour's progress on social policy goals, not economics. But, like the names of most policy debates today, the title is taken from an old Keynes quotation, so I guess that's OK.

The author, Stein Ringen, led a debate at the RSA this week. The core thesis of the book is that the Labour government which took power in 1997 was well-intentioned, sincere, competent, visionary and had the economic wind at its back - but still achieved almost nothing in its four key social goals: health, education, reduction of crime and poverty.

Ringen suggests that this failure arises from a built-in conservatism in the British constitution and political system. Because of centralisation of power in the prime minister's and chancellor's offices, there is nowhere in the political process to conduct a well-informed critical debate. Because of politicians' inability to "mobilise the effort of millions" towards new ideas, all policy becomes technocratic. And technocratic policies, however well-managed, do not bring about real change.

Despite the title - which is explained when you realise that Ringen includes all domestic policy under his banner of 'economics' - there are some simple economic blind spots in the book. He says:
The combination, in the public services, of more money and less productivity is a mystery. Why would doctors and teachers respond to more money with working less productively?
But to any economist, this is not a mystery at all. It's called "diminishing marginal returns", and what happens is this: some of the extra money goes to paying existing doctors and nurses more, but more of it goes to hiring new ones. If you increase the number of doctors by 20%, there is no way they are going to do 20% more work. Apart from anything else, there are not 20% more patients to treat. Also, new employees are less productive than existing ones; if they weren't, they would have been hired already.

So the working hours of doctors are reduced instead; the existing workload is shared out among more workers, and while the total output does go up, it goes up less than the number of doctors. Targets and wage inflation may (debatably) have an impact on productivity too, but they are second-order effects.
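
For anyone who wants to see the arithmetic, here's a toy sketch. The square-root production function and the staffing numbers are my own illustrative assumptions, not anything from the book:

```python
# Illustrative sketch of diminishing marginal returns in healthcare staffing.
# The concave (square-root) production function and the headcounts are
# assumptions for illustration only.

def output(doctors: float) -> float:
    """Patients treated, as a concave function of headcount."""
    return 100 * doctors ** 0.5

baseline = output(100)   # 100 doctors
expanded = output(120)   # 20% more doctors

extra_output = expanded / baseline - 1
print(f"20% more doctors -> {extra_output:.1%} more output")

# Output rises, but by well under 20%, so measured productivity
# (output per doctor) falls even though nothing has gone "wrong".
productivity_change = (expanded / 120) / (baseline / 100) - 1
print(f"Output per doctor changes by {productivity_change:.1%}")
```

With these (made-up) numbers, a 20% staffing increase buys roughly a 10% increase in output - which shows up in the statistics as a fall in productivity.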

But economics is not really the point of this book, so it's not quite fair to criticise it on those grounds. The real question is: is his diagnosis right, and will his prescriptions work?

It's very hard to confirm the diagnosis because we have no counterfactual. We don't know whether, if Labour hadn't been in power, health and education outcomes, crime and inequality would have been worse than they are. There is lots to debate in the detail of public sector performance since 1997. But let's say we accept the interpretation that Labour has managed to achieve virtually no meaningful social change in twelve years. What should be done to enable a future government to do better?

The solution Ringen proposes is:
  1. Parliament to retake its role as the primary seat of political authority; new legislation to be properly scrutinised and debated, and the executive to move to a position of leadership not management.
  2. A reinvention of local democracy.
  3. Replace all private funding of political campaigns with public finance.

But for a writer who criticises technocracy, this is a rather technocratic set of changes. Fortunately we can look at other polities where similar structures are in place, and see how they work.

For instance, the United States. Obama's healthcare proposals are a good example of the kind of social justice policy that New Labour might have tried to implement, had they inherited the US system. And the US constitution certainly meets at least two of Ringen's three criteria: there is no doubt that Congress is the real seat of (policy) power, and that there's a thriving local and regional democracy, much more powerful and independent than in the UK. American political funding is clearly far from perfect, but is arguably more regulated than in the UK; at least there is some role for public funding, which (apart from party political broadcasts) doesn't really exist here.

And yet, the kind of healthy scrutiny and debate that Ringen calls for doesn't seem to be happening at all. Instead, a vocal and unrepresentative minority shouts down the changes and manages to steer the debate onto its terms. Wouldn't this happen in the UK as well?

This minority is probably acting rationally - in a certain sense - with regard to its own material interests, but despite the rhetoric, has no commitment to working out the best solutions for the whole of society.

I wonder, therefore, if the real need is not for constitutional fiddling but for a change in political culture. I don't mean the culture of politicians, but the political attitudes and behaviours of the other sixty million of us. I am wary of the common left-wing idealisation of the continental European polities, but it does seem that in the Netherlands and Scandinavia, there's a more informed and constructive debate on most policy than in the UK or the US.

Surely the only way for leaders to "mobilise the effort of millions" is to have both the grass-roots structures to do so, and the cultural expectation that it is an appropriate thing to do. I doubt there are any shortcuts to this cultural change - education, consistent use of political capital, and appropriate policies in the public-service media will all make a difference, but only over a period of decades. It's hard to be sure that governments or any other institutions will be able to act consistently over a long enough timescale to create this change, but it will certainly take a lot more than three easy constitutional tweaks.

Update: Matthew Taylor, chief executive of the RSA, makes some similar points on his blog.

Tuesday, 15 September 2009

Loss aversion and utility in Formula 1

If you didn't see the Italian Grand Prix on Sunday and you're still planning to watch it, look away now. But really. It's been three days.

So for those who didn't see it and are not planning to watch it, think about this question.

Should the order in which drivers are placed affect the aggregate happiness of all fans? (Assume for now that all drivers and teams have the same number of fans.)

Surely not, right? No matter whether Rubens Barrichello wins or Robert Kubica does, people will be - on average - equally happy. There's a certain utility gained from your driver coming first, a lower amount for coming second, third, and so on. The total utility gained by all fans is the sum U(1st) + U(2nd) + ... + U(20th), and the only difference is the distribution of happiness between people.

And yet.

On Sunday, Lewis Hamilton was running in third place and ready to get on the podium. He entered the last lap a couple of seconds behind Jenson Button and trying to catch him, but without much chance. Suddenly he pushed a bit too hard, bounced off a kerb and crashed his car. Oops.

Hamilton fans must have been absolutely gutted. The rest of us may have felt a momentary glee, but mainly sympathy. Ferrari and Kimi Raikkonen fans such as myself were happy to see Kimi promoted into third place, but how much difference did it really make to us?

My hypothesis is that the Hamilton fans' loss of utility far outweighs the gain for the rest of us. While they naturally keep a stiff upper lip - at least on the evidence of the one watching it in the pub with me - they're absolutely eaten up inside.

And yet. If total utility comes additively and directly from your driver's position at the end of the race, it surely must be equivalent in any scenario. Kimi fans should inherit all the happiness the Lewis fans would have had; Sutil fans should be just as happy with his fourth place as Kimi fans would have been; and so on down the line, right back to Lewis, who gets all the undoubted joy that would have accrued to Timo Glock's 12th position (because the drivers placed 13th and lower were a full lap behind, Lewis is classified above them).

But surely Timo's (many) fans do not care that much that he came 11th instead of 12th. Even Kimi's podium position doesn't seem that interesting - after all, he won the previous race. The frustration of Lewis's fans must be much stronger than the pleasure from everyone else's; because the endowment effect of his long tenure in 3rd place triggers a powerful loss aversion when he loses it.
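
Here's a rough sketch of why the books don't balance. The 1/position utility function is an arbitrary assumption, and while the 2.25 loss-aversion coefficient is Kahneman and Tversky's estimate, applying it to race positions is entirely my own leap:

```python
# Sketch: additive utility vs. loss-averse utility for the last-lap swap.
# The utility function is an arbitrary illustrative assumption.

LOSS_AVERSION = 2.25  # Kahneman & Tversky's estimate; using it here is my leap

def position_utility(pos: int) -> float:
    """Assumed utility a fan gets from their driver finishing in `pos`."""
    return 1.0 / pos  # 1st = 1.0, 2nd = 0.5, 12th ~ 0.083

def fan_value(old_pos: int, new_pos: int) -> float:
    """Change in fan happiness, weighting losses more heavily than gains."""
    change = position_utility(new_pos) - position_utility(old_pos)
    return change if change >= 0 else LOSS_AVERSION * change

# Hamilton drops from 3rd to 12th; everyone from 4th to 12th moves up one.
hamilton = fan_value(3, 12)
everyone_else = sum(fan_value(p, p - 1) for p in range(4, 13))

print(f"Hamilton fans:       {hamilton:+.3f}")
print(f"All other fan bases: {everyone_else:+.3f}")
print(f"Net change in total happiness: {hamilton + everyone_else:+.3f}")
```

Under plain additive utility the net change would be exactly zero - the gains telescope to match Hamilton's loss. Multiply the loss by any coefficient above one and total happiness falls.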

So if we wanted to design the ideal sport to make people as happy as possible, what would it be? There's a common perception that a competitive game where the lead swings back and forth all the time is the most exciting kind of sport. But that just exposes fans to continual pain every time their team or player is overtaken. What we really need is a bit of stability.

Indeed, to maximise total utility, whoever first gets into the lead should stay there for the duration of the contest. Same for the driver who gets into second place, and for that matter every other position. You could even argue that the positions should be determined before the race even starts, so that people have - let's say - a day to get used to the position and build up a strong attachment to their driver's ultimate finishing result. A result which will not be threatened by any potential overtaking or mechanical unreliability.

Regular viewers of Formula 1 will of course recognise this precise state of affairs from the Monaco Grand Prix. Not to mention Valencia, Hungary and nearly every other racetrack in the modern sport. Overtaking has been almost eliminated by the design and regulation of racing cars since the 1990s.

Now we have all been led to believe that this is an unintentional consequence of technological development. But it's a funny thing...I'm starting to see that there's more logic than we ever suspected behind those aerodynamic regulations.

Bernie Ecclestone is surely not just a deviously clever businessman, but one of the best sports economists out there. He keeps us happy by sparing us pain and loss. In my catatonic state, I must go out now and buy some of whatever that sponsor is selling...

Monday, 14 September 2009

Value, price, fMRI and consumer surplus

Mark Thoma posts an interesting article from EurekAlert about some Caltech research. The researchers set out to solve the free rider problem by measuring people's real valuation of public goods.

The classic problem with public goods is that if you ask people what the service is worth, they have an incentive to lowball their answer. I may claim that a new railway line, or the NHS, is worth only £10 a year to me. That way, I am likely to pay lower fees or taxes for it, and since I think that lots of other people will put a higher value on it, the government will build the railway and keep funding the NHS regardless of my feelings.

It's the same dynamic as in the tragedy of the commons; no matter what I do, the behaviour of all the other people will determine whether the good is provided. My action won't make any real difference to the outcome, so I may as well act selfishly.

The consequence, however, is that if everyone claims to put a low value on the outcome, the government may ultimately decide not to build the new railway line. Society ends up with less infrastructure than it really wants.

So the Caltech experiment sets out to measure the real value that people put on public goods, in order to work out how much of them should really be provided. In a brilliant example of glossing-over-the-details, they "simply" put people in an fMRI machine and measure their true valuation of public goods.

I do have two quibbles* with the article, but I want to explore a different point so they are in a footnote.

In any case, it apparently worked. The researchers were able to find the real value that people place on public services, and this allows the public to make a fair and economically efficient decision about whether to build a highway and how to make people pay for it.

So far so good. Econproph, a commenter on Mark's post, raises an intriguing question:
In private corporate hands, the same technology could be used by a monopolist or differentiated monopolistic competitor to achieve perfect price discrimination. All our surplus now belongs to the corps.
Scary, perhaps? But let's explore it a bit more.

We need to look at how the relationship between customer and supplier is going to evolve in the future. In the past, they have held each other at arm's length. The supplier created a product, placed it on a shelf, priced it, and waited to see if the customer would buy. If they didn't, the price was cut until they did. If it went too low, the supplier would stop making that product.

Nowadays, most commercial forces are bringing the relationship closer. Information technology allows (and therefore requires) more personalisation of goods and services; competition drives more specialisation and smaller niches.

In this context, if the customer keeps their motivations and valuations secret, they may not get the best product available.

Imagine there are two providers of delicious Russian vodka - brand leaders Ripoff and Stealichnaya - both of which can produce it at a cost of £10 and currently sell it at £15 (the £5 difference is their return on invested capital, and can be maintained due to the fixed costs incurred by any competitor entering the market with no sales volume on day 1).

Now imagine that a typical consumer really derives £25 of value from a bottle of vodka. This means that they gain a consumer surplus of £10 (the value they place on it, minus the price they pay).

If the individual maintains the pretence of only getting £15 of value, then the story stops here. They secretly get £10 of surplus - maybe a little more in the future if a new type of potato reduces production costs - but that's it.

However, if Stealichnaya finds out the consumer's real valuation they have two choices. They could try to put prices directly up to £25. If we have a competitive market, that won't happen - but admittedly that's a big assumption. Still, let's assume it is true for now. Let's also assume that Ripoff has not got the fMRI data and competitively keeps its price at £15.

The natural course of action for Stealichnaya is to differentiate its vodka by adding enough extra value to justify a price of £25. Perhaps they can offer free shot glasses, or send an attractive Russian blonde (of whichever is my preferred gender - let's assume they can figure that out from the fMRI too) to my house to deliver the bottle.

In order to be willing to pay the full £25 rather than buy a £15 bottle of Ripoff, I must gain at least another £10 of consumer surplus from the enhancements. Let's say the enhancements are worth £12 to me; so I am willing to pay £25 for £37 of value, gaining £12 of consumer surplus.

To make this worthwhile for Stealichnaya, they have to be able to offer the enhancements for a cost of less than £10. It's quite likely that they can, especially with their overheads and marketing costs already absorbed in the cost of the original bottle. And if I allow them to understand my desires and personal situation, they have the perfect opportunity to design something of high value to me.

So by revealing my genuine valuation to a supplier, I offer them - provided there's a competitive market - the ability to give me extra stuff which is worth even more to me than the higher price that they now want to charge. The economy as a whole generates extra economic profit, so even the taxman is happy.
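
For anyone who wants to check my sums, here they are laid out - every figure is taken from the hypothetical example above:

```python
# Sanity check of the vodka example; all numbers come from the worked
# example above, and the example itself is entirely hypothetical.

COST, PRICE, VALUE = 10, 15, 25           # plain bottle: cost, price, valuation
ENHANCEMENT_VALUE, ENHANCEMENT_COST = 12, 8

# Status quo: the consumer hides their valuation and buys at £15.
surplus_before = VALUE - PRICE            # £10 of consumer surplus
profit_before = PRICE - COST              # £5 of supplier profit

# Stealichnaya learns the valuation and sells an enhanced bottle at £25.
PRICE_AFTER = 25
surplus_after = (VALUE + ENHANCEMENT_VALUE) - PRICE_AFTER  # £25 for £37 of value
profit_after = PRICE_AFTER - (COST + ENHANCEMENT_COST)     # £25 price, £18 cost

print(f"Consumer surplus: £{surplus_before} -> £{surplus_after}")
print(f"Supplier profit:  £{profit_before} -> £{profit_after}")
```

Both sides end up better off - which is the whole point: revealing the valuation expands the pie rather than just redistributing it.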

Key caveats:
  1. As Econproph says, this does not apply in monopolistic or monopolistically competitive markets. All the more reason to keep markets competitive.
  2. I have glibly assumed that Stealichnaya can easily come up with £12 of enhancements for a cost of £8. This is not necessarily true; but if it isn't, then we are simply left with the old situation where they sell the bottle on its own for £15. I do believe, however, that there is great scope for suppliers to add value in this way.
This is definitely not the end of the story on this subject, and the case for revealing true value is in fact much stronger when a service is custom-designed for the individual consumer. More later when I have time to write up a model for that scenario.

* First, the following quote:
...for decades it's been assumed that there is no way to give people an incentive to be honest about the value they place on public goods while maintaining the fairness of the arrangement.
Rather an overstatement.

Second, how on earth do they measure people's real valuations with an fMRI machine? I can't really see how this is even possible with current technology. Most likely, they used a clever game-theoretic design to make people think the fMRI machine worked, and thus give the subjects an incentive to be honest. Remember that scene from The Wire?

Sunday, 13 September 2009

The economics zeitgeist, 13 September 2009

This week's word cloud from the economics blogs. I generate a new cloud every Sunday, so please subscribe using the RSS or email box on the right and you'll get a message every week with the new cloud.

I summarise around four hundred blogs through their RSS feeds. Thanks in particular to the Palgrave Econolog who have an excellent database of economics blogs; I have also added a number of blogs that are not on their list. Contact me if you'd like to make sure yours is included too.

I use Wordle to generate the image, the ROME RSS reader to download the RSS feeds, and Java software from Inon to process the data.

You can also see the Java version in the Wordle gallery.

If anyone would like a copy of the underlying data used to generate these clouds, or if you would like to see a version with consistent colour and typeface to make week-to-week comparison easier, please get in touch.

Saturday, 12 September 2009

Bankers' pay: agency and supply

I intended to mention this little tussle between Felix Salmon and John Carney a couple of weeks ago. As it happens, it's provoked an idea on a solution to the eternal problem of bankers' pay.

Carney points out that we don't actually want traders to take the minimum possible risk in all circumstances. If they did, they would never make any returns at all. Instead, we want them to take the right level of risk. At the scale of the whole economy, this is the socially optimal level; at the scale of an individual company, it's the level of risk that is optimal for shareholder value.

He says that without guaranteed bonuses, traders will take less risk than shareholders want them to, because they will need to retain some amount of guaranteed upside to pay their mortgages.

Felix's argument against this is interesting, because he doesn't quibble with the theory. As various people have pointed out, because shareholders have limited liability, they have an incentive to get their companies to run up large debts and gamble with them. Presumably they want their traders in turn to be incentivised to do this.

But Felix simply says that, in reality, this doesn't happen.
The fact is that guaranteed bonuses are a tool used by smaller, weaker banks who are desperately trying to beef up their trading desks to compete more effectively with the larger trading powerhouses. You don't hear much about Goldman Sachs or Citadel paying their traders guaranteed bonuses.
The interesting thing here is that it reverses the normal dynamics of employment...usually, a company is taking a risk by employing a new person whose skills are unproven. In this case, the employee is taking the risk by joining a bank whose ability to win business is unproven. Thus, they put the bank on a probationary period by demanding a guarantee.

Felix argues that the theory doesn't hold up in another way:
Riskier banks always trade on lower p/e multiples than boring banks which take very little risk. Invariably, when banks take on lots of risk, their employees get most of the upside while their shareholders wind up with the first loss.
It's slightly problematic to use p/e ratios here; riskier banks will tend to have a higher return on equity, so their earnings - the denominator of the p/e ratio - are higher, which pulls the ratio down even at an unchanged share price. So this argument is not especially convincing.

But the second sentence does ring true: if there's a shortage of skills or employees in a sector, it's very plausible that employees will capture a high share of gross margins, reducing shareholder returns. This of course is one of the major features of trade unions and guilds - creating barriers to entry disproportionately increases wages for existing employees. While there are few trade unions active in the City, there are high barriers to entry - erected mainly by the existing employees of banks, in a classic principal-agent conflict - and this maintains the high returns to existing employees. Those high returns enable early exit for successful executives, which in turn reduces supply further.

Perhaps the best way to control the escalation of bankers' pay is not by regulation, but by a simple boost to supply. Bank shareholders should insist that their companies recruit 20% more new people each year. Perhaps the British and American Treasuries would even volunteer to subsidise this reduction in the graduate unemployment rolls, and at the same time increase the value of their accidental equity stakes in the major banks.

Thursday, 10 September 2009

Trust in Markets

Sam Robbins and I attended a fascinating workshop yesterday at the OFT, titled Trust In Markets.

It covered three areas: the nature of trust and its importance in economic exchange; trust and the law; and how trust is manifested in some specific industry sectors (online marketplaces and finance).

I'm especially interested in trust as an economic concept. Clearly trust is an absolute prerequisite for many kinds of economic systems to even function. Any system in which transactions are not instantaneous; anything where the quality and nature of the product is not fully known in advance of purchase; and any financial system where credit is offered - will operate smoothly only if parties broadly trust each other. The crowning theory of microeconomics, the Arrow-Debreu theorem, can only work where futures markets are available - and futures markets can only work if parties know that their contracts will be honoured over time.

Sometimes trust can be replaced, in the short term at least, by physical enforcement of contracts. But the transaction costs involved in enforcement are so large as to act as a serious drag on market efficiency, and few if any markets in the real world can rely solely, or even substantially, on enforcement.

So trust is fundamental to the operation of free markets. But how much do people really trust each other, and what does this mean in an economic model? If I enter into a transaction with you, I want to trust that (1) you'll act with honesty and good faith; (2) you will actually be able to deliver what you say you will; and (3) we both clearly understand what we're actually agreeing to do. For a perfect market to work, all three of these kinds of trust should be perfectly fulfilled.

In the real world, though, none of them are completely true. So how far are the buyer and seller from this ideal in any given transaction? How close do they need to be for the transaction to work?

And in a series of transactions between similar parties, how much cost can be absorbed in the early iterations in order to gain trust to make the future ones more profitable?

If we don't have full trust in each other - perhaps because we don't know each other - what institutions, frameworks or third-party services can supplement or replace this trust?

The idea of building a model of trust has been investigated broadly in computer science, so Kieron O'Hara was on the panel with a nice starting point for what trust might mean. But in economics, the concept hasn't been widely modelled.

For example, a recent article which touches upon the concept is Competition builds trust by Francois, Fujiwara and van Ypersele at VoxEU - but this is an econometric study which takes trust as a simple survey measurement and does not examine it cognitively. Similarly, Greg Mankiw asked a few weeks ago what kind of institutions we trust but doesn't explore what it really means to trust something.

So the above questions point the way to what model of trust might be incorporated into a larger cognitive-behavioural theory of economics. Any model that we propose should give us a way to answer, or at least examine, most of these questions.

The conference stimulated a lot of other ideas so I'll be posting more about this in the coming weeks - including a first version of my economic model of trust.

Sunday, 6 September 2009

The economics zeitgeist, 6 September 2009

This week's word cloud from the economics blogs. I generate a new cloud every Sunday, so please subscribe using the RSS or email box on the right and you'll get a message every week with the new cloud.

I summarise around four hundred blogs through their RSS feeds. Thanks in particular to the Palgrave Econolog who have an excellent database of economics blogs; I have also added a number of blogs that are not on their list. Contact me if you'd like to make sure yours is included too.

I use Wordle to generate the image, the ROME RSS reader to download the RSS feeds, and Java software from Inon to process the data.

You can also see the Java version in the Wordle gallery.

If anyone would like a copy of the underlying data used to generate these clouds, or if you would like to see a version with consistent colour and typeface to make week-to-week comparison easier, please get in touch.

Tuesday, 1 September 2009

Behavioural links and comments 2009-09-01

The Geary behavioural blog explores some research from Garth Brooks into time discounting and uncertain preferences. Who knew he had a second career in economics? But he evidently does: Garth has proved his credentials as a behavioural economist by not writing down an actual model for his theories; instead, he just tells an anecdote and we're meant to make our own inferences. He'd fit in just fine in J.Econ.Psych.

Multitaskers are bad at...multitasking, according to the BBC. In my own model of the mind, this is one of the key factors that accounts for much of the behaviour we see in experiments. In complex situations, to rationally optimise for the ideal outcome requires us to near-simultaneously adjust and monitor several different variables. In reality cognitive limits prevent us from doing this, so we either miss opportunities to optimise, or we use heuristics which combine multiple variables into one (and that can only be an approximation).

In this context, heuristics may include the idea of monitoring an interest rate as a substitute for understanding the whole breadth of monetary variables in an economy; making simplifying assumptions about people to avoid dealing with their full range of personal attributes; using the price of a product as a proxy for the more complex variable of quality; or counting up to 5 portions of fruit and veg so you don't need to work out your detailed nutritional input for every meal.

The intriguing point here is that those who choose to, or attempt to, multitask more turn out to be worse at it than those who don't. Perhaps then, multitasking is a compensating mechanism for the inability to design good single-variable heuristics.

Keiichiro Kobayashi proposes a new macroeconomics which includes financial intermediaries as an entity in the model. The traditional approach treats the savings, investment and consumption of consumers as the fundamentals, and assumes the finance markets are a completely transparent mechanism which simply transmits price signals and resources around the world. I absolutely agree with this impetus, though I'd ideally like to expand the approach to provide a different treatment for different kinds of firms: those servicing consumers directly, those servicing other businesses, and a further classification based on the types of knowledge they add or the cognitive approach they take.

The finance markets, after all, are a way to overcome cognitive limits of the kind I outlined a few paragraphs ago. Because I don't have the capacity to monitor six billion people's demand for money, steel or cars, or their supply of oil, labour or savings, I rely on a network of intermediaries to do it for me. In a world of rational agents with no limits on their ability to process information, most financial firms would not exist. But our world is sufficiently far from that ideal that the sector can employ one in thirty people just to mediate between us.

As for Benjamin Friedman's related question of "is it worth it?" you simply have to ask: if I didn't have this sector out there, equalising prices and moving capital around, would I be more than three percent worse off? Personally I have no doubt that I would.
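
The break-even arithmetic, made explicit (the one-in-thirty share is the only number carried over from above; the calculation assumes finance workers would otherwise be about as productive as everyone else):

```python
# If the finance sector absorbs 1 worker in 30, the rest of us break even
# so long as it raises everyone else's productivity by more than that
# share implies. The 1/30 figure is the one quoted in the text.

finance_share = 1 / 30

# With finance: 29/30 of workers produce output at the boosted level.
# Without: 30/30 of workers produce at the unboosted level.
# Break-even boost g satisfies (29/30) * (1 + g) = 1.
breakeven_gain = finance_share / (1 - finance_share)

print(f"Break-even productivity gain: {breakeven_gain:.1%}")
```

It comes out at just under 3.5% - close enough to the "three percent" in the question above, and a bar that, as I say, I have no doubt the sector clears.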

Expanding the question to include business services in general makes it more interesting. Around 30% of the British economy is financial and business services which are not directly consumed by individuals. Does the existence of advertising, consultancy, lawyers, accountants, software professionals, recruitment agencies, graphic designers and insurance companies as well as the banking and finance sector make us all collectively 50% richer than we'd otherwise be? My instinct says yes: but I would love to be able to prove it. I hope that a mixed macro and microeconomic model will be able to answer this, and many other, questions.