On Rationality, part I


Is rationality really optimal?

Economists love 'rationality': it makes for simpler models. Wikipedia offers the following definition of perfect rationality (the sort usually attributed to agents in economic models):

In economics and game theory, the participants are sometimes considered to have perfect rationality: that is, they always act in a rational way, and are capable of arbitrarily complex deductions towards that end. That is to say, they will always be capable of thinking through all possible outcomes and choosing the best possible thing to do.


Since rationality means agents always choose the best possible thing to do, economists are not unreasonable in assuming it should be prevalent in any population that has been subject to a sufficiently long evolutionary process. Take a perfectly competitive market: the 'rational' companies survive, while those that make production decisions according to the position of the stars eventually vanish.

This, however, is not always the case; in a large number of situations, irrationality will lead to a superior outcome. In those cases, the 'evolutionary' argument for rationality falls apart.

Think of 'The Battle of the Sexes'. Dave and Pepy are dating. Dave likes bowling, Pepy likes ballet. Dave strongly prefers going bowling with Pepy to watching ballet with her, while Pepy's preferences go the other way round. Neither will enjoy going out at all if they are on their own.

If both Dave and Pepy are rational, they will coordinate on going to the same place as often as possible: some of their nights will involve bowling, others ballet. But now assume that Pepy is irrational and always chooses the ballet regardless of what Dave does. Irrational Pepy achieves a superior outcome: she spends every night with Dave without ever having to set foot in the bowling alley.
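
To see why the commitment pays off, here is a minimal sketch of the game in Python. The payoff numbers are my own illustrative assumption, not part of the story; only their ordering matters. Once Pepy is known to choose the ballet no matter what, Dave's best response is to join her.

    # Battle of the Sexes: illustrative payoffs, assumed for this sketch only.
    # Each entry is (Dave's payoff, Pepy's payoff); only the ordering matters.
    payoffs = {
        ("bowling", "bowling"): (2, 1),  # together at the bowling alley: Dave's favourite
        ("ballet", "ballet"): (1, 2),    # together at the ballet: Pepy's favourite
        ("bowling", "ballet"): (0, 0),   # apart: neither enjoys the night
        ("ballet", "bowling"): (0, 0),
    }

    def dave_best_response(pepy_choice):
        """Dave's payoff-maximising choice, given what Pepy is known to do."""
        return max(["bowling", "ballet"], key=lambda d: payoffs[(d, pepy_choice)][0])

    # If Pepy credibly commits to the ballet no matter what...
    dave_choice = dave_best_response("ballet")
    print(dave_choice)                       # -> ballet
    print(payoffs[(dave_choice, "ballet")])  # -> (1, 2): Pepy gets her preferred evening

Against a rational, flexible Pepy, by contrast, some of those evenings would be spent at the bowling alley, so the inflexible Pepy does strictly better.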

Of course, Pepy does not really need to be irrational: a mere credible commitment to acting irrationally would do. Life, however, is a continuous stream of game-theoretic situations like the one described above, over which a reputation is established. The surest way to commit to acting irrationally is to actually be irrational, at least to some extent.

Think of all the times you didn't punish your four-month-old baby for waking you up in the middle of the night. By virtue of being a baby, and so clearly irrational, it is allowed to go on demanding attention whenever it feels like it, without punishment.

If you are a dictator with potential access to weapons of mass destruction, it helps to have a reputation for being mad. It is difficult to commit to carrying out a nuclear strike if your country is invaded: if your goal is to protect yourself and the national interest, such a strategy would be counterproductive. But what if you are mad? Your commitment becomes credible, and your adversaries would never dare invade in the first place.

While the above examples may appear quite particular, situations like these arise constantly. In a world where the imperfectly rational individual is rewarded across the many games that constitute life, it would be wrong to expect evolution to eventually produce the perfectly rational agents that inhabit economic models.



by datacharmer | Monday, May 21, 2007

1 comment:

  1. Anonymous Says:

    "Rational" expectations are an example of a very well-named theory. If the name sticks (as it has done) then all other forms of expectation formation are necessarily "irrational" to make them distinct, even if it might be fully rational (small "r") to form expectations by e.g. adaptive or backwards looking processes.

    A big problem with rationality comes with the slightly stronger form wherein, for many macro models, expectations are formed in a way that is fully consistent with the model being analysed (as well as the normal assumptions of being forward looking and using all available current information). This seems a particularly implausible assumption when economics itself is in a state of continual flux regarding what the right models actually are, yet somehow at all times the rational agents in those (often Robinson Crusoe) models always form expectations completely consistently with the current model (whatever it may be).

    Similarly the idea of building in all current knowledge into the information set is a very strong (but often used) assumption given the costs of collecting and processing information.

    It's a pity there is often so much bundling with the concept of rationality. It is helpful to look at models with forward-looking expectations, but the frequently bundled features of full consistency with the model itself and of using all currently available information are unappealing.