Let Me Confirm Your Belief That Your Irrationality Is Rational

This opinion piece in the New York Times, entitled “Why We Make Bad Decisions,” by Noreena Hertz, explores the implications of a well-established psychological/behavioral phenomenon known as confirmation bias. In a nutshell, confirmation bias describes the general tendency to overweight information in line with one’s prior beliefs and/or to give too little weight to information contradicting those beliefs or attitudes.

This phenomenon is clearly relevant to politics in a wide array of settings. Voters may ignore “negative” information about their own favored party or give too much credence to “negative” information about other parties. Individuals may selectively pay more attention to positive information about the policies they favor or ignore information that reflects poorly upon those policies.

Well, as is my usual approach, I wanted to briefly point out that observing such a bias is not necessarily evidence of irrationality. I have two explanations for this behavior. The first demonstrates why positive and negative information should be evaluated differently in certain (common) contexts. The second demonstrates why individuals should stop exerting effort on updating their beliefs (i.e., paying attention to information) in certain (again, common) choice situations.

Both explanations rely on a simple presumption about beliefs: I will presume, as is typical, that an individual’s beliefs are important only insofar as they affect the individual’s behavior. This is an important presumption, and it is definitely contestable, albeit not within standard social science models. I will touch upon it again in the conclusion of the post.

Before continuing, note that I am not arguing that Hertz is “wrong.” To the degree that one is confronted with “pure and costless information” in a single-decision-maker situation, there is absolutely no reason to do anything other than faithfully revise one’s beliefs as far as possible according to Bayes’s Rule. This is a mathematical fact. That said, situations in which information is pure and costless and one’s decisions and incentives are in no way contaminated by strategic considerations are pretty few and far between.
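For reference, and in my own notation rather than anything from Hertz’s piece: upon observing a piece of evidence E, Bayes’s Rule says your belief in a hypothesis H should become

\[
\Pr(H \mid E) \;=\; \frac{\Pr(E \mid H)\,\Pr(H)}{\Pr(E \mid H)\,\Pr(H) + \Pr(E \mid \neg H)\,\Pr(\neg H)}.
\]

One way to state confirmation bias in these terms is as treating the likelihoods \(\Pr(E \mid H)\) and \(\Pr(E \mid \neg H)\) differently depending on whether E is congenial to what you already believe.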

With that caveat registered, let’s move on to the explanations.

The first explanation is demonstrated by the following example. Suppose you have decided to get your yard reseeded: the bare spots have gotten unbearable, and something must be done. Now suppose that you head down to the local nursery to get some seed and, while jawing with the salesperson, he or she says, “you know, buying sod is a much faster way to get a beautiful lawn.” Should you believe this statement? Yes.

Should you change your beliefs about the relative effectiveness of seed and sod? Well, that’s not clear. In particular, you need to consider the source of the information and his or her motivations. Sod is much more expensive than seed, and the salesperson is presumably motivated to increase the amount of money you spend. Accordingly, you should give less weight to the information, particularly in terms of whether you should put the seed back and buy sod instead. Furthermore, once you realize that you should probably not act upon the information by changing your choice, it is not even clear that you should process/pay attention to anything that the salesperson says about the relative value of sod over seed.[1]
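To make the discounting concrete, here is a toy Bayesian calculation. The numbers are mine, purely for illustration: the same message (“sod is better”) moves your belief substantially when it comes from a source who mostly sends it only when it is true, and barely at all when it comes from a source who sends it almost regardless of the truth.

```python
# Toy illustration (my own numbers, not anything from the post): the same message
# is nearly uninformative when the sender would send it no matter what.

def posterior(prior, p_msg_if_true, p_msg_if_false):
    """Bayes's Rule: belief that 'sod is better for you,' given the message was sent."""
    num = p_msg_if_true * prior
    return num / (num + p_msg_if_false * (1 - prior))

prior = 0.5  # before the conversation, a coin flip whether sod really suits you better

# A disinterested neighbor who mostly recommends sod only when it is the right call:
print(posterior(prior, p_msg_if_true=0.8, p_msg_if_false=0.2))    # ~0.80

# The salesperson, paid more when you buy sod, recommends it almost regardless:
print(posterior(prior, p_msg_if_true=0.95, p_msg_if_false=0.90))  # ~0.51
```

In the limit where the salesperson always recommends sod, the message carries no information at all, and ignoring it is exactly the right response.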

The second explanation is based on the following example. Suppose that you have spent many months studying (say) two different houses, and you will buy exactly one of them. Eventually, you have enough evidence to conclude with near-certainty that house A is the better one to buy. At some point, your belief about the relative values of the two houses (if you are a good Bayesian decision theorist) will be sufficiently certain that you would not pay any nontrivial cost for additional information. Incorporating and processing information is costly insofar as it requires mental effort. Even if it doesn’t, belief revision is important only to the degree that it will affect your decision. But rarely is decision revision costless. That is, most decisions about (say) personal health and finances are ongoing, and changing one’s habits and diet or rearranging one’s asset allocation and consumption all require effort. Put these two together, and it is clear that in some cases, ignoring information that is contrary to one’s beliefs may actually be rational.[2]
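A small back-of-the-envelope calculation (again, my own toy numbers) shows how quickly the value of further information collapses once you are nearly certain: if even an unfavorable signal cannot push your belief below the point at which you would switch houses, then attending to it has no decision value, and any positive processing cost tips the balance toward ignoring it.

```python
# Toy value-of-information sketch (my stylized numbers, not the author's):
# with a near-certain prior, even a contrary signal leaves the decision unchanged,
# so the signal is worth nothing as a guide to action.

def posterior_A(prior_A, signal_says_A, accuracy):
    """Bayes update on a binary signal that names the better house
    and is correct with probability `accuracy`."""
    like_if_A_better = accuracy if signal_says_A else 1 - accuracy
    like_if_B_better = 1 - accuracy if signal_says_A else accuracy
    num = like_if_A_better * prior_A
    return num / (num + like_if_B_better * (1 - prior_A))

prior_A = 0.98   # after months of study: near-certain that house A is the better buy
accuracy = 0.80  # a reasonably informative new signal

belief_after_bad_news = posterior_A(prior_A, signal_says_A=False, accuracy=accuracy)
print(round(belief_after_bad_news, 3))   # ~0.925: still well above 0.5
print(belief_after_bad_news > 0.5)       # True: you buy house A either way

# The decision is the same in every contingency, so the expected decision value of
# the signal is zero; any mental-effort cost c > 0 makes tuning it out optimal.
```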

Finally, before concluding, I want to quickly mention that beliefs can be directly valuable in a “psychological” sense. To be quick about it, suppose that you enjoy believing that your future will be bright. Say you took an exam yesterday and will find out the results in two weeks. You enjoy thinking that you did well, and you dislike disappointment. In such cases, it is often optimal for you to have beliefs of the following form (a toy numeric sketch of this logic follows the list):

1. Walk out of the exam thinking you did INCREDIBLY WELL.
2. Keep thinking this.  No reason to find out anything about it, even if you can, until the end.
3. In the moments before you find out, recalibrate those beliefs downward.[3]
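Here is a deliberately stylized sketch of that three-step plan. The model and the numbers are mine (and emphatically not the model in the paper mentioned in footnote [3]): optimism pays off every day you spend waiting, while disappointment is only charged against whatever you believe at the moment the news arrives.

```python
# Stylized sketch (my own toy model): enjoy optimism while you wait,
# then recalibrate just before the news.

A = 1.0       # per-day enjoyment of optimism, per unit of believed success
D = 3.0       # pain of disappointment, per unit by which the news falls short of belief
T = 14        # days until the exam results arrive
p_true = 0.6  # the "objective" chance you actually did well

def payoff(belief_while_waiting, belief_at_resolution):
    anticipation = A * belief_while_waiting * T
    # Disappointment only bites if the news is bad (probability 1 - p_true),
    # and scales with how high your belief is at that moment.
    expected_disappointment = D * (1 - p_true) * belief_at_resolution
    return anticipation - expected_disappointment

print(payoff(0.99, 0.99))     # optimistic throughout: ~12.67, big letdown exposure
print(payoff(0.99, p_true))   # optimistic, then recalibrate: ~13.14, the plan above
print(payoff(p_true, p_true)) # soberly calibrated all along: ~7.68, a dreary wait
```

In this toy specification the fully optimal plan would push the final belief even lower, of course; the point is only that the “optimistic, then recalibrate” path beats honest calibration throughout the wait.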

The same logic applies in situations in which the decision is a fait accompli for other reasons.  That is, suppose you have to buy a given house.  If you enjoy thinking that “the future is bright” even a little bit, then you have no immediate incentive to update upon/pay attention to information that disconfirms your “apparent prior beliefs” (i.e., the beliefs that would justify buying the house if you had a choice).

The link between this and the apparently pathological belief revision by (say) smokers/drinkers/drug users/UNC football fans and others “addicted” to arguably unhealthy lifestyles is clear: if you know—for whatever reason—that your decision is invariant to your beliefs, there is no reason to hold rational ones.  Indeed, there is probably a clear argument on “psychological” grounds that you should update in ways consistent with confirmation bias.

With that, I leave you with this and this.

__________________

[1] Of course, the opposite is true if the salesperson tells you, “oh, you are definitely doing the right thing not buying sod.” In this case, the information is more credible precisely because of the source’s motivations: it would rationally be given at least as much credence as if it were provided by a “neutral” source, and acting on it would therefore similarly look like confirmation bias. In politics, this is known as the “it takes a Nixon to go to China” logic.

[2] You could call this a “tune in, update, and drop out” kind of logic. And, though it is beyond the scope of this post, it is also a justification for apparent overconfidence in some strategic situations. In particular, committing to not updating on the viability of a joint venture can bolster members’ incentives to contribute individually to the team-production components of the venture. In other words, if I am less worried that you might be listening to evidence that could make you doubt the value of the venture, then in some situations I am also less worried that my own individual effort will be in vain because you happened to hear, on your own, that the venture might be less profitable than we all believed when we started out. There is a link to NFL quarterbacks and head coaches in here somewhere.

[3] Years ago, I wrote a paper with George Loewenstein and Niklas Karlsson on exactly this behavioral optimization problem. The basic idea is that irrational beliefs (and irrational belief revision) are even “more easily made optimal/rational” if one allows beliefs to matter on their own, which I rule out in the two explanations above but which is clearly descriptively realistic.