Most of my research starts from the presumption that individuals are rational. By this, I mean that they know the rules of the game, and they also know that the other players are rational. Simple empirical observation indicates the inherent contestability of this presumption. So why do I continue to adopt it? Well, I have written about this point in a number of posts before, but the short answer is that the presumption of rational behavior establishes the robustness of the phenomena that the model aims to "predict/explain/mimic." That is, presuming rationality makes my life harder as a theorist. If I did not have rationality as a stake for the tent I'm putting up, it would be pretty darn easy to place that tent wherever I want: as a theorist, I can explain anything if you let me bring irrational behavior to the table. Setting that to the side, however, there's another point I've been thinking about lately that I want to drone on about a bit. If we step away from presuming rationality, we need to put something in its place. There is a huge and growing literature on deviations from rationality in the social sciences. In its quickest form, my challenge to those who would have models incorporate these findings is as follows:
What are the “parameters” of irrationality?
A related version of this is “which varieties of irrationality should I put in my model?” There are various forms of irrational choice: those that deal with
- intertemporal choice (e.g., overconfidence in your own patience in the future),
- evaluating risk (e.g., is it a gamble over winnings or losses?),
- updating on information (e.g., reacting differently to favorable vs. unfavorable information).
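To make the parameterization question concrete, each of these bias families comes with at least one free parameter in its textbook formalization: present bias (the β in quasi-hyperbolic β-δ discounting), loss aversion (the λ in a prospect-theoretic value function), and asymmetric updating (different weights on good vs. bad news). Here is a minimal sketch using those standard functional forms; the parameter values are purely illustrative defaults, not estimates of anything:

```python
# Illustrative parameterizations of three common behavioral biases.
# Functional forms are textbook-standard; default parameter values are made up.

def quasi_hyperbolic_discount(t, beta=0.7, delta=0.95):
    """Quasi-hyperbolic (beta-delta) discounting: present bias when beta < 1.
    beta = 1 recovers classical exponential discounting."""
    return 1.0 if t == 0 else beta * delta**t

def prospect_value(x, alpha=0.88, lam=2.25):
    """Prospect-theoretic value function: outcomes are evaluated relative to a
    reference point (here, 0); lam > 1 means losses loom larger than gains."""
    return x**alpha if x >= 0 else -lam * (-x)**alpha

def asymmetric_update(prior, likelihood_ratio, w_good=1.0, w_bad=0.6):
    """Biased Bayesian updating in odds form: unfavorable evidence
    (likelihood_ratio < 1) is underweighted when w_bad < w_good.
    w_good = w_bad = 1 is exactly Bayes' rule."""
    w = w_good if likelihood_ratio >= 1 else w_bad
    odds = prior / (1 - prior) * likelihood_ratio**w
    return odds / (1 + odds)
```

Notice that each function nests the classical benchmark as a special case (β = 1; λ = 1 with α = 1; w_good = w_bad = 1), which is one way of turning "which bias should I include?" into an empirical question about parameter values rather than a binary modeling choice.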
So, this leads to two questions. First, which of these is most important to include? Common sense (and the quest for causal identification, if that's your cuppa tea) suggests that throwing them all in will yield something unmanageable.
Second, and more precisely, even if I choose one to include (for example, to see how the availability heuristic affects how voters evaluate incumbents and challengers, and how this affects campaign decisions), how do I parameterize/represent this bias?
Put another way, most (formal and informal) models of behavioral biases rely on at least an implicit notion of perception: in a nutshell, most biases in choice can be described as some "label" or "feature" of the decision problem, one that the decision-maker should not actually take into account, nonetheless bearing on the decision-maker's ultimate choice(s).
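That notion can be made explicit. One sketch, under the assumption that features enter utility additively (the names, the γ weight, and the framing example below are hypothetical, chosen purely for illustration): a classical agent maximizes u(x), while a biased agent maximizes u(x) + γ·f(x), where f extracts a payoff-irrelevant feature of the option.

```python
# Hypothetical sketch: a payoff-irrelevant "feature" bearing on choice.
# feature_weight = 0 recovers the classical choice model exactly.

def choose(options, u, feature_weight=0.0, feature_value=lambda opt: 0.0):
    """Pick the option maximizing u(option) + feature_weight * feature_value(option)."""
    return max(options, key=lambda opt: u(opt) + feature_weight * feature_value(opt))

# Two options; the materially better one happens to be "framed as a loss."
options = [{"payoff": 10, "framed_as_loss": True},
           {"payoff": 9, "framed_as_loss": False}]
u = lambda opt: opt["payoff"]

classical = choose(options, u)  # ignores the frame: picks the 10-payoff option
biased = choose(options, u,
                feature_weight=-2.0,
                feature_value=lambda o: 1.0 if o["framed_as_loss"] else 0.0)
# the frame penalty flips the choice: picks the 9-payoff option
```

The modeling question in the text is then: what is the space of admissible feature functions f, and what pins down γ across settings?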
The question, and conclusion of this post (in search of feedback), is: how do we model the world of "features"? That is, how does one make an even plausibly "general" theory of a phenomenon that includes behavioral biases? After all, most models are designed to be generally applicable, where "generally" means across time, or space, or something. Traditional models of choice/behavior presume that people have a "general" feature known as a preference ordering (or, essentially equivalently, a utility/payoff function). But the (both formal and informal) "behavioral" models have been built largely on noting a series of deviations from the predictions of the classical choice model in a specific (often laboratory) setting. This isn't an objection that the findings aren't externally valid; it's an honest question as to how one extends the findings to general (i.e., external) settings.
So, what ingredients should I throw into the pot if I want to cook up a model of the mind to deploy in my models of political behavior and institutions?
With that, I call upon a favorite, and leave you with this.
 There’s even more in here, as I wrote about here. I will set the issue of common knowledge of rationality aside for the purpose of this post, as it merely amplifies the point I am trying to make.
 I won’t bother to cite any of it, because it would be merely gratuitous and possibly lead to erroneous inferences about what I intend to be inferred by the inevitable omissions from such a list. That concern, if you think about it, is meta. (What? Oh, okay… start here.)