Game Theory is Punk

I’ve joked before with people that I liken social science models to rock songs.  My actual mapping is horribly incomplete.  So I’ll set that chatter to the side.

That said, the practice of modeling, in my experience, is a lot like rock ‘n roll.

You give me a topic, and I’ll think for a minute, make an awkward joke to stall, and then say, “well…I think we can throw in a bit of Romer-Rosenthal, maybe a touch of Crawford & Sobel, plus a flourish of valence, and Voilà! … We have a model.” (Participants at EITM 2013 can vouch for this…for better or worse.)

But….I’m serious. Modeling is a delicate balance of divine insight and practice.  And, given the relative and regrettable scarcity of divinity in practice, more practice than insight.

Modeling requires balancing (1) a substantive question, (2) generality, and (3) the finitude of time. It lies at the heart of both putatively purely-empirical and putatively purely-theoretical enterprises (the only class of social science theory that is “not-putatively-but-actually-purely-and-absolutely-theoretical-and-therefore-unambiguously-correct-and-applicable” is social choice theory). Methodologists, game theorists—they all rightly make assumptions to get to the point of their argument.

This is ROCK AND ROLL: YOU HAVE TO FIGHT FOR YOUR RIGHT TO MODEL.

If I said, “tell me how to make a yummy dish,” you’d ask “what’s yummy?” If I, being as obstinate and/or distracted as I usually am, did not answer—you’d have to make some assumptions about what I might like. If you assumed that I liked what everybody else liked, you’d probably hand me “Joy of Cooking.” On the other hand, if you assumed I asked because I’d looked in the Joy of Cooking and not found what I liked, you would appropriately presume that I wanted something other than “the normal,” and you’d then be seen by the outside world as playing punk. You’d probably (rightly) take off-the-shelf tools, utilize standard analogies, and leverage structure that threatens few to provide me with a new conclusion. That’s punk.

[During the perhaps overly-artsy bass solo, let me confess that not all punk is good. But all punk is, tautologically, punk.]

…Cue big build, drum crescendo, and….harmonic ending that sends crowd into rhapsodic frenzy…

OK, what I’m saying is a short thing: good (formal/stat/etc.) modeling is punk. It takes “old” tools and “expected” tricks and combines them to “make the house rock,” or “get the message across.” (Lucky are those situations when “the house rocks to the message.”)

Does the Pixies anthem “Gouge Away” address every possible situation?  Is it robust to every ephemeral, existential robustness barrage one might throw at it?

Hell no. That’s why the phrase “holy fingers” is so haunting. After all, “holy fingers” are rare unless you count Chicken Fingers ™.

So, when you want to say “well, your explanation for that is just an example,” I’ll just say “Get Over It.” … And then I’ll be gone, making more noise pop, playing a flying Fiddle to the Quotidian.

With that, I leave you with this.

Ironic, quick second takes on sequential rationality

I just finished writing this meandering post about sequential rationality. Subsequently, I thought of these better examples that come from different angles.

First, a classic example (mostly) of failures of sequential rationality: food challenges. In most cases, most people who think they can eat that amount of food can eat that amount of food. They just end up, part of the way through the challenge, not wanting to do so.

A so-so story about motivation that, upon reflection, suggests an interesting take on harnessing “nonsense” intrinsic motivations to overcome sequential rationality/commitment problems: Jerry Seinfeld’s “don’t break the chain” tip for writing. (But, how do I motivate myself to bother writing the “X” on the calendar? I’ll just remember, right?  HAHAHA.)  In other words, it’s usually more fun to not work than work (because, hey, somebody‘s got to bang on that drum ALL DAY), but it also sucks to think about breaking a streak.  Well, at least a streak that is endorsed by Jerry Seinfeld.

Oh, I Thought You Said You Wanted To Sell A Bus…

It’s a new year, the ground is covered with more-than-ankle-deep snow, and I told a friend tonight that it seems like, over the past couple of weeks, the notion of a “day of the week” has lost all meaning.

This is how winter break winds to a quiet close.

It is an opportunity to take stock, and think about what we do as faculty and researchers.  In a sense, it is the quintessential time for “well, time to get BACK AT IT.”  After all, the spring semester lurks right around a single left-click on the google calendar, and both students and journals are inexorably rising from a temporary slumber, bringing the holiday respite to a close.

The details of my own thinking are ephemeral even to me (though you never know when hindsight might prod one to think such classifications naive or at least short-sighted).  But the classic math of politics point that keeps coming back to me during this thinking is

Don’t start what you won’t want to finish.

Notice it’s not can’t finish…this is about incentives/motivations/desires, not ability. In the end, time is finite and, ironically, fleeting.  A project you’re not “into” has a very small chance of being finished.  In game theory, this idea is captured by what is referred to as sequential rationality. In a nutshell, a sequence of actions or decisions is sequentially rational if, at every opportunity one has to revisit and revise the sequence, continuing the sequence as originally prescribed is still in the decider’s interest.  In other words, it is a sequence of decisions that the decision-maker never has significant regret about continuing.
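To make the definition concrete, here is a minimal sketch in Python (all numbers hypothetical): a project whose early stages are fun and whose late stages are a grind, with a payoff for finishing. The plan can look great ex ante and still fail sequential rationality partway through, which is exactly the food-challenge pattern from the post above.

```python
# A minimal numerical sketch of sequential rationality (all numbers are
# hypothetical). A "project" has a per-stage net enjoyment (fun ideas early,
# grind late) and a payoff upon completion. The plan "finish the project" is
# sequentially rational only if, at every stage, the value remaining in the
# plan still beats quitting (normalized to zero), sunk costs ignored.

def remaining_value(enjoyment, prize, stage):
    """Value of sticking to the plan, evaluated just before `stage`."""
    return sum(enjoyment[stage:]) + prize

def plan_is_sequentially_rational(enjoyment, prize):
    """True only if continuing beats quitting at every stage of the plan."""
    return all(remaining_value(enjoyment, prize, k) >= 0
               for k in range(len(enjoyment)))

stages = [3, 1, -2, -2, -2]   # early stages are fun; late stages are a grind
prize = 3                     # the payoff from actually finishing

print(remaining_value(stages, prize, 0))             # 1: worth starting ex ante
print(plan_is_sequentially_rational(stages, prize))  # False: you'll want to quit
print([remaining_value(stages, prize, k) for k in range(len(stages))])
# [1, -2, -3, -1, 1] <- the plan unravels partway through
```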

As one example, I (re)designed two syllabi this week.  I am teaching Intro to American Government, a course that I have always wanted to try, and graduate game theory, a course that I have taught for years, but never perfectly.  In both courses, there are topics I like more than others.  In both courses, there are topics I really don’t find relevant…at least for the way I think about “the course.”  However, many of these topics are de rigueur.

So, should I cover these topics?  I can stomach the claim that I should cover them well. But, to be blunt, both sequential rationality and experience—the two of which pair quite nicely, if you think about it for a minute—suggest that that fortuitous combination is not likely.  Rather, my real choice is between covering them as well as I will or simply omitting them.  (I’ll set aside the possibility of minimizing them for the purpose of maximizing clarity.)

On the one hand, it is likely (at least in my mind) that the time spent covering these topics in a subpar fashion could be better spent on other topics I feel more strongly about.  So, covering these topics comes at a cost.  On the other hand, not covering the topics, in addition to potentially leaving gaps in the students’ knowledge,[1] sends a signal about my own tastes/abilities/dedication.  Setting aside the “gaps in knowledge” worry, as none of these topics couldn’t be covered by students on their own at least as well as I would cover them, given my lack of interest in them, the comparison is even more stark.

Specifically, if one presumes that my interest in/dedication to the topic—i.e., the sequential rationality of me “committing” to a syllabus where I will talk for 3 hours about the topic[2]—is positively associated with quality of the teaching, then an earnest student would, prior to the course beginning, wish that I would allocate my teaching of the various topics roughly in proportion to my interest in them.  But, I have an interest in being seen as “teaching well.”  Or, less admirably, I have a latent but abiding interest in avoiding scrutiny.  If I teach what I think most appropriate according to the logic of sequential rationality, and for some reason a student complains about the course, I would potentially have to explain why my course omitted some de rigueur topics.  My argument could, of course, be that I don’t find those topics as interesting, relevant, and/or important as others.  But, to be both clear and hyperbolic, this would be arguing against the discipline.

Now, luckily for me, it doesn’t really matter right now.  But, this is a classic and subtle point about accountability, discretion, and commitment.  My hypothetical earnest student is arguably also a plausible benchmark for the hypothetical earnest parent and/or university administrator.  But, once the “fire alarm oversight” of a disgruntled student (or, to be fair, many students) is “pulled,” the sequential rationality of even the hypothetical earnest parent and/or university administrator to stand up for “well, the syllabus was ex ante efficient given Professor Patty’s own sequential rationality constraints” is suspect.

In other words: the road to commitment problems is paved with unrealistic intentions.

With that, I leave you with this.

_____________________

[1] To be clear, in some cases, I might be preserving the students’ knowledge by not covering a topic.

[2] Arguably commensurate and definitely countervailing to the dropping of worries about the gaps in students’ knowledge, I am also setting aside the potentially deleterious effects of proffering a syllabus that is ultimately not followed. That’s the more traditional portrait of a failure of sequential rationality: the plan is ultimately “not followed.”  A piece of advice for those who have yet to teach your own course: varying from the syllabus is the surest way to lower your course’s evaluation without significantly risking your own dismissal. Trust me on this. I have plenty of evidence, all of which I regrettably “manufactured.”

The Ties That Bind Theory

I am a game theorist.  I love thinking about situations as strategic interactions.

As a game theorist, I make assumptions all the time. (And I assume you do, too, in whatever you do. THAT’S META.)

In political science, there are, broadly defined and in my estimation, four categories of theory.  Game theory (which includes mechanism design) is one, of course.  The oldest of the four is typically simply called “political theory” or, more precisely, normative political theory within the discipline, and—while it is a decidedly and wonderfully big tent—essentially can be described as work focused on theoretical concepts such as justice, fairness, & democracy, and how they should be measured.  (That “normative” political theory is about measurement is my take—the usual framing of the research is “this concept implies that one should do the following.” If you think about it for a second, the implication of accepting such a conclusion is “adherence to this concept implies that you will do the following,” by which you can measure (from behavior, for example) how much someone values, or at least adheres to, this concept.)

The third category is known as social choice theory. This category includes studies of what is possible to achieve. There are a lot of ways this inquiry can be carried out.  Examples include “can we always find a fair choice,” “is it possible to design a voting system that always rewards honesty,” or “can we combine people’s preferences to make a sensible public decision?”  This field has been declared dead by more than a few social scientists, ironically due to a handful of noteworthy (and Nobel prize-winning) achievements, including Arrow’s (im)possibility theorem.  That it is not, in fact, dead is demonstrated by much current research, including the forthcoming book authored by Maggie Penn and myself, Social Choice and Legitimacy: The Possibilities of Impossibility.

Finally, a fourth category has emerged over the past forty years or so (beginning with the works of Schelling, Axelrod, and others).  This area, computational modeling, develops and analyzes mathematical models of behavior and institutions to derive empirical predictions about political phenomena.

While open hostilities are luckily rare, the four categories do not “get along,” generally. This is both silly and a shame.  It’s a shame because the four areas have a lot of opportunities for profitable and thought-provoking collaboration.  It’s silly because they have a lot in common (independent of their extant potential synergies).  In particular, “all four are theory.”  That is, the best work in each of the four values and embodies clarity and, to the degree available, parsimony.

In the interest of parsimony I have drawn a picture of (at least some of) their connections. I have put game theory/mechanism design at the middle of the graph because, even if that’s not right, it instantiates the old adage that “the blogger draws the maps.”

This is a picture I drew…OF THE WORLD OF THE MIND.

Very quickly, why the distinctions between the four categories?  Well, I have been thinking about them—and I am comfortable describing myself as “actively” working in all four areas—and I thought it might be useful (if only for me) to work through a quick description of the lines in the picture.

Normative political theory is, by and large, completely concerned with internal consistency.[1] In this sense, it is linked with game theory and mechanism design and social choice theory: all three categories highly value what is known as a “fixed point,” which describes a solution that, when put forward as the solution to the problem as described by the theory, does not then lead to a different solution being produced.
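For a concrete (and entirely hypothetical) instance of the notion, here is a minimal Python sketch: in a 2x2 coordination game, a strategy profile that the best-response mapping sends back to itself is a pure-strategy Nash equilibrium, i.e., a fixed point in exactly the sense above.

```python
# A minimal sketch of a game-theoretic "fixed point" (payoffs hypothetical):
# a strategy profile that, when fed back in as "the solution," returns
# itself. A profile that is a fixed point of the (alternating) best-response
# mapping in a 2x2 coordination game is a pure-strategy Nash equilibrium.

# payoffs[player][a1][a2]: that player's payoff when player 1 plays a1
# and player 2 plays a2
payoffs = [
    [[2, 0], [0, 1]],   # player 1
    [[2, 0], [0, 1]],   # player 2
]

def best_response(player, other_action):
    """The player's payoff-maximizing action against the other's action."""
    if player == 0:
        return max((0, 1), key=lambda a: payoffs[0][a][other_action])
    return max((0, 1), key=lambda a: payoffs[1][other_action][a])

def find_fixed_point(profile, max_steps=20):
    """Iterate best responses until the profile reproduces itself."""
    for _ in range(max_steps):
        a1 = best_response(0, profile[1])   # player 1 revises first...
        a2 = best_response(1, a1)           # ...player 2 responds to the revision
        if (a1, a2) == profile:             # the "solution" returns itself
            return profile
        profile = (a1, a2)
    return None

print(find_fixed_point((0, 1)))   # (1, 1): a self-reproducing profile
print(find_fixed_point((0, 0)))   # (0, 0): the other pure-strategy equilibrium
```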

Social choice theory is concerned with generality and, relatedly, agnosticism.  (There are important exceptions to this, such as in this article by Maggie Penn, Sean Gailmard, and myself.) That is, much of social choice theory tackles the tough questions of what can be achieved/guaranteed with restrictions on “who may show up” or “what people might want/believe” that are as minimal as possible.  The link between social choice theory and normative political theory is hopefully clear: much normative political theory either in essence argues for what one should want or believe or works out how one should act given what one wants or believes, while most social choice theory finds what can happen when we can’t or don’t want to restrict what people want or believe.[2]

Normative theory and social choice theory are each less concerned with the details of the specific situation: they are more general than game theory.[3]

On the other end of the generality spectrum is (the vast majority of) computational models. Most computational models require specification of the parameters of a particular situation to generate (often very precise) predictions.  If one has this specification at hand, (the right) computational models offer a direct path to the desired outcome: a prediction and the ability to start tinkering.

Part of the power of computational modeling is its deliberate setting aside of what I call epistemological concerns.  Put another way, a good computational model is not required to be internally consistent with respect to individual or group motivations.  That is (and this is often properly trumpeted as a strength of computational modeling, but, in my opinion, for the wrong reasons), computational models do not necessarily yield what a game theorist would call “equilibrium outcomes.”  In other words, there is no reason to suspect that a computational model of a given political situation would remain accurate once the model was shown to one or more reflective agents involved in that situation.[4]

I’ll conclude now by restating my point bluntly: each of these is political theory. And, yes, game theory is at the middle: it requires specification of more details than social choice and normative theory (which I would argue, if pressed, truly differ only in their languages) and it requires “more of itself” in terms of internal consistency than is (appropriately) the norm in computational modeling.

Like I’ve said before about other analogous “comparisons,” asking which of these is the best is like asking whether a screwdriver is better than a hammer.  It depends on whether you want to screw someone or simply hit them over the head.

With that, I leave you with this.

____________

[1] There are lots of great normative arguments that encounter the dirty feet of reality, but that is to the field’s credit, as opposed to a defining quality, in my opinion.

[2] Note that this illustrates an important category of empirical implications of social choice theory: social choice results inform one about the degree to which one can infer anything about the preferences and/or beliefs of a group of individuals from the decisions and choices one observes the group make.

[3] Mechanism design is “in the middle,” as Austen-Smith and Banks make clear in their incredible treatises (volume 1 and volume 2).  That is, mechanism design asks, in essence, what general goals/outcomes can a group achieve through the proper design of a specific institution (i.e., “game”).

[4] I won’t discuss it here, but the dotted line in the figure connecting social choice and computational models reflects a very active area of research that asks when, how, and whether such reflective individuals might be able to change their behavior to make themselves better off.

Inside Baseball: Making Models of Minds, Making Models “Behave”

Most of my research starts with the presumption that individuals are rational.  By this, I mean that they know the rules of the game, and they also know that the other players are rational.[1]  Simple empirical observation indicates the inherent contestability of this presumption.  So, why do I continue to adopt it? Well, I have written before about this point in a number of posts, but (or, “accordingly”) I’ll say the presumption of rational behavior establishes the robustness of the phenomena that the model aims to “predict/explain/mimic.”  That is, presuming rationality makes my life harder as a theorist. If I did not have rationality as a stake for the tent I’m putting up, it would be pretty darn easy to place that tent wherever I want: as a theorist, I can explain anything if you let me bring irrational behavior to the table.

Setting that to the side, however, there’s another point I’ve been thinking about lately that I wanted to drone on about a bit.  If we step away from presuming rationality, we need to put something in its place.  There is a huge and growing literature on deviations from rationality in social sciences.[2] In a quick form, my challenge to those who would have models incorporate these findings is as follows:

What are the “parameters” of irrationality?

A related version of this is “which varieties of irrationality should I put in my model?” There are various forms of irrational choice: those that deal with

  1. intertemporal choice (e.g., overconfidence in your own patience in the future; see the sketch following this list),
  2. evaluating risk (e.g., is it a gamble over winnings or losses?), and
  3. updating on information (e.g., favorable vs. unfavorable information?).
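To give item 1 a concrete form, here is a minimal Python sketch (parameter values hypothetical) of the standard quasi-hyperbolic, or “beta-delta,” formulation: a single parameter beta < 1 does the work of “present bias,” and it generates precisely the overconfidence-in-your-own-future-patience reversal.

```python
# A minimal sketch of one candidate "parameter of irrationality" for
# intertemporal choice: quasi-hyperbolic (beta-delta) discounting.
# beta < 1 captures present bias; beta = 1 recovers the standard,
# time-consistent exponential discounter. All numbers are hypothetical.

def value_today(reward, periods_away, beta=0.7, delta=0.95):
    """Discounted value today of a reward arriving `periods_away` periods out."""
    if periods_away == 0:
        return reward
    return beta * (delta ** periods_away) * reward

small_soon, large_late = 10.0, 12.0

# Viewed from a distance (10 vs. 11 periods away), the agent plans to wait:
print(value_today(small_soon, 10) < value_today(large_late, 11))  # True

# When "soon" becomes "now" (0 vs. 1 period away), the same agent reverses:
# overconfidence in one's own future patience, parameterized by beta.
print(value_today(small_soon, 0) > value_today(large_late, 1))    # True
```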

So, this leads to two questions.  First, which of these is more important to include?  Common sense (and the quest for causal identification, if that’s your cuppa tea) suggests that throwing them all in will yield something unmanageable.

Second, and more precisely, even if I choose one to include (for example, to see how the availability heuristic affects how voters evaluate incumbents and challengers, and how this affects campaign decisions), how do I parameterize/represent this bias?

Put another way, most (formal and informal) models of behavioral biases rely on at least an implicit notion of perception: in a nutshell, most biases in choice can be described as some “label” or “feature” of the decision problem that should not bear on the decision-maker’s ultimate choice(s) but in fact does.

The question, and conclusion of this post (in search of feedback), is how do we model the world of “features?”  That is, how does one make an even plausibly “general” theory of a phenomenon that includes behavioral biases?  After all, most models are designed to be generally applicable—where “generally” means across time, or space, or something.  Traditional models of choice/behavior presume that people have a “general” feature known as a preference ordering (or, essentially equivalently, a utility/payoff function).  But the (both formal and informal) “behavioral” models have been built largely on noting a series of deviations from the predictions of the classical choice model in a specific (often laboratory) setting.  This isn’t an objection that the findings aren’t externally valid—it’s an honest question as to how one extends the findings to general (i.e., external) settings.

So, what ingredients should I throw into the pot if I want to cook up a model of the mind to deploy in my models of political behavior and institutions?

With that, I call upon a favorite, and leave you with this.

___________

[1] There’s even more in here, as I wrote about here. I will set the issue of common knowledge of rationality aside for the purpose of this post, as it merely amplifies the point I am trying to make.

[2] I won’t bother to cite any of it, because it would be merely gratuitous and possibly lead to erroneous inferences about what I intend to be inferred by the inevitable omissions from such a list.  That concern, if you think about it, is meta. (What? Oh, okay… start here.)

Inside Baseball: Weather you like it or not, models are useful.

As a theorist, I write models.  (There is a distinction drawn between “types” of theorists in political science. It is a casual and superficial one: all theorists write models, just in different languages.)

One of the biggest complaints I hear—from both (some) fellow theorists and (at least self-described) “non-theorists”—is the following equivalent complaint in different terms:

  1. Theorists: …but, is your model robust to the following [insert foil here]
  2. “Non-theorists”:  …but, your model doesn’t explain [insert phenomenon here]

It is an important point—perhaps the most (or, only) important point—of this post that these are the same objection. I have been busy for the past month or so, and in the interest of getting those phone lines lit up, I thought I would opine briefly on what a social science model “should” do.  Of course, your mileage may vary, and widely.  This is simply one person’s take on an ages-old but, to me at least, underappreciated problem.

Models necessarily establish existence results. That is, a model tells you why something might happen.  It does not even purport to tell you why something did, or will, or did not, or will not, happen.  (Though I have a different take on a related but distinct question about why equilibrium multiplicity does not doom the predictive power of game theory.)

Put it another way: a model is a (hopefully) transparent, complete (i.e., “rigorous”) story or narrative offering one—most definitively not necessarily exclusive—explanation for one or more phenomena.  I regularly (co-)write models of politics (recent examples include this piece on cabinets and other things, this piece on the design of hierarchical organizations, this piece on electoral campaigns, and this forthcoming book on legitimate political decision-making).  All of them are “simply” arguments.  None of them are dispositive.  The truth is, reality is complicated.

Politics is a lot like meteorology.  We all know and enjoy, repeatedly, jokes along the lines of “hell, if I could get a job where you only need to be right 25% of the time….,” but the joke makes a point about models in general.  Asking any model of politics to predict even half of the cases that come to you after reading the model is like asking the meteorologist to, say, correctly and exactly predict the high and low temperature every day at your house.  No model does that.  Furthermore, it is arguable that no model should be expected to, but that’s a different question.  More importantly, no model is designed to do this…because it defeats the point of models.

Running with this, consider for a moment that a lot more is spent on meteorological models than on political, social, and economic ones (e.g., the National Weather Service budget is just shy of $1 billion and that of the Social and Economic Sciences at the NSF is approximately 10% of that). Models are best when they are clear and reliable.  Sometimes, reliability means—very ironically—“incomplete in most instances.”  Consider a very reliable “business model”:

“Buy low, sell high.”

This model, setting aside some signaling, tax, and other ancillary motivations (which I return to below), IS UNDOUBTEDLY THE BEST MODEL OF HOW TO GET RICH. 

However, it is incomplete.  WHAT IS LOW? And you can’t answer, “anything less than `high,'” because that merely pushes the question back to WHAT IS HIGH?

Of course, some people will rightly say that this indeterminacy is what separates theory from praxis.  The fact is, even the best models don’t necessarily give you “the answer.”  Rather, they give you an answer.  One can reasonably argue, of course, that a model is “better” the more often its basic insights apply.  But that is a different matter.

Returning to the “buy low, sell high” model, consider the following quick “thought experiment.”  Suppose that Tony Soprano approaches you and says, “please buy my 100 shares in pets.com for $10,000.”  Should you?  According to the model, the answer is clearly no: shares in pets.com are worth nothing—and never will be worth anything—on the “open market.”

But, running with this, Tony has approached you for “a favor.”  Let’s not be obtuse: he bailed you out of that, ahem, “incident” in Atlantic City back in ’09, and you actually have two choices now: pay $10,000 for worthless shares in pets.com or have both of your kneecaps broken. (Protip: buy the shares.)

The right choice, given my judicious/cherry-picking framing, is to buy shares high and “sell them low.”  Well, this proves the model wrong, right?  No.

It simply changes the definition of “value when sold.”  It reveals the incompleteness of the theory/model.

This is basically my point: no model is truly “robust” to every imaginable variation and, conversely, it is certainly the case that smart people like you can come up with examples that at least seem to suggest that the model doesn’t describe the world.

It’s kind of an analogy to a foundation of empirics and statistics: central tendency.  Models should indicate an interesting part of a phenomenon of interest.  In this sense, a good model is an existence proof, sort of like the Cantor Set: it demonstrates that things can happen, not necessarily that they do.  The fact that those things don’t always happen doesn’t really say much about the model, just like you read/watch the weather every day even while making those jokes about the meteorologist.

And with that seeming non sequitur, I push forward and leave you with this.

Inside Baseball: The Off-The-Path Less Traveled

[This is an installment in my irregular series of articles on the minutiae of what I do, “Inside Baseball.”]

Lately I have been working on a couple of models with various signaling aspects.  It has led me to think a lot more about both “testing models” and common knowledge of beliefs.  Specifically, a central question in game theoretic models is: “what should one player believe when he or she sees another player do something unexpected?” (“Something unexpected,” here, means “something that I originally believed that the other player would never do.”)

This is a well-known issue in game theory, referred to as “off-the-equilibrium-path beliefs,” or more simply as “off-the-path beliefs.” A practical example from academia is “Professor X never writes nice letters of recommendation.  But candidate/applicant Y got a really nice letter from Professor X.”

A lot of people, in my experience, infer that candidate/applicant Y is probably REALLY good. But, from a (Bayesian) game theory perspective, this otherwise sensible inference is not necessarily warranted:

\Pr[\text{ Y is Good }| \text{ Good Prof. X Letter }] = \frac{\Pr[ \text{ Y is Good \& Good Prof. X Letter }]}{\Pr[ \text{ Good Prof. X Letter }]}

By supposition, Prof. X never writes good letters, so

\Pr[ \text{ Good Prof. X Letter }]=0 .

Houston, we have a problem.
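In case the problem isn’t apparent, here is the arithmetic taken literally (a quick, hypothetical Python sketch): Bayes’ rule simply has nothing to say when the conditioning event has prior probability zero.

```python
# The arithmetic above, taken literally (numbers hypothetical: a uniform
# prior over "Good" and "Bad" candidates). Prof. X never writes a nice
# letter, so the denominator of Bayes' rule is exactly zero.
pr_good = 0.5
pr_letter_given_good = 0.0   # "Professor X never writes nice letters..."
pr_letter_given_bad = 0.0    # ...regardless of the candidate's type

pr_letter = (pr_letter_given_good * pr_good
             + pr_letter_given_bad * (1 - pr_good))

posterior = pr_letter_given_good * pr_good / pr_letter  # raises ZeroDivisionError
# Conditioning on a probability-zero event leaves the posterior undefined:
# off-the-path beliefs must be *assigned* by the analyst, not derived.
```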

From this perspective, there are two questions that have been nagging me.

  1. How do we test models that depend on this aspect of strategic interaction?
  2. Should we require that everybody have shared beliefs in such situations?

The first question is the focus of this post. (I might return to the second question in a future post, and note that both questions are related to a point I discussed earlier in this “column.”)  Note that this question is very important for social science. For example, the general idea of a principal (legislators, voters, police, auditors) monitoring one or more agents (bureaucrats, politicians, bystanders, corporate boards) generally depends on off-the-path beliefs. Without specifying such beliefs for the principal—and the agents’ beliefs about these beliefs—it is impossible to dictate/predict/prescribe what agents should do. (There are several dimensions here, but I want to try and stay focused.)

Think about it this way: when an agent assigns zero probability to taking some action (an action that is interesting precisely because it would be valuable to the agent if the principal’s beliefs after observing it were of a certain form), that assignment rests on the agent’s beliefs about the principal’s beliefs about the agent in a situation that the principal believes will never happen. Note that this is doubly interesting because, without any ambiguity, the principal’s beliefs and the agent’s beliefs about these beliefs are causal.

Now, I think that any way of “testing” this causal mechanism—the principal’s beliefs about the agent following an action that the principal believes the agent will never take—necessarily calls into question the mechanism itself.  Put another way, the mechanism is epistemological in nature, and thus the principal’s beliefs in (say) an experimental setting where the agent’s action could be induced by the experimenter somehow should necessarily incorporate the (true) possibility that the experimenter (randomly) induced/forced the agent to take the action.

So what?  Well, two questions immediately emerge: how should the principal (in the lab) treat the “deviation” by the agent?  That’s for another post someday, perhaps.  The second question is whether the agent knows that the principal knows that the agent might be induced/forced to take the action. If so, then game theory predicts that the experimental protocol can actually induce the agent to take the action in a “second-order” sense.

Why is this? Well, consider a game in which one player, A, is asked to either keep a cookie or give the cookie to a second player, B. Following this choice, B then decides whether to reward A with a lollipop or throw the lollipop in the trash (B cannot eat the lollipop).  Suppose also for simplicity that everybody likes lollipops better than cookies and everybody likes cookies better than nothing, but A might be one of two types: the type who likes cookies a little bit, but likes lollipops a lot more (t=Sharer), and the type who likes cookies just a little bit less than lollipops (t=Greedy).  Also for simplicity, suppose that each type is equally likely:

\Pr[t=\text{Sharer}]=\Pr[t=\text{Greedy}]=1/2.

Then, suppose that B likes to give lollipops to sharing types (t=Sharer) and is indifferent about giving lollipops to greedy types (t=Greedy).

From B’s perspective, the optimal equilibrium in this “pure” game involves

  1. Player B’s beliefs and strategy:
      1. B believing that player A is Greedy if A does not share, and throwing the lollipop away (at no subjective loss to B), and
      2. B believing that A is equally likely to be a Sharer or Greedy if A does share, and giving A the lollipop (because this results in a net expected gain for B).
  2. A’s strategy:
      1. Regardless of type, A gives B the cookie, because this (and only this) gets A the lollipop, which is better than the cookie (given B’s strategy, there is no way for A to get both the cookie and lollipop).

Now, suppose that the experimenter involuntarily and randomly (independently of A’s type) forces A to keep the cookie (say) 5% of the time.  At first blush, this seems (to me at least) a reasonable way to “test” this model.  But, if the experimental treatment is known to B and A knows that B knows this, and so forth, then the above strategy-belief profile is no longer an equilibrium of the new game (even when altered to allow for the 5% involuntary deviations). In particular, if the players were playing the above profile, then B should believe that any deviation is equally likely to have been forced upon a Sharer as upon a Greedy player A (a numerical sketch of this step follows the list below).  Thus, B will receive a positive expected payoff from giving the lollipop to any deviator.  Following this logic just about two more steps, every perfect Bayesian equilibrium of this “experimentally-induced” game involves

  1. Player B’s beliefs and strategy:
      1. B believing that player A is equally likely to be a Sharer or Greedy if A does not share, and thus giving A the lollipop, and
      2. It doesn’t matter what B believes, or what B does, if A does share. (Thus, there is technically a continuum of equilibria.)
  2. A’s strategy:
      1. Regardless of type, A keeps the cookie, because this gets A both the cookie and the lollipop.
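Here is the numerical sketch promised above (B’s payoffs are normalized hypothetically: 1 for handing the lollipop to a Sharer, 0 for a Greedy type, 0 for trashing it): once forced deviations hit both types with equal probability, B’s posterior after a “keep” is flat, giving strictly beats trashing, and the original profile unravels.

```python
# A numerical sketch of the unraveling step. B's payoff from handing the
# lollipop to a Sharer is normalized to 1, to a Greedy type to 0 (B is
# indifferent), and trashing it to 0; the normalization is hypothetical,
# the structure is as stipulated in the text.

pr_sharer = 0.5
tremble = 0.05   # the experimenter forces A to keep the cookie 5% of the time

# Under the candidate equilibrium, both types share unless forced to keep,
# so a "keep" is observed only via the type-independent tremble:
pr_keep_given_sharer = tremble
pr_keep_given_greedy = tremble

pr_keep = (pr_keep_given_sharer * pr_sharer
           + pr_keep_given_greedy * (1 - pr_sharer))
posterior_sharer = pr_keep_given_sharer * pr_sharer / pr_keep
print(posterior_sharer)   # 0.5: the "deviation" carries no information at all

# B's best response after observing "keep": give iff it beats trashing (0).
expected_payoff_from_giving = posterior_sharer * 1 + (1 - posterior_sharer) * 0
print(expected_payoff_from_giving > 0)   # True: B hands over the lollipop...

# ...so keeping the cookie now earns A the cookie AND the lollipop, and the
# original profile collapses: both types keep voluntarily.
```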

By the way, this logic has been used in theoretical models for quite some time (dating back at least to 1982).  So, anyway, maybe I’m missing something, but I am starting to wonder if there is an impossibility theorem in here.

Inside Baseball: Uncommon Knowledge

Note: This is the first of what might be an irregular “column” of sorts, “Inside Baseball,” focusing on the minutiae of my research, as opposed to current events.

The heart of game theory is “what would everyone else think if I do what I am about to do differently?”

This is slightly different than the standard “introduction to game theory” approach, where the focus is often on the related question, “what would everyone else do if I do what I am about to do differently?”  But while the difference is slight, it is fundamental.  Game theory is about beliefs, or more appropriately, about consistency of beliefs.

This point bedevils empirical applications (or, more crudely, “tests”) of game theory for at least two reasons.  First, we rarely, if ever, can measure beliefs in anything approximating a direct fashion.  There is a core concept in game theory that is amenable to this test, known as rationalizability, and—unsurprisingly to me as a game theorist—people frequently refute the claim that all actions are rationalizable.  But let’s leave that point to the side.

The second point is more important, to me at least.  At the heart of game theory is the idea that not only are beliefs consistent with one’s own actions (that’s rationalizability, in a nutshell) and consistent with others’ actions (that’s Nash equilibrium, very loosely), they are also consistent with each other.  That is, in any reasonable game theoretic notion of equilibrium, every person not only acts in accordance with his or her beliefs about what others will do, he or she also understands (“believes”) correctly what everyone else in the game believes, understands that the other players believe correctly about what the player in question believes, including that the player believes correctly about what the other players believe about what the player in question believes about their beliefs, and so forth….

This uncommon notion is an instance of what is referred to as common knowledge in game theory.

Well, this uncommon notion is simultaneously elegant and unambiguously empirically false.  For example, it flies in the face of the reality that Florida Gulf Coast University made it to the Sweet 16.

But more seriously, this point is exactly the point of game theory. Game theory is a theoretical enterprise and accordingly requires a priori constraints for the purpose of being meaningful.  And, since this constraint is theoretically elegant and epistemologically appealing, one must always keep in mind that game theory is an inherently philosophical endeavor.  While one can (and should) certainly employ the trappings of game theory for empirically-minded endeavors, the goal of equilibrium analysis is inherently normative or prescriptive.  In other words, game theory models ask “what can (in theory) be achieved in a world in which individuals are intimately involved with the interaction at hand?”

A key (and illuminating) point in this regard is the beginning point of this post: what can happen in equilibrium is, in most interesting settings (i.e., “games”), dependent on what each individual believes about what will happen—or, more fundamentally, what other involved individuals will believe—if he or she acts differently.

When you take this point seriously, you must realize that “testing” game theory models is an inherently ambiguous enterprise.  Suppose the model “works.”  Did it work for the “game theoretically correct” reasons?  Suppose the model doesn’t work.  Why did it fail?

These are important questions, and any answer to either one has no bearing on the “validity” of game theory. Rather, the fact that one could ask either of these questions, and the context within which these questions are accordingly posed, is to the credit of game theory.  In a nutshell, every time equilibrium predictions fail, an empirical angel gets his or her wings thanks to game theoretic reasoning.

With that, I leave you with this.