Can a Game Know Its Own Rules?

Hi again! The question I’m about to pose is one that, I’m reliably informed, clears rooms at cocktail parties.1 But I think it sits at the foundation of why institutions are so hard to reform — and why the people who try to reform them so often end up making things worse. That’s for next time, though. Today, I want to talk about games.

Taking Your Ball and Going Home

Here’s a scene everyone recognizes. Two kids are playing a game — basketball, say. One of them is losing. So he picks up the ball, says “this is stupid,” and goes home. (Note: he never says, “I forfeit the game.” Maybe he was in a hurry?) Anyway, pragmatically at least, “uhh, game over.” Sounds like a lot of (mostly less fun) games I have played in life. (I won’t tell you which character I was playing, but I will confess that I have played both “roles,” so to speak. I’m a “double threat,” I suppose. Is that a compliment to myself?)

Now: what just happened, strategically? Within the rules of basketball, there is no explicit provision for this exact situation. Instead, the “rules of basketball” understandably tell you “what happens” when you shoot (depending on whether it “goes through the hoop,” for example), when you foul, when the clock runs out. They do not tell you what happens when a player picks up the ball and leaves the court, never to return. This action is, formally speaking, outside the game. Your first instinct might be: “Well, obviously — he loses. He quit.” And that’s a perfectly reasonable, “practically accurate” interpretation. But notice that “he quit, and therefore he loses” is your (and, yes, most of society’s) inference, not anything the rules literally say.

To make this less ethereal, suppose instead the kid says, “I’m so sorry — my parents are here, I have to leave!” Should that kid lose because of his parents’ timing/schedules? (And, in spite of my inclinations, no, “don’t be a stickler right now.” Yes, that’s about to get “ironic AF”.)

The rules of basketball define how you score and how the clock works; they don’t contain a general provision for “a player decided to leave and never come back.” You’re filling the gap with common sense — and common sense, as we’ll see, is doing a lot of heavy lifting that the formal rules cannot. Let me push on this with a darker example, because I think it reveals something important.

The Penalty Ceiling

Suppose, in the course of an NBA game, you want to prevent an opponent from scoring. You could commit a blocking foul. You could commit a hard foul — a flagrant foul, in the NBA’s terminology.2 The NBA distinguishes two levels: a Flagrant 1 (“unnecessary contact”) gets you two free throws and possession for the other team, while a Flagrant 2 (“unnecessary and excessive contact”) adds an ejection. That’s where the ladder ends. There is no Flagrant 3. So: what if, instead of committing a hard foul, you grab the opposing player and strangle him? Within the formal rules of basketball, the in-game consequence is… [flips through pages speedily….] well, it’s identical to a Flagrant 2 foul. Ejection. Two free throws. Possession. The rules literally cannot distinguish, in terms of game outcomes, between a very hard basketball play and attempted murder. Everything above the Flagrant 2 ceiling looks the same to the game. Criminal law handles the strangulation, of course — but that’s an external enforcement system, a different “game” entirely. Within the four corners of basketball’s rules, the marginal in-game cost of escalating from a hard flagrant to actual assault is zero.3

Now, you might (yes, quite reasonably) think: “Fine, but no one actually strangles an opponent during a basketball game. The criminal law deters that.” True. But the fact that you need to invoke an entirely separate system of rules (here: “the rules of the legal system”) to handle actions that are physically possible within the game is precisely the point. From a logical perspective, the rules of the “game of basketball” themselves have a ceiling,4 and above that ceiling, deterrence vanishes.

This matters beyond basketball. Consider: why have police unions historically resisted making the penalty for assaulting an officer as severe as the penalty for killing one? It’s not squeamishness. It’s strategy. If assaulting a cop carries ten years and killing a cop carries life, then a suspect who has already committed the assault faces an enormous marginal cost for escalating further. The gradient protects the officer. But if both carry life? The marginal cost of escalation drops to zero. A suspect who has already crossed the assault threshold faces no additional deterrence against killing. The punishment structure only deters escalation when there’s room to escalate into.

The general principle: any finite penalty schedule creates a flat region at the top where marginal deterrence fails. And raising the ceiling doesn’t solve the problem — it just moves the flat region higher. You haven’t eliminated the zone where deterrence vanishes; you’ve simply changed where (i.e., “conditional on what action?”) the deterrence “has its bite.”
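The flattening argument can be made concrete with a toy penalty schedule. A minimal sketch in which all category names and numbers are hypothetical (only the shape of the schedule matters):

```python
# Hypothetical in-game penalty schedule, ordered by severity of the act.
# Units are arbitrary "penalty points"; the ceiling is what matters.
penalties = {
    "common foul": 1,
    "flagrant 1":  4,
    "flagrant 2":  9,   # top of the ladder: ejection, free throws, possession
    "assault":     9,   # the rules cannot assign anything worse...
    "worse still": 9,   # ...so everything above the ceiling looks identical
}

# Marginal deterrence is the *difference* in penalties between adjacent acts.
acts = list(penalties)
for a, b in zip(acts, acts[1:]):
    print(f"{a} -> {b}: marginal in-game cost = {penalties[b] - penalties[a]}")
```

The last two lines of output show a marginal cost of zero: once you are at the ceiling, escalation is free as far as the rules are concerned. Adding the same amount to every number just relocates the flat region; it never removes it.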

And there’s a second problem with “if you do X, you lose” — one that is, if anything, even more fundamental. Everything I’ve said so far implicitly assumes a two-player game. In a (zero-sum)5 two-player game, “you lose” means “your opponent wins,” and since you have exactly one opponent, this is unambiguously bad for you. The fix might fail for other reasons, but at least it’s a punishment. Add a third player and even this breaks down. “You lose” no longer determines who wins — it just removes you from contention. And the question of which remaining player benefits from your removal is now a strategic variable. If you prefer Player C to Player B, and your continued participation is helping B more than C, then losing is not a punishment — it’s a gift to your preferred outcome. “If you break this rule, you lose” becomes, in effect, “if you break this rule, you get to kingmake.”6 The penalty has been transformed from a deterrent into a strategic instrument, and, having assigned a definite/predictable outcome to the violation in question, the rules have no way to prevent (or, somewhat ironically, deter) this type of behavior. They did exactly what they were supposed to do. The problem is that what they were supposed to do “isn’t enough” — or, more precisely, the rules are not incentive compatible within the game itself.
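A minimal numeric sketch of the kingmaking problem, with purely illustrative payoffs and an assumed who-helps-whom structure (none of this is derived from any specific game):

```python
# Player A's ranking of outcomes (higher = better). A prefers C to B.
payoff_A = {"A wins": 2, "C wins": 1, "B wins": 0}

def outcome(a_plays_on: bool) -> str:
    # Stylized assumption: A can no longer win. Staying in splits the
    # anti-B opposition, so B wins; quitting hands the game to C.
    return "B wins" if a_plays_on else "C wins"

# "You lose" is supposed to be the punishment, but quitting is A's
# best response: the rules have handed A a kingmaking move.
print(payoff_A[outcome(False)] > payoff_A[outcome(True)])  # True
```

No penalty schedule fixes this: the quantity that matters to A is not how bad losing is for A, but which remaining player the loss helps.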

This is not that exotic, of course. In sports, it’s called tanking: a team deliberately loses late-season games to secure a more favorable draft pick or dodge a stronger playoff opponent. In elections, it’s strategic withdrawal: a candidate drops out not because they can’t win, but to determine who among the remaining candidates does. In legislatures, it’s the entire logic of strategic voting and logrolling.

Simple and universal point: whenever “a game” has three or more players, even the declarative “you lose” outcome is no longer necessarily the worst possible outcome. How you lose, and when you lose, and who benefits from your loss are all strategic variables that the rules have handed you.7 The penalty, intended to close the game, has opened it. (Readers of this blog will note the family resemblance to a certain famous theorem about what happens when you have three or more alternatives: it sort of rhymes with “Mia Farrow.” We’ll come back to this.) I want to convince you that this problem is not trivial at all. In fact, I think it’s a deep problem, one that connects to some of the most important results in mathematics and political economy.

The Chessboard, Overturned

Consider chess. Chess is, compared to basketball in the driveway, a remarkably well-specified game. The rules define every legal move, every legal position, and every terminal outcome (checkmate, stalemate, draw by repetition, and so on). Chess even has a formal provision for one action that might seem “outside” the game: resignation. If you tip over your king, the game ends and your opponent wins. Clean, elegant, formally complete. But now imagine a player who, upon finding herself in a losing position, sweeps all the pieces off the board and onto the floor. What happened? Not a resignation — she didn’t tip her king. Not a checkmate. Not a draw. The rules of chess, so carefully specified, have nothing to say about this. And here’s what’s interesting: it’s not obvious what they should say. The most natural response — the one most people jump to — is: “Well, obviously she loses. Flipping the board is just resignation with theatrics. We can infer that she wanted to concede and was simply… efficient about it.” And in a single game of chess, maybe that resolution works well enough. But notice what it’s doing: it’s interpreting a physical action (scattering pieces) as a strategic action (resignation) by reasoning about the player’s intent. The rules of chess say nothing about intent. We’re filling the gap with inference — and inference, as we’re about to see, opens its own can of worms.

The Game Within the Game

Here’s where it gets interesting (Ed: …Finally?). Suppose our chess player isn’t playing a single game. She’s playing a best-of-seven match. She’s down a game, and the current game — game 3 — is going badly. She has two options within the formal rules: play on to the bitter end, or resign. But these two options are strategically different in the context of the match, even though they produce the same outcome in game 3 (she loses). Playing to the bitter end reveals information — about her style, her preparation, her responses to specific positions — that her opponent can exploit in games 4 through 7. Resigning early conceals that information. Accordingly, the timing and manner of her concession is itself a strategic variable, one that the rules of chess (which govern individual games) don’t acknowledge at all. The match is a game; each game within the match is a game; and the two levels interact in ways that neither level’s rules fully capture. Now: is it “legitimate” for a player to play badly — or concede early — in game 3 in order to improve her chances in games 4, 5, 6, and/or 7? While I play chess, I’m not serious at it. (Ed: You mean you’re not that good at it?) That said, I suspect that most chess players would say this offends the spirit of competition. (To understand why, ask yourself: does anybody think being described as tanking something is a compliment?) But the rules of a best-of-seven match, as typically specified, say nothing about it. We’re back in the gap between what the rules formally cover and what is physically (and strategically) possible.

What Poker Understands

This is a good moment to note that at least one common game does understand the problem we’re circling around — or at least one important dimension of it. In standard Texas Hold’em, when all of your opponents fold, you win the pot. You may then show your cards to the table, but you are explicitly not required to. This is a rule about information, and it is one of the rare cases where a game’s designers grasped that the strategic management of private information is itself part of the game. Whether you show a bluff, show a strong hand, or show nothing at all is a decision with consequences for future hands — and the rules protect your right to make that decision. Most rule systems are not nearly this sophisticated. They either ignore the information dimension entirely (chess doesn’t care, or, more accurately, is realistic about the fact that it “can’t measure” what you were “thinking” about doing), or — and this is the case that will matter most for us — they try to compel disclosure, and immediately discover that compelled disclosure is extraordinarily hard to enforce.

Belichick’s Injury Reports (and Other Mendacities)

Which brings us to the NFL, and to a man who made a career out of finding the gaps between what rules say and what rules mean. The NFL requires teams to publicly disclose player injuries before each game. The purpose is transparent: betting markets, opposing teams, and fans should have access to the same basic information about who’s healthy and who isn’t. The rule was designed to “level the playing field” — to prevent teams from gaining a strategic advantage by concealing private information about their own roster. This is, on its face, a reasonable rule. It is also exactly the kind of rule that is most vulnerable to manipulation, because it attempts to regulate something — private information — that the regulator cannot directly observe. The NFL can see what a team reports. It cannot easily verify whether the report is accurate. And so Bill Belichick, with characteristic precision, listed half his roster as “questionable” every single week. Technically compliant. Informationally useless. The rule required disclosure; Belichick disclosed — in a way that conveyed nothing. The spirit of the rule was defeated by the letter of the rule, and the letter couldn’t be tightened without creating new problems. (What does “accurate” mean? Must a team disclose a player’s private medical details? Who adjudicates disagreements about severity?) Notice the irony: the injury disclosure rule was created specifically to prevent teams from “gaming the game” with private information. But the rule itself became the game that got gamed. This isn’t a bug in the NFL’s rule-writing process. I think it’s a theorem — and we’re about to see it again.

Belichick’s Safety

Let me give you a second Belichick example, because one might be an anecdote but two starts to look like a pattern (and, yes, I am both a proud Tarheel and Steelers fan, so I am not “unbiased” with respect to Billy B). In a 2003 NFL game, Belichick’s New England Patriots were leading the Denver Broncos late in the game. Facing a 4th down deep in their own territory, the conventional play would be to punt. But Belichick did something that, at the time, struck many observers as bizarre: he had his punter intentionally run out of the back of the end zone, conceding a safety — two points for Denver. Why? Because a safety, unlike a punt, is followed by a free kick from the 20-yard line, which typically travels farther and is harder to return than a punt from deep in your own end zone. Belichick wasn’t breaking any rules. He was following them. But he was exploiting a feature of the rule mapping — the relationship between safeties and free kicks — that the rules’ designers almost certainly never intended as a strategic option. The rules said: “if a safety occurs, the following happens.” They assigned an outcome to the event. And that assigned outcome, in the right circumstances, made deliberately causing the event profitable. This is not a curiosity. This is a theorem.

Gibbard-Satterthwaite, in Football Pads

The Gibbard-Satterthwaite theorem, one of the foundational results in social choice theory, tells us (informally) that any sufficiently rich system of rules that isn’t dictatorial — that is, any system where more than one person’s actions matter — is manipulable. There exists some situation in which some agent can achieve a better outcome by acting contrary to the system’s intended purpose. Both of Belichick’s exploits are Gibbard-Satterthwaite in football pads. The NFL’s rules are “sufficiently rich” (they cover a complex, multi-agent strategic environment) and non-dictatorial (both teams’ actions matter). So the theorem guarantees that there exist situations where a team can benefit by doing something the rules didn’t envision as a strategic choice. The intentional safety was always there, latent in the rule book, from the moment the safety/free kick provision was written. The meaningless injury report was always available, from the moment the disclosure rule was written. It just took decades — and a coach who modeled the game differently than the rule designers — to find them. And notice the computational point: these exploits were hard to find. Not hard in the sense of requiring genius (though Belichick is a genuinely brilliant strategic mind), but hard in the sense that the space of possible rule interactions is vast, and most people never think to search it. The manipulability is guaranteed by theorem; the discovery of any particular manipulation is a search problem of potentially enormous complexity.
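If you want to see the manipulability rather than take the theorem on faith, a brute-force search is small enough to run by hand. The sketch below (my construction, not part of the theorem's proof) checks every three-voter profile under Borda count with three alternatives; since Borda is non-dictatorial and has at least three outcomes in its range, Gibbard-Satterthwaite guarantees the search finds a profitable lie:

```python
from itertools import permutations, product

ALTS = "ABC"
RANKINGS = list(permutations(ALTS))  # all 6 strict rankings of A, B, C

def borda_winner(profile):
    # Borda count: 2 points for a first place, 1 for second, 0 for third.
    # Ties are broken alphabetically.
    score = {a: 0 for a in ALTS}
    for ranking in profile:
        for pts, alt in zip((2, 1, 0), ranking):
            score[alt] += pts
    return max(ALTS, key=lambda a: (score[a], -ord(a)))

def find_manipulation():
    # Search every 3-voter profile for a voter who gains by misreporting.
    for profile in product(RANKINGS, repeat=3):
        sincere_winner = borda_winner(profile)
        for i, sincere in enumerate(profile):
            for lie in RANKINGS:
                misreported = list(profile)
                misreported[i] = lie
                new_winner = borda_winner(misreported)
                # A lower index in the sincere ranking means "preferred".
                if sincere.index(new_winner) < sincere.index(sincere_winner):
                    return profile, i, lie, sincere_winner, new_winner
    return None

print(find_manipulation() is not None)  # True: Borda is manipulable
```

Note the scale mismatch the text mentions: the theorem promises a manipulation exists, but finding one is a search over the profile space — tiny here, astronomically large in anything resembling an NFL rulebook.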

The Trilemma

Now let’s go back to our ball-taker and our chessboard-flipper and think about what a game designer could do about these “outside” actions. I think there are exactly three options, and none of them is satisfactory. 

Option 1: Leave the action outside the game. The rules simply don’t address it. This is the status quo for chessboard-flipping. The game is formally incomplete: there exist feasible actions with no assigned outcome. This might seem acceptable — we handle these situations with social norms, tournament rules, or just the general understanding that you’re not supposed to do that. But “not supposed to” is doing an enormous amount of work here, and it’s not part of the formal game. We’ll come back to this. 

Option 2: Assign the action a bad outcome. “If you flip the board, you lose.” This is the most natural response, and it’s what most rule systems try to do — define penalties for rule-breaking. But here’s the problem: the moment you assign an outcome to an action, you’ve brought that action into the game. It’s now part of the strategy space. And once it’s part of the strategy space, it interacts with everything else. Belichick’s safety is exactly this: the rules assigned an outcome to the “bad” event of a safety, and that assigned outcome, in interaction with the rest of the rules, made the event strategically attractive. The injury report is a subtler version: the rules assigned a requirement (disclose) with a penalty (fines, draft picks) for noncompliance — and in doing so created a new strategic question (how to comply in form while defecting in substance) that didn’t exist before the rule did.

Worse, any newly incorporated action can be used as a threat. “Trade with me or I flip the board” is now a meaningful strategic statement, because “flip the board” has a formally defined consequence. You’ve just enriched the game in ways you may not have intended. And recall the multiplayer problem from earlier: even the seemingly nuclear option — “if you do this, you lose” — is only a deterrent when the game has exactly two players. The moment there are three or more, “you lose” becomes a strategic instrument rather than a punishment, because the violator gets to influence who among the remaining players benefits. This is not a minor caveat. Most real-world “games” — legislatures, markets, regulatory environments, organizations — have many players. In these settings, Option 2 doesn’t just fail because penalties create new strategic possibilities. It fails because the maximum penalty — total defeat — is itself a strategic resource. The penalty schedule cannot be made severe enough to deter a player who would rather kingmake than compete. There is, quite literally, no “bad enough” outcome to assign, because the badness of the outcome for the violator is not the relevant quantity — the relevant quantity is the differential effect of the violation on the remaining players, and the rules cannot control this without controlling the entire game, which is the problem we started with.

This, I think, is where the blog’s namesake result makes its quiet entrance (Ed: I just knew you were into “branding”). The two-player case is well-behaved: there’s one opponent, preferences are opposed, and penalties can work (modulo the ceiling problem). Add a third player — or a third alternative — and the structure changes qualitatively. Stability dissolves. Manipulation becomes ubiquitous. Three implies chaos.

Option 3: Define an external enforcement mechanism. “There’s a referee, and the referee handles situations the rules don’t cover.” This works — until you realize that the referee’s judgment is itself a rule system. What are the rules governing the referee? Can a player “go outside” the referee’s rules? If so, you need a meta-referee. And then a meta-meta-referee. You’ve begun an infinite regress — or, if you prefer, you’ve acknowledged that the game is embedded in a larger game, which is embedded in a larger game, and somewhere the buck has to stop at a system that is, itself, formally incomplete.

Why This Matters (or: Gödel Was Here)

If the “trilemma” above reminds you of something, it should (Ed: Oh goodness, is this another “truels post”?). Gödel’s incompleteness theorems tell us, roughly, that any formal system rich enough to express basic arithmetic cannot be both consistent and complete. There will always be true statements that the system cannot prove from within.

The analogy to games is, ahem, more than an analogy. (Is there a word for “X is analogous to X,” beyond “tautological”? Ed: Not that tautologies have ever stopped you before.) A “self-enforcing” rule is one where breaking that rule is never incentive-compatible, given the other rules of the game. This is another way of understanding “internal consistency,” for those of you playing at home.

To verify that a rule is self-enforcing, you need to check it against all other rules and all possible strategies — which is itself a statement within the system. And for any sufficiently rich game, the system cannot verify all such statements from within. There will always be some actions, some contingencies, some interactions that the rules cannot “reach” without expanding the system — at which point you’ve created a new system with new gaps. A game, in other words, cannot fully know its own rules. It cannot certify, from within, that all of its rules are self-enforcing. There will always be a kid who can pick up his ball and go home, and the game — qua game — has nothing to say about it.

A more tangible way of understanding this: any interesting game must have some rule X such that the other rules of the game (the ones that define “winning the game”) sometimes give you an incentive to break rule X.

I now dub that the Billy B Rule, and it extends far beyond American Football, Chapel Hill, and indeed time and space itself! (Ed: Seriously? …Oh, what the hell, if they’re still reading, let’s go for it, I guess.)

The Impossibility Migrates

I want to close (Ed: What? Oh, I thought you were just getting started.) by suggesting that what we’ve identified is not merely a curiosity about games. It’s a conservation law. The trilemma says that the “gap” in a rule system — the space between what the rules formally cover and what strategic agents can actually do — cannot be eliminated. It can only be relocated.

You can leave it as incompleteness (Option 1), and accept that some actions have no formal consequence.

You can try to close it by assigning penalties (Option 2), and discover that the gap reappears as manipulation — new strategic possibilities created by the very rules you wrote to prevent the old ones.

Or, you can hand it off to an external enforcer (Option 3), and watch the gap reappear one level up.

In any event, the problem is conserved; it just changes form. This pattern — call it the migration of impossibility — shows up far beyond sports and parlor games.

The “Hook”: Consider algorithmic fairness. There’s a well-known result (due to Kleinberg, Mullainathan, and Raghavan, and independently to Chouldechova) showing that two natural fairness criteria — error-rate balance and predictive parity — are generally incompatible when different groups have different base rates of the behavior the algorithm is trying to predict. This is, in its structure, an impossibility theorem of the same species as the ones we’ve been discussing: you can’t have everything you want, simultaneously, within the system.
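The arithmetic behind that incompatibility fits in a few lines. By Bayes' rule, a classifier's positive predictive value is pinned down by its error rates and the group's base rate, so equal error rates plus unequal base rates force unequal PPVs. A sketch with made-up numbers:

```python
def ppv(tpr, fpr, base_rate):
    # P(truly positive | predicted positive), via Bayes' rule.
    p = base_rate
    return tpr * p / (tpr * p + fpr * (1 - p))

tpr, fpr = 0.8, 0.2   # identical error rates imposed on both groups
p1, p2 = 0.5, 0.2     # different (hypothetical) base rates

print(round(ppv(tpr, fpr, p1), 3))  # 0.8
print(round(ppv(tpr, fpr, p2), 3))  # 0.5
```

To equalize the two PPVs you must either let the error rates differ across groups or change the base rates themselves — which is exactly the endogeneity move discussed next.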

Now, in some recent work that Maggie Penn and I have been doing, we noticed something. The classical impossibility results hold behavior fixed — they assume that people’s base rates of compliance (or recidivism, or default, or whatever the algorithm is classifying) are just facts about the world, not choices that respond to incentives.

But of course they are choices that respond to incentives, and in particular they respond to the stakes of classification — the severity of the fine, the length of the sentence, the terms of the loan. Once you recognize that base rates are endogenous — that they’re equilibrium objects shaped by the algorithm and its consequences — an escape route from the impossibility opens up. You can simultaneously achieve error-rate balance and predictive parity by adjusting the stakes of classification to induce equal base rates across groups.

Cool, …problem solved, right?

Not quite. Here comes the conservation law. The statistical impossibility disappears, but it migrates: achieving both fairness criteria requires that identical classification decisions carry different consequences for different groups. You’ve moved the inequality from the distribution of algorithmic outcomes to the severity of consequences attached to those outcomes. The impossibility doesn’t vanish. It changes address. And it gets worse — in a way that connects directly to the penalty-ceiling problem. In some cases, equalizing base rates under equal stakes requires penalizing compliance — effectively setting negative incentives that suppress the behavior the system is supposed to encourage.

That’s the fairness equivalent of flattening the penalty gradient between assault and murder. You’ve “equalized” the treatment, but you’ve destroyed the incentive structure that was generating the behavior you wanted. The gap migrates, again, from one form of unfairness to another.

I think this is a general feature of any system that tries to regulate strategic behavior. The gap between what the rules intend and what agents can do is not a deficiency of any particular set of rules. It is a structural property of the relationship between rules and the strategic agents who inhabit them. Fix it here, and it appears there. Close this loophole, and you open that one. The impossibility is conserved.

A Provocation for Next Time

So if the impossibility always migrates — if every fix to a rule system creates new gaps somewhere else — then what does this mean for the biggest, most complicated “games” we play? What does it mean for institutions, bureaucracies, governments? It means, I’ll argue, that every well-functioning institution is riddled with informal patches — norms, workarounds, conventions, and practices that exist precisely to handle the cases the formal rules can’t reach.

These patches are the institution’s solution to the migration problem: every time a gap was discovered, someone — a bureaucrat, a judge, a middle manager — found a way to cover it, and that patch became part of the operating system. The institution looks messy from the outside because it is messy. It has to be. The formal rules can’t do the job alone, and the patches are where the real work happens. And it means that anyone who looks at those patches and sees only waste, inefficiency, or evidence of a “deep state” is making a very specific error: they’re assuming the game is complete, when we just showed it can’t be.

They’re treating the messiness as a bug, when it is — often, not always, but far more often than reformers tend to appreciate — a feature. There’s also, I think, a deeper thread here about information — about the fact that rules governing who knows what, and who must disclose what to whom, are a particularly fragile species of rule. Poker understands this; the NFL tried and largely failed; and some of our most important legal infrastructure (think §6103) exists precisely at this fault line. But all of that is for next time. (Ed: Oh, you’ll be back…like in 2016? Sheesh.)

For Now, I Leave You with This

In the 1983 film WarGames, a military supercomputer called the WOPR is tasked with simulating global thermonuclear war. It plays every possible scenario — every first strike, every retaliation, every escalation — searching for one that ends in victory. It finds none. After cycling through the entire game tree, it arrives at a conclusion: “A strange game. The only winning move is not to play.” (Ed: I could make a joke about your blog, but I think you already see it, dammit.)

The WOPR, in other words, did what the trilemma says can’t be done: it verified, from within the game, that the game has no self-enforcing solution. It searched the space, hit every penalty ceiling, found every flat region at the top, discovered that every “winning” move triggers a retaliation that migrates the problem somewhere worse — and concluded that the game is, in our terms, formally incomplete.

There is no outcome the rules can assign to “global thermonuclear war” that makes initiating it incentive-incompatible (Ed: Thank goodness, …right?), because the penalty structure maxes out at “everybody dies,” and at that ceiling, the marginal cost of escalation is zero. Of course, the WOPR had an advantage we don’t: it could search the entire game tree. For the rest of us — playing games whose rules we can’t fully verify, in institutions whose patches we can’t fully see, against opponents whose strategies we can’t fully anticipate — the only honest starting point is to admit that the game is bigger than its rules. With that, I leave with one (dated, but memorable, and timeless) question: “Shall we play a game?”

  1. He didn’t inform me of this, but my friend and coauthor Tom Clark essentially encouraged me to write this up some months ago. ↩︎
  2. Note the “subtle shift” here: I moved from “basketball” to “basketball as governed by” (or, to quote James Scott’s awesome work, “made legible by”) a specific institution that, ahem, “provides basketball to the public for their enjoyment and remuneration.” ↩︎
  3. And here’s an additional wrinkle: the NBA’s rules say that no team may be reduced below five players. If a player fouls out (six personal fouls), but there are no eligible substitutes, that player stays in the game and is charged with a personal foul, a team foul, and a technical foul for each subsequent infraction. So ejections are actually the only mechanism that can force a team below five — which means our strangler has, in addition to getting himself tossed, potentially inflicted a roster-count penalty on his own team. But note: this is the same roster-count penalty he’d have inflicted with a garden-variety Flagrant 2 for an overly aggressive screen. The punishment doesn’t scale with the severity of the act. (And even the “stay in the game with a technical” rule is itself manipulable. If your player just picked up his sixth foul with 30 seconds left in a close game, is the team better off keeping him on the court — where every subsequent foul triggers another technical free throw for the opponent — or just… letting him leave and playing 4-on-5? The rule was designed to protect teams from being shorthanded. But in the right circumstances, the “protection” costs more than the problem it solves. We’ll see this pattern again.) ↩︎
  4. Speaking of “ceilings,” I am tempted to ask what Naismith would have thought of physical “ceilings” in laying out the initial rules of basketball. I don’t know if he was a physicist or even that “sophisticatedly rational,” but I would suppose that he would have eventually agreed that “having a ceiling over the game,” where you throw a ball up high to avoid defenders’ hands, would “only complicate” the eventual performance (and adjudication) of his new game. This makes me think of both the XFL and Arena Football: both are fun, partly because they borrowed some of the elements of an “already legible sport” (i.e., American Football) and “slightly modified” the nature of the constraints in that sport. ↩︎
  5. For simplicity, let’s just think about “games” where there can be no more than one winner. That’s a lot looser than “zero-sum” in a formal sense, but with two players, it’s basically without loss of interesting generality. (And, yes, I am an American, and I do (in my heart) think “ties are boring.” But that’s maybe why, or because, I find faculty meetings generally unsatisfying. There’s a lot in there, I know.) ↩︎
  6. I think the idea that “kingmaking” is a recognized verb should make all of us think more about the nature of language in both analytical and sociological terms. ↩︎
  7. I say “the rules” have “handed you” this to differentiate it from very real, “expressive” feelings of guilt or failure from being labeled “a loser.” Just ask our president DJT. The only thing he hates more than rules is being (or, it seems, being associated with) “a loser.” ↩︎

Trump, Cruz, Rubio: The Game Theory of When The Enemy of Your Enemy Is Your Enemy.

I posted earlier about truels and how the current GOP nomination approximates one.  In that post, I laid out the basics of the simple truel (i.e., a three person duel), assuming that the three shooters shoot sequentially.  Things can be different when the three shooters shoot simultaneously.[1]  Short version: Trump and Rubio aren’t allies, but game theory suggests they should both attack Cruz, in spite of this.

This is arguably a better model for debates than the sequential version: candidates prepare extensively prior to a debate, largely in ignorance of the other debaters’ preparations.  Leaving that interesting question aside, let’s work this out.  I assume that the truel lasts until only one shooter is left, and that each shooter wants to live and is otherwise indifferent.  I’ll also assume that the best shooter hits with certainty.[2]  The probability that the second-best shooter hits his or her target is p, and the probability that the worst shooter hits his or her target is q, with 0 < q < p < 1.

When there are two shooters left, each will shoot at the other.  Not interesting, but important, because it implies that the worst shooter wants to shoot at the best shooter in the first round.  So, in the first round, both the second-best and worst shooters shoot at the best shooter.  Either the best or the second-best shooter will be dead after this round (if the second-best and worst shooters each get to shoot before the best shooter but miss, then the second-best shooter will be killed with certainty).  There is also a chance that the worst shooter wins outright in the first round: the best shooter kills the second-best shooter (probability 1/3), and the worst shooter kills the best shooter (probability q).
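The first-round logic above is easy to check with a quick Monte Carlo sketch. This is my own illustration, not part of the original post: the shooter labels, the assumption that the best shooter targets the second-best, and the uniformly random firing order within each round (generalizing the two-shooter tie-breaking rule in footnote [1]) are all modeling choices I am filling in.

```python
import random

def simulate_truel(p, q, trials=20_000, seed=0):
    """Estimate survival probabilities in the simultaneous truel.

    Shooter 0 is the best (hits with certainty), shooter 1 the
    second-best (hits w.p. p), shooter 2 the worst (hits w.p. q).
    Strategies per the post: shooters 1 and 2 target shooter 0, and
    with two shooters left, each targets the other.  Assumption (mine,
    not the post's): shooter 0 targets shooter 1, and each round the
    surviving shooters fire one at a time in a uniformly random order.
    """
    rng = random.Random(seed)
    hit = {0: 1.0, 1: p, 2: q}
    wins = [0, 0, 0]
    for _ in range(trials):
        alive = {0, 1, 2}
        while len(alive) > 1:
            # Everyone picks a target before anyone fires.
            targets = {}
            for s in alive:
                opponents = alive - {s}
                if len(opponents) == 1:
                    targets[s] = next(iter(opponents))
                else:
                    targets[s] = 1 if s == 0 else 0
            order = sorted(alive)
            rng.shuffle(order)
            for s in order:
                # A shooter killed earlier this round never fires; a shot
                # at an already-dead target is wasted.
                if s in alive and targets[s] in alive and rng.random() < hit[s]:
                    alive.discard(targets[s])
        wins[alive.pop()] += 1
    return [w / trials for w in wins]
```

With, say, p = 0.8 and q = 0.5, the worst shooter comes out as the most likely survivor and the best shooter as the least likely, which is exactly the flavor of result that makes truels fun.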

What does this say about the GOP race?  Both Rubio and Trump should be shooting at Cruz.  This is a simplistic model, and it ignores a lot of real-world factors.  But that’s why it’s valuable, from a social science perspective: if (and when) the three campaigns deviate from this behavior, we know that we need to include those other factors.  Until then, you see, in this world there’s two kinds of models, my friend: those with just enough to capture the logic, and those that need to dig for more things to include.  We’ll see if this one needs to dig.

With that, I leave you with this.

____________________

[1]. For simplicity, I will assume that, if two shooters shoot at each other, then one of them, randomly chosen, will “shoot first” and, if he or she hits, kill the other shooter before that shooter can fire his or her weapon.  Note that, with this assumption, if shooter A knows that shooter B (and only shooter B) is going to shoot at shooter A, then shooter A should definitely shoot at shooter B.

[2] This assumption isn’t as strong as it appears, because the truel is already assumed to continue until only one player is left (note that it is impossible for zero shooters to survive, given the tie-breaking assumption).

The GOP’s Reality is Truel, Indeed

A truel is a three-person duel.  There are lots of ways to play this type of thing, but the basic idea is this: three people must each choose which of the other two to try to kill.  They could shoot simultaneously or in sequence.  The details matter…a lot.  I won’t get into the weeds on this, but let’s think about the GOP race following last night’s Iowa caucus results.  By any reasonable accounting, there are three candidates truly standing: Ted Cruz, Marco Rubio, and Donald Trump.  The three of them took, in approximately equal shares, around 75% of the votes cast in the GOP caucus.

The next event is the New Hampshire primary, and the latest polls (all conducted before the Iowa caucus results) have Trump with a commanding lead and Rubio and Cruz essentially tied for (a distant) second.  So, the stage is set.  Who shoots first?  And at whom?

The truel is a useful thought experiment to worm one’s way into the vagaries of this kind of calculus.  A difference between truels and electoral politics is that the key factor in a standard truel is each combatant’s marksmanship, or the probability that he or she will kill an opponent he or she shoots at.  What we typically measure about a candidate is how many survey respondents support him or her.  For the purposes of this post, let’s equate the two.  Trump is the leader, and Rubio and Cruz are about equal.

A relatively robust finding about truels is that, when the shots are fired sequentially (i.e., the combatants take turns), each combatant should fire at the best marksman, regardless of what the other combatants are doing (this is known as a “dominant strategy” in game theory).  Thus, if we think that the campaigns are essentially taking turns (maybe as somewhat randomly awarded by the vagaries of the news cycle and external events), then both Rubio and Cruz should be “shooting at Trump.”  This is in line with Cruz’s post-caucus speech in Iowa last night.

An oddity of this formulation of the truel is that it is possible that the best marksman is the least likely to survive.  This is true even if the best marksman gets to shoot first.
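To see how the “best marksman fares worst” oddity can arise, here is a small simulation sketch. It is my own illustration, not from the post: the accuracy numbers, the fixed rotation, and the targeting rule (shoot the most accurate surviving opponent, per the dominant strategy described above) are illustrative assumptions.

```python
import random

def sequential_truel(acc, trials=50_000, seed=0):
    """Monte Carlo survival shares for a sequential (turn-taking) truel.

    acc[i] is shooter i's hit probability.  Shooters fire in the fixed
    rotation 0, 1, 2, 0, ... (the dead are skipped), and each fires at
    the most accurate surviving opponent.
    """
    rng = random.Random(seed)
    wins = [0, 0, 0]
    for _ in range(trials):
        alive = [True, True, True]
        turn = 0
        while sum(alive) > 1:
            if alive[turn]:
                # Target the most accurate living opponent.
                target = max(
                    (i for i in range(3) if alive[i] and i != turn),
                    key=lambda i: acc[i],
                )
                if rng.random() < acc[turn]:
                    alive[target] = False
            turn = (turn + 1) % 3
        wins[alive.index(True)] += 1
    return [w / trials for w in wins]
```

With closely matched but imperfect shooters, say acc = [0.40, 0.39, 0.38], shooter 0 (the best marksman) ends up the least likely to survive even though he fires first, and shooter 2 (the worst) the most likely: being the biggest threat means absorbing everyone else’s fire.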

Is it current, or future, popularity? An alternative measurement of marksmanship, however, is not the current support, but the perceived direction of change in support.  After all, marksmanship is about the ability to kill someone on the next shot.

On this front, Rubio is currently the better marksman: his support in Iowa vastly exceeded expectations, while by many accounts (though not necessarily my own), Trump is the worst marksman.  If one buys this alternative measure, then the smart strategy for both Trump and Cruz is to “aim their guns” at Rubio.  We have a week to see who they each aim at.

Of course, a truel is a simplistic picture of what’s going on in the GOP nomination process. In reality, it is probably better to think that each candidate’s marksmanship depends on his (or her) choice of target.  Evidence suggests that it is harder for Trump to “shoot down” Cruz than it was for him to shoot down Bush.  Maybe I’ll come to that later.  For now, I’m still making sense of Santorum’s strategy of heading to South Carolina. For that matter, I’m trying to make sense of him being called “a candidate for President.”

With that, I leave you with this.

How Two People’s Rights Can Do Both People Wrong: Vaccines & (Anti-)Social Choice Theory

Vaccination, both in terms of its social good and the role of government in securing that social good while respecting individual liberty, has been a hot topic lately.  In fact, it’s gone viral. (HAHAHAHA.  Sorry.)  In this short post, I link the debate about vaccination, liberty, and social welfare, with the work of Amartya Sen, a preeminent social choice theorist who won the 1998 Nobel Memorial Prize in Economic Sciences.

The Vaccination Paradox. Suppose that—due to there only being one dose of the measles vaccine available—two families, A and B, each with a single child, a and b, are confronted with choosing which child (if any) to vaccinate against measles.  The choices are “a: vaccinate child a,” “b: vaccinate child b,” “n: vaccinate neither child.”

Family A would prefer that child b get vaccinated because child a has a compromised immune system, but would prefer that child a get vaccinated rather than neither child get vaccinated.  In other words, Family A‘s preference over the three outcomes is:

b > a > n.

Due to personal beliefs, Family B is opposed to vaccination for anyone, but due to child a‘s situation, prefers that child b get vaccinated rather than child a.  Thus, Family B’s preference over the three outcomes is:

n > b > a.

Now, suppose that a government agency is tasked with choosing whether to vaccinate a child and, if so, which one.  Furthermore, suppose that the government agency is required to respect the families’ wishes with respect to their own children.  That is, if either family prefers having nobody vaccinated to having their own child vaccinated, then their child is not vaccinated (i.e., the government agency is required to grant an “opt-out” exemption to each family).

What would the result be?  The opt-out exemption requirement implies that Family A is decisive with respect to a versus n, so that n will not occur: child a will be vaccinated if child b is not.  Similarly, Family B is decisive with respect to b versus n, so that b will not occur: child b will not be vaccinated.  Accordingly, because the government agency cannot choose n, and it cannot choose b, it must choose a.  Because the government agency is required to respect individual rights to opt out, child a will receive the vaccine.

Okay.  But, wait… the government agency has (implicitly) ranked the three possible vaccination choices as

a >> n >> b,

so that in spite of both families agreeing that they prefer that child b be vaccinated rather than child a:

b > a,

the government agency—because it is respecting individual rights—must vaccinate child a instead of child b.
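The elimination argument can be written out mechanically. Here is a minimal sketch (my own encoding, not Sen’s notation): each family’s opt-out right is implemented by striking, from the pair consisting of its own child and “neither,” whichever outcome that family disprefers.

```python
# Outcomes: 'a' = vaccinate child a, 'b' = vaccinate child b,
# 'n' = vaccinate neither.
pref_A = ['b', 'a', 'n']   # Family A: b > a > n
pref_B = ['n', 'b', 'a']   # Family B: n > b > a

def prefers(ranking, x, y):
    """True if this ranking places outcome x above outcome y."""
    return ranking.index(x) < ranking.index(y)

feasible = {'a', 'b', 'n'}

# The opt-out right: each family is decisive over {its own child, 'n'},
# so the member of that pair the family disprefers is struck.
for ranking, child in [(pref_A, 'a'), (pref_B, 'b')]:
    feasible.discard(child if prefers(ranking, 'n', child) else 'n')

assert feasible == {'a'}   # only "vaccinate child a" survives the rights

# ...yet both families rank b above a, so the rights-respecting choice
# is Pareto-dominated by b.
assert prefers(pref_A, 'b', 'a') and prefers(pref_B, 'b', 'a')
```

The two asserts are the whole paradox in miniature: respecting each family’s right over its own pair leaves only a, even though both families agree b is better.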

This is an example of the Liberal Paradox (or Sen’s Paradox), which states that no policymaking system can simultaneously

  1. be committed to individual rights and
  2. guarantee Pareto efficiency.

This paradox is at the heart of a surprising number of political/social conundrums. One basic reason it emerges is that individual rights are in a sense absolute and not conditioned on the preferences of others.  That is, if Families A and B could somehow write a binding contract and Family B knew/believed Family A‘s preferences, then Family B would agree to sign away their right to decline the vaccination for child b.

I’ll leave this here, but my limited take-away point is this: individual rights are important, but even in situations in which their definition seems straightforward, there’s no free lunch: individual rights can come into conflict with social welfare.  That’s not saying that individual rights should be sacrificed, of course.  But it is saying that preserving individual rights does not always maximize social welfare.

Boehner in a Manger? The Entitativity Scene in DC

The SHUTDOWN-CEILING SHOWDOWN has been depicted, typically, as “the Republicans versus [Democrats/Obama].”  The simple story here is that many people think of “the Republicans” as a unified whole, a group qua “unitary actor,” a collection of individuals who are acting as one.[1]  My recent posts (e.g. this, this, and this) and my recent blustering on the wireless have attempted to make clear that this understanding of the negotiations/stalemate obfuscates attempts to understand how the drama is playing out and the true causes of the “governing from-crisis-to-crisis” dynamic in DC.

The popular/informal understanding of the Republicans is an example of what is known as entitativity, which describes the perception of a group as an entity, generalized from its individual members.  My recent posts have pointed out that Boehner and the other leaders have multiple factions to worry about, and I argue that the varying incentives of the members of these various factions provide a key starting point in understanding the apparent irrationality of this bargaining process.

A very nice example of the language/framework I am discussing here is provided by this piece by Jonathan Chait, “Stop Fretting: The Debt-Ceiling Crisis Is Over!”, in which he writes:

Of the Republican Party’s mistakes, the most rational was its assumption that Democrats would ultimately bend. … Democrats would have to pay a ransom. Republicans spent weeks prodding for every weakness.

Democrats seemed to share a genuine moral revulsion at the tactics and audacity of a party that had lost a presidential election by 5 million votes, lost another chance to win a favorable Senate map, and lost the national House vote demanding the winning party give them its way without compromise.

Probably the single biggest Republican mistake was in failing to understand the way its behavior would create unity in the opposing party. 

The MathOfPolitics of entitativity is interesting.  Defining the notion in 1958,[2] Donald T. Campbell described three important characteristics that support entitativity, in the sense of leading observers to think of a group as an entity:

  1. Common fate, or the degree to which the members experience common outcomes,
  2. Similarity of members’ behaviors and/or appearances, and
  3. Proximity, or the positive correlation of perception: when you see/”think of” one member, you tend to see/”think of” other members as well.

The three dimensions of entitativity are really interesting to think about in the context of Congressional bargaining because each of these dimensions is politically relevant on its own.  Common fate, for example, is highly relevant for the two parties in DC.  The candidates of a given party clearly face a common challenge (winning the most votes in their districts on the first Tuesday after the first Monday in November), and they tend to face common hurdles.  Parties are “brand names,” and many political scientists believe that many voters punish and/or reward candidates based on their partisan affiliation, above and beyond the platforms and records of the candidates themselves.  Furthermore, when it comes to being in the majority or minority following (say) the 2014 election, House Republicans obviously have a common fate.

Similarity is clearly relevant in terms of individuals’ perceptions of the parties.  For example, the GOP is worried that it is seen as the party of stuffy old white men. At the same time, the reality of common fate discussed above provides an incentive for displaying unity, something that Boehner has repeatedly been faced with (last week and back in 2011), though it is arguably collapsing today.  The Democrats have been pursuing a similar strategy.  What is important to note in each of these cases is the public argument/assertion that “the party” should be unified.  The appeals in these cases are coming from party leaders, on partisan/policy grounds.  That is, these are not appeals along the lines of “look, we all have the same interests and just happen to share a label.”  These are “we share a label because and/or therefore we share the same interests.”

Finally, proximity is an interesting consideration in light of some second-order differences between the members themselves.  For example, the Tea Party is recognized as a faction within the GOP more than, say, Log Cabin Republicans because members of the tea party are “seen together” more frequently.  For example, the Tea Party has a caucus in the House.  Similarly, some members such as Senators Ted Cruz, John McCain, and Rand Paul, and Representative Michele Bachmann are portrayed (and often promote themselves) as different than the “typical” Republican, and their actions and statements are often differentiated from those of “the Republicans.”

The notion of entitativity is interesting at times like these precisely because the relationship between its origins and effects illustrates how the costs and benefits of partisan “unity” are—at least relative to times of normal governing/bargaining—altered or reversed in times of crisis bargaining, as is going on in DC now.  More specifically, it demonstrates an alternate understanding of the “Hastert rule” and other related procedures used by both parties to attempt to promote internal unity/consensus on external decision-making.  When dealing with a normal bargaining situation—one in which members are not (and do not expect to be) under particularly fine-grained scrutiny by their constituents—copartisans with at least partially common (policy and electoral) fates have an incentive to bind themselves together.  However, when the matter at hand is high profile, the partial nature of the common fate undergirding “the party” is revealed: as constituents watch their members’ votes more closely, the differences between the districts represented by—and hence the electoral fates/fortunes of—the party’s members come into starker relief.  Similarly, as the policy stakes of the vote “get larger,” the usually minor differences between the party’s members’ “policy fates” (policy goals, personal ideologies) loom larger.[3]

The problem, then, is how to balance the value of unity during “normal times”—unity that is truly useful only to the degree that it is perceived/believed by others and accordingly is less valuable if it cracks when “everybody is watching”—and the need for flexibility when governing requires simply “getting the job done.”  If we could all credibly commit, for today at least, to “not paying attention to which party a member is in,” I strongly suspect the current crisis would end within the hour.[4]  In a sense, we have to take entitativity out of the scene for a second to get to the next act.

With that, I leave you with this.

___________________

[1] I talk about the GOP here, but the basics apply to discussions of “the Democrats,” too.

[2] See Campbell, D. T. (1958). “Common fate, similarity, and other indices of the status of aggregates of persons as social entities.” Behavioral Science, 3, 14–25.

[3] Remember Blue Dog Democrats and Senator Nelson’s “Cornhusker Kickback?” Both parties have unity issues when the stakes get big and the votes get visible.

[4] Similarly, we could agree to not record the votes.  I’ve said before, if we could somehow get a debt ceiling increase/CR approved with a voice vote, this would be over already.  And, of course, the House used to do something like this all the time, with “the Gephardt Rule.”  The GOP got rid of this in 1995.  But, as Sarah Binder notes, this particular rule would not save us much pain right now.