Can a Game Know Its Own Rules?

Hi again! The question I’m about to pose is one that, I’m reliably informed, clears rooms at cocktail parties.1 But I think it sits at the foundation of why institutions are so hard to reform — and why the people who try to reform them so often end up making things worse. That’s for next time, though. Today, I want to talk about games.

Taking Your Ball and Going Home

Here’s a scene everyone recognizes. Two kids are playing a game — basketball, say. One of them is losing. So he picks up the ball, says “this is stupid,” and goes home. (Note: he never says, “I forfeit the game.” Maybe he was in a hurry?) Anyway, pragmatically at least: “uhh, game over.” Sounds like a lot of (mostly less fun) games I have played in life. (I won’t tell you which character I was playing, but I will admit/confess that I have played both “roles,” so to speak. I’m a “double threat,” I suppose. Is that a compliment to myself?)

Now: what just happened, strategically? Within the rules of basketball, there is no explicit provision for this exact situation. Instead, the “rules of basketball” understandably tell you “what happens” when you shoot (depending on whether it “goes through the hoop,” for example), when you foul, when the clock runs out. They do not tell you what happens when a player picks up the ball and leaves the court, never to return. This action is, formally speaking, outside the game. Your first instinct might be: “Well, obviously — he loses. He quit.” And that’s a perfectly reasonable/“practically accurate” interpretation. But notice that “he quit, and therefore he loses” is your (and, yes, most of society’s) inference; it is not something the rules literally say.

To make this less ethereal, suppose instead the kid says, “I’m so sorry — my parents are here, I have to leave!” Should that kid lose because of his parents’ timing/schedules? (And, in spite of my inclinations, no, “don’t be a stickler right now.” Yes, that’s about to get “ironic AF”.)

The rules of basketball define how you score and how the clock works; they don’t contain a general provision for “a player decided to leave and never come back.” You’re filling the gap with common sense — and common sense, as we’ll see, is doing a lot of heavy lifting that the formal rules cannot. Let me push on this with a darker example, because I think it reveals something important.

The Penalty Ceiling

Suppose, in the course of an NBA game, you want to prevent an opponent from scoring. You could commit a blocking foul. You could commit a hard foul — a flagrant foul, in the NBA’s terminology.2 The NBA distinguishes two levels: a Flagrant 1 (“unnecessary contact”) gets you two free throws and possession for the other team, while a Flagrant 2 (“unnecessary and excessive contact”) adds an ejection. That’s where the ladder ends. There is no Flagrant 3. So: what if, instead of committing a hard foul, you grab the opposing player and strangle him? Within the formal rules of basketball, the in-game consequence is… [flips through pages speedily….] well, it’s identical to a Flagrant 2 foul. Ejection. Two free throws. Possession. The rules literally cannot distinguish, in terms of game outcomes, between a very hard basketball play and attempted murder. Everything above the Flagrant 2 ceiling looks the same to the game. Criminal law handles the strangulation, of course — but that’s an external enforcement system, a different “game” entirely. Within the four corners of basketball’s rules, the marginal in-game cost of escalating from a hard flagrant to actual assault is zero.3

Now, you might (yes, quite reasonably) think: “Fine, but no one actually strangles an opponent during a basketball game. The criminal law deters that.” True. But the fact that you need to invoke an entirely separate system of rules (here: “the rules of the legal system”) to handle actions that are physically possible within the game is precisely the point. From a logical perspective, the rules of the “game of basketball” themselves have a ceiling,4 and above that ceiling, deterrence vanishes.

This matters beyond basketball. Consider: why have police unions historically resisted making the penalty for assaulting an officer as severe as the penalty for killing one? It’s not squeamishness. It’s strategy. If assaulting a cop carries ten years and killing a cop carries life, then a suspect who has already committed the assault faces an enormous marginal cost for escalating further. The gradient protects the officer. But if both carry life? The marginal cost of escalation drops to zero. A suspect who has already crossed the assault threshold faces no additional deterrence against killing. The punishment structure only deters escalation when there’s room to escalate into.

The general principle: any finite penalty schedule creates a flat region at the top where marginal deterrence fails. And raising the ceiling doesn’t solve the problem — it just moves the flat region higher. You haven’t eliminated the zone where deterrence vanishes; you’ve simply changed where (i.e., “conditional on what action?”) the deterrence “has its bite.”
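To make the flat region concrete, here is a minimal sketch; the severity scale and the cap are invented for illustration (think of the cap as the “Flagrant 2” rung), not drawn from any actual rulebook:

```python
# Illustrative penalty schedule with a hard cap: every action at or
# above the cap draws the same penalty, so the *marginal* cost of
# escalating past the cap is zero. Severity levels are made up.

def penalty(severity: int, cap: int = 3) -> int:
    """In-game penalty for an action of a given severity."""
    return min(severity, cap)  # everything above the cap looks the same

def marginal_cost(severity: int) -> int:
    """Extra penalty incurred by escalating one more step."""
    return penalty(severity + 1) - penalty(severity)

for s in range(1, 6):
    print(f"severity {s}: penalty {penalty(s)}, marginal cost {marginal_cost(s)}")
# Marginal costs are 1, 1, 0, 0, 0: deterrence vanishes above the cap.
```

Raising `cap` reproduces the point above: the flat region moves higher, but it never disappears.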

And there’s a second problem with “if you do X, you lose” — one that is, if anything, even more fundamental. Everything I’ve said so far implicitly assumes a two-player game. In a (zero-sum)5 two-player game, “you lose” means “your opponent wins,” and since you have exactly one opponent, this is unambiguously bad for you. The fix might fail for other reasons, but at least it’s a punishment. Add a third player and even this breaks down. “You lose” no longer determines who wins — it just removes you from contention. And the question of which remaining player benefits from your removal is now a strategic variable. If you prefer Player C to Player B, and your continued participation is helping B more than C, then losing is not a punishment — it’s a gift to your preferred outcome. “If you break this rule, you lose” becomes, in effect, “if you break this rule, you get to kingmake.”6 The penalty has been transformed from a deterrent into a strategic instrument, and, having assigned a definite/predictable outcome to the violation in question, the rules have no way to prevent (or, somewhat ironically, deter) this type of behavior. They did exactly what they were supposed to do. The problem is that what they were supposed to do “isn’t enough” — or more appropriately, they are not incentive compatible within the game itself.
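A toy version of the kingmaking point, with payoffs and game dynamics that are entirely my own invention (no real game is being modeled):

```python
# Three players: A, B, C. A cannot win, but A's choice to quit or
# play on determines which of B and C does. With the (hypothetical)
# preference C > B, "losing" is strictly better for A than playing.

a_payoff = {"B": 0, "C": 1}  # A's made-up payoff over final winners

def winner(a_quits: bool) -> str:
    # Stipulated dynamics: A's continued participation helps B more
    # than C, so quitting hands the win to C.
    return "C" if a_quits else "B"

print(a_payoff[winner(a_quits=True)])   # quitting pays A 1
print(a_payoff[winner(a_quits=False)])  # playing on pays A 0
```

The rules can declare A a “loser” in both branches; what they cannot control is the differential effect of A’s exit on B and C, which is exactly the quantity A cares about.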

This is not that exotic, of course. In sports, it’s called tanking: a team deliberately loses late-season games to secure a more favorable draft pick or dodge a stronger playoff opponent. In elections, it’s strategic withdrawal: a candidate drops out not because they can’t win, but to determine who among the remaining candidates does. In legislatures, it’s the entire logic of strategic voting and logrolling.

Simple and universal point: whenever “a game” has three or more players, even the declarative “you lose” outcome is no longer necessarily the worst possible outcome. How you lose, and when you lose, and who benefits from your loss are all strategic variables that the rules have handed you.7 The penalty, intended to close the game, has opened it. (Readers of this blog will note the family resemblance to a certain famous theorem about what happens when you have three or more alternatives: it sort of rhymes with “Mia Farrow.” We’ll come back to this.) I want to convince you that this problem is not trivial at all. In fact, I think it’s a deep problem, one that connects to some of the most important results in mathematics and political economy.

The Chessboard, Overturned

Consider chess. Chess is, compared to basketball in the driveway, a remarkably well-specified game. The rules define every legal move, every legal position, and every terminal outcome (checkmate, stalemate, draw by repetition, and so on). Chess even has a formal provision for one action that might seem “outside” the game: resignation. If you tip over your king, the game ends and your opponent wins. Clean, elegant, formally complete. But now imagine a player who, upon finding herself in a losing position, sweeps all the pieces off the board and onto the floor. What happened? Not a resignation — she didn’t tip her king. Not a checkmate. Not a draw. The rules of chess, so carefully specified, have nothing to say about this. And here’s what’s interesting: it’s not obvious what they should say. The most natural response — the one most people jump to — is: “Well, obviously she loses. Flipping the board is just resignation with theatrics. We can infer that she wanted to concede and was simply… efficient about it.” And in a single game of chess, maybe that resolution works well enough. But notice what it’s doing: it’s interpreting a physical action (scattering pieces) as a strategic action (resignation) by reasoning about the player’s intent. The rules of chess say nothing about intent. We’re filling the gap with inference — and inference, as we’re about to see, opens its own can of worms.

The Game Within the Game

Here’s where it gets interesting (Ed: …Finally?). Suppose our chess player isn’t playing a single game. She’s playing a best-of-seven match. She’s down a game, and the current game — game 3 — is going badly. She has two options within the formal rules: play on to the bitter end, or resign. But these two options are strategically different in the context of the match, even though they produce the same outcome in game 3 (she loses). Playing to the bitter end reveals information — about her style, her preparation, her responses to specific positions — that her opponent can exploit in games 4 through 7. Resigning early conceals that information. Accordingly, the timing and manner of her concession is itself a strategic variable, one that the rules of chess (which govern individual games) don’t acknowledge at all. The match is a game; each game within the match is a game; and the two levels interact in ways that neither level’s rules fully capture. Now: is it “legitimate” for a player to play badly — or concede early — in game 3 in order to improve her chances in games 4, 5, 6, and/or 7? While I play chess, I’m not serious at it. (Ed: You mean you’re not that good at it?) That said, I suspect that most chess players would say this offends the spirit of competition (to understand why, ask yourself: “does anybody think being described as tanking something is a compliment?”). But the rules of a best-of-seven match, as typically specified, say nothing about it. We’re back in the gap between what the rules formally cover and what is physically (and strategically) possible.

What Poker Understands

This is a good moment to note that at least one common game does understand the problem we’re circling around — or at least one important dimension of it. In standard Texas Hold’em, when all of your opponents fold, you win the pot. You may then show your cards to the table, but you are explicitly not required to. This is a rule about information, and it is one of the rare cases where a game’s designers grasped that the strategic management of private information is itself part of the game. Whether you show a bluff, show a strong hand, or show nothing at all is a decision with consequences for future hands — and the rules protect your right to make that decision. Most rule systems are not nearly this sophisticated. They either ignore the information dimension entirely (chess doesn’t care, or, more accurately, is realistic about the fact that it “can’t measure” what you were “thinking” about doing) or — and this is the case that will matter most for us — they try to compel disclosure, and immediately discover that compelled disclosure is extraordinarily hard to enforce.

Belichick’s Injury Reports (and Other Mendacities)

Which brings us to the NFL, and to a man who made a career out of finding the gaps between what rules say and what rules mean. The NFL requires teams to publicly disclose player injuries before each game. The purpose is transparent: betting markets, opposing teams, and fans should have access to the same basic information about who’s healthy and who isn’t. The rule was designed to “level the playing field” — to prevent teams from gaining a strategic advantage by concealing private information about their own roster. This is, on its face, a reasonable rule. It is also exactly the kind of rule that is most vulnerable to manipulation, because it attempts to regulate something — private information — that the regulator cannot directly observe. The NFL can see what a team reports. It cannot easily verify whether the report is accurate. And so Bill Belichick, with characteristic precision, listed half his roster as “questionable” every single week. Technically compliant. Informationally useless. The rule required disclosure; Belichick disclosed — in a way that conveyed nothing. The spirit of the rule was defeated by the letter of the rule, and the letter couldn’t be tightened without creating new problems. (What does “accurate” mean? Must a team disclose a player’s private medical details? Who adjudicates disagreements about severity?) Notice the irony: the injury disclosure rule was created specifically to prevent teams from “gaming the game” with private information. But the rule itself became the game that got gamed. This isn’t a bug in the NFL’s rule-writing process. I think it’s a theorem — and we’re about to see it again.

Belichick’s Safety

Let me give you a second Belichick example, because one might be an anecdote but two starts to look like a pattern (and, yes, I am both a proud Tarheel and Steelers fan, so I am not “unbiased” with respect to Billy B). In a 2003 NFL game, Belichick’s New England Patriots were trailing the Denver Broncos by a single point late in the game. Facing a 4th down deep in their own territory, the conventional play would be to punt. But Belichick did something that, at the time, struck many observers as bizarre: he had his punter intentionally run out of the back of the end zone, conceding a safety — two points for Denver. Why? Because a safety, unlike a punt, is followed by a free kick from the 20-yard line, which typically travels farther and is harder to return than a punt from deep in your own end zone. Belichick wasn’t breaking any rules. He was following them. But he was exploiting a feature of the rule mapping — the relationship between safeties and free kicks — that the rules’ designers almost certainly never intended as a strategic option. The rules said: “if a safety occurs, the following happens.” They assigned an outcome to the event. And that assigned outcome, in the right circumstances, made deliberately causing the event profitable. This is not a curiosity. This is a theorem.

Gibbard-Satterthwaite, in Football Pads

The Gibbard-Satterthwaite theorem, one of the foundational results in social choice theory, tells us (informally) that any sufficiently rich system of rules that isn’t dictatorial — that is, any system where more than one person’s actions matter — is manipulable. There exists some situation in which some agent can achieve a better outcome by acting contrary to the system’s intended purpose. Both of Belichick’s exploits are Gibbard-Satterthwaite in football pads. The NFL’s rules are “sufficiently rich” (they cover a complex, multi-agent strategic environment) and non-dictatorial (both teams’ actions matter). So the theorem guarantees that there exist situations where a team can benefit by doing something the rules didn’t envision as a strategic choice. The intentional safety was always there, latent in the rule book, from the moment the safety/free kick provision was written. The meaningless injury report was always available, from the moment the disclosure rule was written. It just took decades — and a coach who modeled the game differently than the rule designers — to find them. And notice the computational point: these exploits were hard to find. Not hard in the sense of requiring genius (though Belichick is a genuinely brilliant strategic mind), but hard in the sense that the space of possible rule interactions is vast, and most people never think to search it. The manipulability is guaranteed by theorem; the discovery of any particular manipulation is a search problem of potentially enormous complexity.
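The theorem guarantees that a profitable misreport exists somewhere, but finding one is a search problem — so it may help to watch a search succeed. Here is a small sketch using the Borda count with a lexicographic tie-break (a resolute rule, as the theorem requires); the five-voter preference profile is my own construction for illustration:

```python
from itertools import permutations

ALTS = ("A", "B", "C")

def borda_winner(profile):
    """Borda count with lexicographic tie-breaking, so the rule is
    resolute (it always picks exactly one winner)."""
    scores = {a: 0 for a in ALTS}
    for ranking in profile:
        for points, alt in enumerate(reversed(ranking)):
            scores[alt] += points
    return min(ALTS, key=lambda a: (-scores[a], a))

# A hypothetical sincere profile: three voters rank A>B>C, two rank
# B>C>A. Sincerely, B wins (Borda scores: A=6, B=7, C=2).
sincere = [("A", "B", "C")] * 3 + [("B", "C", "A")] * 2
honest = borda_winner(sincere)

# Brute-force search: can any single voter do better by lying?
for i, true_pref in enumerate(sincere):
    for lie in permutations(ALTS):
        outcome = borda_winner(sincere[:i] + [lie] + sincere[i + 1:])
        if true_pref.index(outcome) < true_pref.index(honest):
            print(f"voter {i} reports {lie}: winner is {outcome}, "
                  f"not the sincere winner {honest}")
```

The misreport the search surfaces is the classic “burying” move: the manipulating voter demotes the sincere winner to last place.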

The Trilemma

Now let’s go back to our ball-taker and our chessboard-flipper and think about what a game designer could do about these “outside” actions. I think there are exactly three options, and none of them is satisfactory. 

Option 1: Leave the action outside the game. The rules simply don’t address it. This is the status quo for chessboard-flipping. The game is formally incomplete: there exist feasible actions with no assigned outcome. This might seem acceptable — we handle these situations with social norms, tournament rules, or just the general understanding that you’re not supposed to do that. But “not supposed to” is doing an enormous amount of work here, and it’s not part of the formal game. We’ll come back to this. 

Option 2: Assign the action a bad outcome. “If you flip the board, you lose.” This is the most natural response, and it’s what most rule systems try to do — define penalties for rule-breaking. But here’s the problem: the moment you assign an outcome to an action, you’ve brought that action into the game. It’s now part of the strategy space. And once it’s part of the strategy space, it interacts with everything else. Belichick’s safety is exactly this: the rules assigned an outcome to the “bad” event of a safety, and that assigned outcome, in interaction with the rest of the rules, made the event strategically attractive. The injury report is a subtler version: the rules assigned a requirement (disclose) with a penalty (fines, draft picks) for noncompliance — and in doing so created a new strategic question (how to comply in form while defecting in substance) that didn’t exist before the rule did.

Worse, any newly incorporated action can be used as a threat. “Trade with me or I flip the board” is now a meaningful strategic statement, because “flip the board” has a formally defined consequence. You’ve just enriched the game in ways you may not have intended. And recall the multiplayer problem from earlier: even the seemingly nuclear option — “if you do this, you lose” — is only a deterrent when the game has exactly two players. The moment there are three or more, “you lose” becomes a strategic instrument rather than a punishment, because the violator gets to influence who among the remaining players benefits. This is not a minor caveat. Most real-world “games” — legislatures, markets, regulatory environments, organizations — have many players. In these settings, Option 2 doesn’t just fail because penalties create new strategic possibilities. It fails because the maximum penalty — total defeat — is itself a strategic resource. The penalty schedule cannot be made severe enough to deter a player who would rather kingmake than compete. There is, quite literally, no “bad enough” outcome to assign, because the badness of the outcome for the violator is not the relevant quantity — the relevant quantity is the differential effect of the violation on the remaining players, and the rules cannot control this without controlling the entire game, which is the problem we started with.

This, I think, is where the blog’s namesake result makes its quiet entrance (Ed: I just knew you were into “branding”). The two-player case is well-behaved: there’s one opponent, preferences are opposed, and penalties can work (modulo the ceiling problem). Add a third player — or a third alternative — and the structure changes qualitatively. Stability dissolves. Manipulation becomes ubiquitous. Three implies chaos.

Option 3: Define an external enforcement mechanism. “There’s a referee, and the referee handles situations the rules don’t cover.” This works — until you realize that the referee’s judgment is itself a rule system. What are the rules governing the referee? Can a player “go outside” the referee’s rules? If so, you need a meta-referee. And meta-meta-referee. You’ve begun an infinite regress — or, if you prefer, you’ve acknowledged that the game is embedded in a larger game, which is embedded in a larger game, and somewhere the buck has to stop at a system that is, itself, formally incomplete.

Why This Matters (or: Gödel Was Here)

If the “trilemma” above reminds you of something, it should (Ed: Oh goodness, is this another “truels post“?). Gödel’s incompleteness theorems tell us, roughly, that any formal system rich enough to express basic arithmetic cannot be both consistent and complete. There will always be true statements that the system cannot prove from within.

The analogy to games is, ahem, more than an analogy (is there a word for “X is analogous to X,” beyond “tautological”?). (Ed: Not that tautologies have ever stopped you before.) A “self-enforcing” rule is one where breaking that rule is never incentive-compatible, given the other rules of the game. This is another way of understanding “internal consistency,” for those of you playing at home.

To verify that a rule is self-enforcing, you need to check it against all other rules and all possible strategies — which is itself a statement within the system. And for any sufficiently rich game, the system cannot verify all such statements from within. There will always be some actions, some contingencies, some interactions that the rules cannot “reach” without expanding the system — at which point you’ve created a new system with new gaps. A game, in other words, cannot fully know its own rules. It cannot certify, from within, that all of its rules are self-enforcing. There will always be a kid who can pick up his ball and go home, and the game — qua game — has nothing to say about it.

A more tangible way of understanding this: any interesting game must contain some rule X such that the other rules, the ones that define “winning the game,” sometimes give you an incentive to break rule X.

I now dub this the Billy B Rule, and it extends far beyond American Football, Chapel Hill, and indeed time and space itself! (Ed: Seriously? ….Oh, what the hell, if they’re still reading, let’s go for it, I guess.)

The Impossibility Migrates

I want to close (Ed: What? Oh, I thought you were just getting started.) by suggesting that what we’ve identified is not merely a curiosity about games. It’s a conservation law. The trilemma says that the “gap” in a rule system — the space between what the rules formally cover and what strategic agents can actually do — cannot be eliminated. It can only be relocated.

You can leave it as incompleteness (Option 1), and accept that some actions have no formal consequence.

You can try to close it by assigning penalties (Option 2), and discover that the gap reappears as manipulation — new strategic possibilities created by the very rules you wrote to prevent the old ones.

Or, you can hand it off to an external enforcer (Option 3), and watch the gap reappear one level up.

In any event, the problem is conserved; it just changes form. This pattern — call it the migration of impossibility — shows up far beyond sports and parlor games.

The “Hook”: Consider algorithmic fairness. There’s a well-known result (due to Kleinberg, Mullainathan, and Raghavan, and independently to Chouldechova) showing that two natural fairness criteria — error-rate balance and predictive parity — are generally incompatible when different groups have different base rates of the behavior the algorithm is trying to predict. This is, in its structure, an impossibility theorem of the same species as the ones we’ve been discussing: you can’t have everything you want, simultaneously, within the system.
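One quick way to see the tension numerically is the accounting identity behind Chouldechova’s version of the result: FPR = p/(1−p) × (1−PPV)/PPV × (1−FNR), where p is the group’s base rate. If two groups share the same PPV (predictive parity) and the same FNR (one half of error-rate balance) but differ in p, their FPRs must differ. The specific numbers below are purely illustrative:

```python
# Implied false-positive rate, given base rate p, positive predictive
# value (PPV), and false-negative rate (FNR). Derived from
# PPV = TP / (TP + FP), with TP = p*(1-FNR) and FP = (1-p)*FPR.

def implied_fpr(base_rate: float, ppv: float, fnr: float) -> float:
    p = base_rate
    return (p / (1 - p)) * ((1 - ppv) / ppv) * (1 - fnr)

ppv, fnr = 0.8, 0.3  # equalized across both groups (made-up values)
print(implied_fpr(0.5, ppv, fnr))  # group with 50% base rate: ~0.175
print(implied_fpr(0.2, ppv, fnr))  # group with 20% base rate: ~0.044
# Equal PPV + equal FNR + unequal base rates => unequal FPR.
```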

Now, in some recent work that Maggie Penn and I have been doing, we noticed something. The classical impossibility results hold behavior fixed — they assume that people’s base rates of compliance (or recidivism, or default, or whatever the algorithm is classifying) are just facts about the world, not choices that respond to incentives.

But of course they are choices that respond to incentives, and in particular they respond to the stakes of classification — the severity of the fine, the length of the sentence, the terms of the loan. Once you recognize that base rates are endogenous — that they’re equilibrium objects shaped by the algorithm and its consequences — an escape route from the impossibility opens up. You can simultaneously achieve error-rate balance and predictive parity by adjusting the stakes of classification to induce equal base rates across groups.

Cool, …problem solved, right?

Not quite. Here comes the conservation law. The statistical impossibility disappears, but it migrates: achieving both fairness criteria requires that identical classification decisions carry different consequences for different groups. You’ve moved the inequality from the distribution of algorithmic outcomes to the severity of consequences attached to those outcomes. The impossibility doesn’t vanish. It changes address. And it gets worse — in a way that connects directly to the penalty-ceiling problem. In some cases, equalizing base rates under equal stakes requires penalizing compliance — effectively setting negative incentives that suppress the behavior the system is supposed to encourage.

That’s the fairness equivalent of flattening the penalty gradient between assault and murder. You’ve “equalized” the treatment, but you’ve destroyed the incentive structure that was generating the behavior you wanted. The gap migrates, again, from one form of unfairness to another.

I think this is a general feature of any system that tries to regulate strategic behavior. The gap between what the rules intend and what agents can do is not a deficiency of any particular set of rules. It is a structural property of the relationship between rules and the strategic agents who inhabit them. Fix it here, and it appears there. Close this loophole, and you open that one. The impossibility is conserved.

A Provocation for Next Time

So if the impossibility always migrates — if every fix to a rule system creates new gaps somewhere else — then what does this mean for the biggest, most complicated “games” we play? What does it mean for institutions, bureaucracies, governments? It means, I’ll argue, that every well-functioning institution is riddled with informal patches — norms, workarounds, conventions, and practices that exist precisely to handle the cases the formal rules can’t reach.

These patches are the institution’s solution to the migration problem: every time a gap was discovered, someone — a bureaucrat, a judge, a middle manager — found a way to cover it, and that patch became part of the operating system. The institution looks messy from the outside because it is messy. It has to be. The formal rules can’t do the job alone, and the patches are where the real work happens. And it means that anyone who looks at those patches and sees only waste, inefficiency, or evidence of a “deep state” is making a very specific error: they’re assuming the game is complete, when we just showed it can’t be.

They’re treating the messiness as a bug, when it is — often, not always, but far more often than reformers tend to appreciate — a feature. There’s also, I think, a deeper thread here about information — about the fact that rules governing who knows what, and who must disclose what to whom, are a particularly fragile species of rule. Poker understands this; the NFL tried and largely failed; and some of our most important legal infrastructure (think §6103) exists precisely at this fault line. But all of that is for next time. (Ed: Oh, you’ll be back…like in 2016? Sheesh.)

For Now, I Leave You with This

In the 1983 film WarGames, a military supercomputer called the WOPR is tasked with simulating global thermonuclear war. It plays every possible scenario — every first strike, every retaliation, every escalation — searching for one that ends in victory. It finds none. After cycling through the entire game tree, it arrives at a conclusion: “A strange game. The only winning move is not to play.” (Ed: I could make a joke about your blog, but I think you already see it, dammit.)

The WOPR, in other words, did what the trilemma says can’t be done: it verified, from within the game, that the game has no self-enforcing solution. It searched the space, hit every penalty ceiling, found every flat region at the top, discovered that every “winning” move triggers a retaliation that migrates the problem somewhere worse — and concluded that the game is, in our terms, formally incomplete.

There is no outcome the rules can assign to “global thermonuclear war” that makes initiating it incentive-incompatible (Ed: Thank goodness, …right?), because the penalty structure maxes out at “everybody dies,” and at that ceiling, the marginal cost of escalation is zero. Of course, the WOPR had an advantage we don’t: it could search the entire game tree. For the rest of us — playing games whose rules we can’t fully verify, in institutions whose patches we can’t fully see, against opponents whose strategies we can’t fully anticipate — the only honest starting point is to admit that the game is bigger than its rules. With that, I leave with one (dated, but memorable, and timeless) question: “Shall we play a game?”

  1. He didn’t inform me of this, but my friend and coauthor Tom Clark essentially encouraged me to write this up some months ago. ↩︎
  2. Note the “subtle shift” here: I moved from “basketball” to “basketball as governed by” (or, to quote James Scott’s awesome work, “made legible by”) a specific institution that, ahem, “provides basketball to the public for their enjoyment and remuneration.” ↩︎
  3. And here’s an additional wrinkle: the NBA’s rules say that no team may be reduced below five players. If a player fouls out (six personal fouls), but there are no eligible substitutes, that player stays in the game and is charged with a personal foul, a team foul, and a technical foul for each subsequent infraction. So ejections are actually the only mechanism that can force a team below five — which means our strangler has, in addition to getting himself tossed, potentially inflicted a roster-count penalty on his own team. But note: this is the same roster-count penalty he’d have inflicted with a garden-variety Flagrant 2 for an overly aggressive screen. The punishment doesn’t scale with the severity of the act. (And even the “stay in the game with a technical” rule is itself manipulable. If your player just picked up his sixth foul with 30 seconds left in a close game, is the team better off keeping him on the court — where every subsequent foul triggers another technical free throw for the opponent — or just… letting him leave and playing 4-on-5? The rule was designed to protect teams from being shorthanded. But in the right circumstances, the “protection” costs more than the problem it solves. We’ll see this pattern again.) ↩︎
  4. Speaking of “ceilings,” I am tempted to ask what Naismith would have thought of physical “ceilings” in laying out the initial rules of basketball. I don’t know if he was a physicist or even that “sophisticatedly rational” about it, but I would suppose that he would have eventually agreed that “having a ceiling over the game,” where you could throw a ball up high to avoid defenders’ hands, would “only complicate” the eventual performance (and adjudication) of his new game. This makes me think of both the XFL and Arena Football: both are fun, partly because they borrowed some of the elements of an “already legible sport” (i.e., American Football) and “slightly modified” the nature of the constraints in that sport. ↩︎
  5. For simplicity, let’s just think about “games” where there can be no more than one winner. That’s a lot looser than “zero-sum” in a formal sense, but with two players, it’s basically without loss of interesting generality (and, yes, I am an American, and I do (in my heart) think “ties are boring.” But that’s maybe why, or because, I find faculty meetings generally unsatisfying. There’s a lot in there, I know.) ↩︎
  6. I think the idea that “kingmaking” is a recognized verb should make all of us think more about the nature of language in both analytical and sociological terms. ↩︎
  7. I say “the rules” have “handed you” this to differentiate it from very real, “expressive” feelings of guilt or failure from being labeled “a loser.” Just ask our president DJT. The only thing he hates more than rules is being (or, it seems, being associated with) “a loser.” ↩︎

Who’s Got The Power? Measuring How Much Trump Went Banzhaf On Tuesday

The Democratic and Republican Parties each use a weighted voting system to choose their presidential nominees.  This only matters when no candidate has a majority of the delegates, and the details are complicated because the weight a particular candidate has is actually a number of (possibly independent) delegates.  Leaving those details to the side, let’s consider how much Donald Trump’s wins on Tuesday, April 26th, “mattered.”  The simplest measure of success, for each candidate, is how many additional delegates each won.  As a result of Tuesday’s primaries, Trump is estimated to have picked up 110 delegates, Senator Cruz is estimated to have picked up 3, and Governor Kasich is estimated to have picked up 5.

A key concept in weighted voting games is that of power.  There are many ways to measure power, but one of the most popular is called the Banzhaf index.

If there are N total votes, and a candidate “controls” K of those votes, the Banzhaf index measures the probability that the candidate in question will cast the decisive vote: that is, across every way the other candidates could cast their N-K votes, how often the candidate’s votes determine the winner. (I’m skipping some details here.  For the interested, the most important detail is that the index presumes that the other candidates will randomly choose how to vote.)

A higher power index implies that the candidate is more likely to determine the outcome. What is key is that the power index for a candidate with K votes out of N is generally not equal to K/N.  For example, if a candidate has over half of the votes,[1] then that candidate’s Banzhaf index is equal to 1 (and those of all other candidates are equal to zero, and we’ll see that come up again below), because that candidate will always cast the decisive vote.
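Because the index is defined by counting coalitions, it is straightforward to compute by brute force for a small number of candidates. Here is a minimal Python sketch (the post offers Mathematica code by email; this is a stand-in, not that code), using the pre-Tuesday delegate counts:

```python
from itertools import combinations

def banzhaf(weights, quota=None):
    """Normalized Banzhaf index by brute-force coalition enumeration."""
    players = list(weights)
    if quota is None:
        quota = sum(weights.values()) // 2 + 1  # simple majority rule
    swings = {p: 0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for r in range(len(others) + 1):
            for coalition in combinations(others, r):
                votes = sum(weights[q] for q in coalition)
                # p "swings" a coalition that loses without p but wins with p
                if votes < quota <= votes + weights[p]:
                    swings[p] += 1
    total = sum(swings.values())
    return {p: s / total for p, s in swings.items()}

# Pre-Tuesday GOP delegate counts
delegates = {"Trump": 846, "Cruz": 548, "Kasich": 149, "Rubio": 173,
             "Carson": 9, "Bush": 4, "Fiorina": 1, "Paul": 1, "Huckabee": 1}
print({p: round(v, 4) for p, v in banzhaf(delegates).items()})
# Trump: 0.5; Cruz, Kasich, and Rubio: 0.1667 each; everyone else: 0.0
```

The enumeration visits 2^(n-1) coalitions per player, so this only works for a handful of players; for larger games one would use generating-function methods instead.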

So, back to Tuesday.  Here is the breakdown of how the GOP candidates’ delegates translated into “Banzhaf power” before Tuesday’s primaries.

Candidate        Delegates         Banzhaf Power
Donald Trump     846  (48.85%)     0.5
Ted Cruz         548  (31.64%)     0.1667
John Kasich      149  (8.6%)       0.1667
Marco Rubio      173  (9.99%)      0.1667
Ben Carson       9    (0.52%)      0
Jeb Bush         4    (0.23%)      0
Carly Fiorina    1    (0.06%)      0
Rand Paul        1    (0.06%)      0
Mike Huckabee    1    (0.06%)      0
Total            1,732

Going into Tuesday’s primaries, Trump held just under a majority of the delegates and exactly half of the power.  More interesting in this comparison is that Marco Rubio’s power was still significant: in fact, equal to the individual powers of Kasich and Cruz.

Even though Rubio and Kasich each had less than a third of Cruz’s delegates, their voting power as of Monday was equal to Cruz’s. This is because Rubio, Kasich, and Cruz could defeat Trump if and only if their delegates voted together, regardless of how the other delegate-controlling candidates’ delegates voted.  In other words, Carson, Bush, Fiorina, Paul, and Huckabee truly had—as of Monday (and today)—no bargaining power at a contested convention.

However, after Tuesday’s results, the following happened:

Candidate        Delegates         Banzhaf Power
Donald Trump     956  (51.68%)     1
Ted Cruz         551  (29.78%)     0
John Kasich      154  (8.32%)      0
Marco Rubio      173  (9.35%)      0
Ben Carson       9    (0.49%)      0
Jeb Bush         4    (0.22%)      0
Carly Fiorina    1    (0.05%)      0
Rand Paul        1    (0.05%)      0
Mike Huckabee    1    (0.05%)      0
Total            1,850

By securing a majority of the delegates allocated so far, Trump’s power jumped from 0.5 to 1 and all of his opponents’ powers dropped to zero.  If the convention occurred today, they would be powerless to stop Trump.

Now, suppose that the candidates had votes equal to the actual (popular) votes, rather than delegates, they have received.  If the convention were held today under such rules, the result would be the following:

Candidate        Popular Votes            Banzhaf Power
Donald Trump     10,121,996  (39.65%)     0.5
Ted Cruz         6,919,935   (27.10%)     0.1667
John Kasich      3,677,459   (14.40%)     0.1667
Marco Rubio      3,490,748   (13.67%)     0.1667
Ben Carson       722,400     (2.83%)      0
Jeb Bush         270,430     (1.06%)      0
Jim Gilmore      2,901       (0.01%)      0
Chris Christie   55,255      (0.22%)      0
Carly Fiorina    36,895      (0.14%)      0
Rand Paul        60,587      (0.24%)      0
Mike Huckabee    49,545      (0.19%)      0
Rick Santorum    16,929      (0.07%)      0
Total            25,530,125

If the popular votes were the basis of the GOP nomination and the convention were held today, then the candidates would still have the same “powers” as they did prior to Tuesday’s primaries.  Thus, on Tuesday, we arguably truly witnessed the effect of the “delegate system.”

As a final note, this power calculation clearly indicates something that I think is underappreciated about multicandidate races in majority rule settings.  To break Trump’s lock on the race, it is unimportant which candidate (other than Trump) an “unpledged” delegate decides to support.  Right now, if and only if at least 62 unpledged delegates (and I have no idea how many of them are left right now) decide to support someone “other than Trump,” Trump’s power drops below 1.  In addition to (and in line with) the fact that it doesn’t matter how those delegates allocate their support across the other candidates, if 62 such delegates appeared in the hypothetical convention tomorrow in Cleveland, the powers of the candidates would be as follows:

Candidate        Delegates         Banzhaf Power
Donald Trump     956  (50%)        0.97
Ted Cruz         613  (32.06%)     0.004
John Kasich      154  (8.05%)      0.004
Marco Rubio      173  (9.05%)      0.004
Ben Carson       9    (0.47%)      0.004
Jeb Bush         4    (0.21%)      0.004
Carly Fiorina    1    (0.05%)      0.004
Rand Paul        1    (0.05%)      0.004
Mike Huckabee    1    (0.05%)      0.004
Total            1,912
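The 62-delegate threshold is just arithmetic, and can be sketched in a couple of lines:

```python
trump, total = 956, 1850  # post-Tuesday delegate counts

# Trump retains a strict majority so long as trump > (total + x) / 2,
# i.e., so long as fewer than x = 2 * trump - total new "anti-Trump"
# delegates appear at the convention.
x = 2 * trump - total
print(x, trump / (total + x))
# 62 new delegates leave Trump with exactly half: 956 / 1,912 = 0.5
```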

Conclusion. There are two “math of politics” points in here. The first is that votes/delegates are definitely not a one-to-one match: indirect democracy is distinct from direct democracy—it’s always important to remember that.  The second, and more “math-y,” is that, when people have different numbers of votes, a person’s share of the votes is generally not equal to his or her share of the voting power.[2]

With that, I leave you with this.

PS: If you would like (Mathematica) code to calculate the Banzhaf index for this and other situations, email me.

___________

[1] I am assuming for simplicity throughout, in line with the rules of the GOP and Democratic Party, that the collective decision is made by simple majority rule.  One can calculate the Banzhaf index for any supermajority requirement as well.  As the supermajority requirement goes up, the power indices of all candidates with a positive number of votes converge to equality (guaranteed to occur when the decision rule is unanimity).

[2] For a great review of how this is important in the real world, see Grofman and Scarrow (1981), who discuss a real-world use of weighted voting in New York State back in the 1970s.

This Thursday, At 10, FOX News Is Correct

FOX News just announced the 10 candidates who will participate in the first primetime Republican presidential primary debate on August 6, 2015. The top 10 were decided by these procedures.  Given that FOX is arguably playing a huge role in the free-for-all-for-the-GOP’s-Soul that is the race for the 2016 GOP presidential nomination, it is important to consider whether, and to what degree, FOX News “got it right” when they chose “10” as the size of the field. Before continuing, kudos to FOX News for playing this difficult game as straight as possible: the procedures are transparent and simple. Though they have ineradicable wiggle room and space for manipulation, I really think this was an example of how to make messy business as clean as possible.  That said, let’s see how messy it turned out…

In order to gauge how important procedures were in this case, I examined the past 10 polls (data available here) to ascertain, in any given poll, who was in the top 10.[1]  The results are pretty striking in their robustness.  In spite of there being 19,448 ways to pick 10 from 17, the top 10 candidates in the final poll were in the top 10 of each of the 10 polls in 96 cases out of a possible 100.  Furthermore, in no poll was more than one of the chosen participants outside of the top 10.  Thus, there were 4 polls in which one of the debate participants was not ranked in the top 10, and 2 of these were the oldest pair in the series.

More telling, perhaps, is the fact that the smallest consistent “non-trivial debate group”—the smallest group of candidates whose members never ranked below the group’s size in any of the 10 polls—is 3: Donny Trump, Jeb Bush, and Scott Walker composed the top 3 of each of the last 10 polls (that’s actually true of the last 15 polls).[2]

While I often like to be contrary in these posts, and I thought I might have an opportunity here, I have to say that, in the end, FOX News got this one right—the only direction to go in terms of tuning the size of the debate would have been down (to either 8 or 3, but I will leave 8 for a different post).  Given that logistics are the only real reason for a media outlet[3] to putatively and presumptively winnow the field of candidates in an election campaign, FOX News was, in my opinion (and possibly by luck), correct in setting the number at 10.

And, with that, I leave you with this.

______

[1] The oldest of these concluded two weeks ago, on July 20th.

[2] The reason I refer to a non-trivial debate group is that Donald Trump composes the smallest consistent debate group: he has held the number 1 spot in the past 16 polls. I will leave to the side the question of whether Trump debating himself would be informative or interesting.  I just don’t know if he is enough of a master debater, though I suspect that he loves to master debates.  Who doesn’t?

[3] Oh, yeah, I forgot to mention that Facebook is involved with organizing the debate. See what I did there?!?

The True Trump Card: You Can’t Buy Credibility

The rise of mega-donors has been an important storyline in the unfolding drama of the 2016 presidential election (for example, see here).  The presence of these donors in the political game (or at least their visibility) is partially the result of the Supreme Court’s decision in Citizens United.  But more interesting is whether the rise of these mega-donors has caused the explosion of seemingly viable (mostly Republican) contenders for the 2016 election.

The argument that Citizens United has caused the explosion in candidates is admittedly appealing.  As Steven Conn describes this argument in the Huffington Post,

Citizens United has created a new dynamic within the Republican Party. Call it the politics of plutocratic patrons, and at the moment it is causing the GOP to eat itself alive.

Continuing, Conn notes that the argument

works something like this: With the caps lifted on spending, any candidate who can find a wealthy patron can make a perfectly credible run at the nomination.

I’ve added the underline because this is where “the math” gets interesting.  If by perfectly credible, one means “capable of spending lots of money,” then yes, I agree.  That was actually always true: the right of an individual (i.e., a “wealthy patron”) to buy advertisements for any political issue/candidate has never been effectively curtailed.  Rather, the right of individuals to contribute without limit to organizations that can then do so has been, in fits and starts, regulated.

More importantly, though, the fact that anyone can do so now does not mean that wealthy patrons can guarantee that any candidate can make a “perfectly credible run” at the nomination.  As Conn notes, Foster Friess is bankrolling Santorum’s 2016 bid.  …Does anyone think that Rick Santorum is a perfectly credible candidate for the GOP nomination?

Maybe Foster Friess.

No, Rick Santorum is not going to win the GOP nomination.   Neither is Rick Perry. Neither is Chris Christie.  Neither is Carly Fiorina.  Neither is Bobby Jindal.  Of course, I might be wrong on any one of those five.  But I will assuredly be right on at least four.  In fact, if I wanted to type enough, I could be right about no fewer than 15 people who are currently running for the GOP nomination not winning it. (Evidence?  For the latest, see here.)

Simply put, if there are 16 “perfectly viable” candidates for the GOP nomination, then I’m throwing my hat in the ring, too.  WHY NOT?

Look, a wealthy donor can get you in the media.  That is easy, to be honest, if you have the money.  To be a credible candidate, you have to have a chance of winning.  Only one can win.  Lots can spend.  In social science, we often describe this kind of competition as an “all-pay auction.” In an all-pay auction, the highest bidder gets the prize after paying his or her bid.  All of the other bidders pay their bids and don’t get a prize.  It is a stinky, foul game.  (Kind of like running for the presidency.)

In the mega-donor world, the donors are now the bidders, and we are to believe that they want to create viable candidates through their monies spent.  But this is at odds with two points, one empirical and one theoretical.  The empirical point is that these mega-donors are often successful investors and businesspeople.  The theoretical point is that, when there is a single prize, the all-pay auction should not generally see any positive bid from more than two bidders.[1]
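That theoretical point can be checked numerically. In the textbook complete-information all-pay auction with one prize and valuations v1 > v2 ≥ v3, the standard equilibrium has only the top two bidders active: bidder 1 mixes uniformly on [0, v2], and bidder 2 mixes so that F2(b) = (v1 - v2 + b)/v1, with an atom at zero. A sketch (the valuations are made up for illustration) verifying that the third bidder can do no better than staying out:

```python
# Hypothetical valuations of the single prize; only the ordering matters.
v1, v2, v3 = 10.0, 6.0, 5.0

# Equilibrium bid distributions (CDFs) of the two active bidders on [0, v2]:
F1 = lambda b: b / v2                 # bidder 1: uniform on [0, v2]
F2 = lambda b: (v1 - v2 + b) / v1     # bidder 2: atom at 0, then uniform

def u3(b):
    """Bidder 3's expected payoff from a deviation bid b: win only if b
    beats both rivals' bids, but pay b no matter what."""
    return v3 * F1(b) * F2(b) - b

# Scan a grid of deviation bids in [0, v2]; bids above v2 are dominated.
best = max(u3(i * v2 / 1000) for i in range(1001))
print(round(best, 10))  # 0.0 -- bidding zero (sitting out) is a best response
```

Every positive bid loses money for bidder 3 in expectation, which is exactly the footnote’s claim that the third-highest (and lower) bidders should not bid at all.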

These mega-donors have the real-world experience to understand the theoretical point.  …So what are they thinking?

Aside from misunderstanding the game (which cannot explain all of the 14 or so “out of equilibrium donors” under the simplistic all-pay model), there are two immediate explanations.  The first is vanity: these donors want to play with the “big kids,” have a roll in the hay with the DC cognoscenti, etc.  While I think that explanation has some purchase, it is both unsatisfying and seems too simple for billionaires.

Accordingly, the second is that some or all of these donors are playing the long game with the real contenders.  You see, what the all-pay auction analogy to multicandidate elections misses (among assuredly other things) is that the auction is actually for multiple prizes—each voter’s support is of (slightly) different value to each bidder, because a vote that I do not buy might go to any of several different candidates.

To make this concrete, suppose for simplicity that a donor supported some new candidate, “Charlie,” with money spent in a way that bought a bunch of votes exclusively from nativist (anti-immigration-reform) voters.  That would hurt some GOP candidates (such as Donny Trump, who is anti-immigration-reform) more than others (such as Jeb Bush). If I, as a mega-donor, am in favor of Trump not winning the nomination, supporting Charlie might be much more effective in the multicandidate, winner-take-all game of the GOP nomination fight than simply handing that same money to Jeb. (This is because I could take votes away from Trump—for Charlie—that Jeb could not steal away himself, thus causing Jeb to win because Trump loses votes.  This is another instance of the Gibbard-Satterthwaite Theorem.)

As Conn describes the picture, I completely agree with the main point: Citizens United might very well have unleashed a beast upon the GOP hierarchy (at least for now), because it is harder for the party establishment to control mega-donors, who can now be solicited for “simple checks” by super-PACs and 527 groups.  But, I disagree that this is because the new system increases the realm of “viable candidates.”  Rather, it simply lowers the prices of diversion, smoke, and mirrors in the nomination game.

Is that good or bad?  I’ll defer for now, but I’m perfectly willing to say that it’s neither.  It just changes the game—in the end, money matters, but votes matter more.  In other words, to paraphrase Mencken, though the ways may vary according to the institutional details, donors and voters will invariably get the government they want, and they’ll get it good and hard.

With that, I leave you with this.

__________

[1] This is a blog post, so I’m being quick about this.  But the basic idea is that the contestants have some common beliefs about their (generally differing) levels of resources (or valuations of winning) and, with few exceptions, the bidder who is capable and willing to pay the third-highest (or lower) price for the prize will not bid because he or she will not willingly sustain a bid that would win in equilibrium.

How Two People’s Rights Can Do Both People Wrong: Vaccines & (Anti-)Social Choice Theory

Vaccination, both in terms of its social good and the role of government in securing that social good while respecting individual liberty, has been a hot topic lately.  In fact, it’s gone viral. (HAHAHAHA.  Sorry.)  In this short post, I link the debate about vaccination, liberty, and social welfare, with the work of Amartya Sen, a preeminent social choice theorist who won the 1998 Nobel Memorial Prize in Economic Sciences.

The Vaccination Paradox. Suppose that—due to there only being one dose of the measles vaccine available—two families, A and B, each with a single child, a and b, are confronted with choosing which child (if any) to vaccinate against measles.  The choices are “a: vaccinate child a,” “b: vaccinate child b,” “n: vaccinate neither child.”

Family A would prefer that child b get vaccinated because child a has a compromised immune system, but would prefer that child a get vaccinated rather than that neither child get vaccinated.  In other words, Family A‘s preference over the three outcomes is:

b > a > n.

Due to personal beliefs, Family B is opposed to vaccination for anyone, but due to child a‘s situation, prefers that child b get vaccinated rather than child a.  Thus, Family B’s preference over the three outcomes is:

n > b > a.

Now, suppose that a government agency is tasked with choosing whether to vaccinate a child and, if so, which one.  Furthermore, suppose that the government agency is required to respect the families’ wishes with respect to their own children.  That is, if either family prefers having nobody vaccinated to having their own child vaccinated, then their child is not vaccinated (i.e., the government agency is required to grant an “opt-out” exemption to each family).

What would the result be?  The opt-out exemption requirement implies that Family A is decisive with respect to a versus n, so that n will not occur: child a will be vaccinated if child b is not.  Similarly, Family B is decisive with respect to b versus n, so that b will not occur: child b will not be vaccinated. Accordingly, because the government agency cannot choose n, and it cannot choose b, it must choose a.  Because the government agency is required to respect individual rights to opt out, child a will receive the vaccine.

Okay.  But, wait… the government agency has (implicitly) ranked the three possible vaccination choices as

a > n > b,

so that, in spite of both families agreeing that they prefer that child b be vaccinated rather than child a:

b > a,

the government agency—because it is respecting individual rights—must vaccinate child a instead of child b.

This is an example of the Liberal Paradox (or Sen’s Paradox), which states that no policymaking system can simultaneously

  1. be committed to individual rights and
  2. guarantee Pareto efficiency.

This paradox is at the heart of a surprising number of political/social conundrums. One basic reason it emerges is that individual rights are in a sense absolute and not conditioned on the preferences of others.  That is, if Families A and B could somehow write a binding contract and Family B knew/believed Family A‘s preferences, then Family B would agree to sign away their right to decline the vaccination for child b.
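The deduction above is mechanical enough to script. A small sketch (my own encoding of the example, with lower rank meaning more preferred):

```python
# Outcomes: "a" = vaccinate child a, "b" = vaccinate child b, "n" = neither.
pref_A = {"b": 0, "a": 1, "n": 2}  # Family A: b > a > n
pref_B = {"n": 0, "b": 1, "a": 2}  # Family B: n > b > a

def respect_right(feasible, pref, pair):
    """The family is decisive over `pair`: eliminate the option it disprefers."""
    x, y = pair
    worse = y if pref[x] < pref[y] else x
    return feasible - {worse}

feasible = {"a", "b", "n"}
feasible = respect_right(feasible, pref_A, ("a", "n"))  # A's right drops n
feasible = respect_right(feasible, pref_B, ("b", "n"))  # B's right drops b
print(feasible)  # {'a'}: the only outcome compatible with both rights

# ...and yet both families rank b above a, so "a" is Pareto-dominated:
assert pref_A["b"] < pref_A["a"] and pref_B["b"] < pref_B["a"]
```

Respecting the two rights whittles the feasible set down to the one outcome that both families agree is worse than an available alternative, which is Sen’s paradox in miniature.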

I’ll leave this here, but my limited take-away point is this: individual rights are important, but even in situations in which their definition seems straightforward, there is no free lunch: individual rights can come into conflict with social welfare.  That’s not saying that individual rights should be sacrificed, of course.  But it is saying that preserving individual rights does not always maximize social welfare.

On The Possibility of An Ethical Election Experiment

The recent events in Montana have sparked a broad debate about the ethics of field experiments (I’ve written once and twice about it, and other recent posts include this letter from Dan Carpenter, this Upshot post by Derek Willis, and this Monkey Cage post by Dan Drezner).  I wanted to continue a point that I hinted at in my first post:

[T]he irony is that this experiment is susceptible to second-guessing precisely because it was carried out by academics working under the auspices of research universities.  The brouhaha over this experiment has the potential to lead to the next study of this form—and more will happen—being carried out outside of such institutional channels.  While one might not like this kind of research being conducted, it is ridiculous to claim that it is better that it be performed outside of the academy by individuals and organizations cloaked in even more obscurity.  Indeed, such organizations are already doing it; at least this kind of academic research can provide us with some guess about what those other organizations are finding.

Personal communications with colleagues and readers indicated that Paul Gronke was not alone in interpreting my message in that passage as something like “well, others intervene in elections in unethical ways, so scholars don’t need to worry about ethics.”  That was not my intent.  Rather, I was trying to make the point that interventions by academic researchers are more likely to be transparent and, accordingly, capable of being judged on ethical grounds, than interventions by others.  Of course, that is a contention with which one might disagree, but I’ll take it as plausible for the purposes of the rest of this post.[1]

Reflecting further on the ethics of field experiments led me to a classical social choice result known as the liberal paradox, first described by Amartya Sen.  The paradox is that respecting individual rights can lead to socially inferior outcomes.  The secret of the paradox is that sometimes we have preferences over others’ actions as well as our own (also known as “nosy preferences”).

The link between the paradox and the ethics of experimenting on elections can be made in the following simple way.  Let’s choose between four possible worlds, depending on whether scholars and/or political parties do field experiments on elections, and let’s take my assertion about the value of open academic research as given, so that “society’s preference” is as follows:[2]

  1. Nobody does any field experiments on elections (the “best” option),
  2. Scholars do field experiments on elections, political parties do not,
  3. Both scholars and political parties do field experiments on elections, and
  4. Partisan researchers do field experiments on elections, scholars do not (the “worst” option).

Then, let’s suppose that we have two principles we’d like to respect:

  • Noninterference in Elections: Field Experiments on Elections are Unethical if They Might Affect the Election Outcome.
  • Free Speech: Political Parties Are Allowed to Do Experiments If They Choose to.

It is impossible to respect these (reasonable) principles and maximize social welfare.  Here’s the logic:

  1. If a field experiment might affect an election, then some political party will want to do it, but the experiment would be considered unethical.
  2. Thus, if a field experiment is unethical and we respect Free Speech, then some political party will do the field experiment.
  3. But if scholars behave in accordance with Noninterference, then they will not perform a field experiment that might affect the election outcome.
  4. This leads to the outcome “Partisan researchers do field experiments on elections, scholars do not,” which is clearly inefficient.  Indeed, it is the worst possible outcome from society’s standpoint.

It is not my intent to judge the ethics of any particular field experiment study here, and I do believe that there are plenty of unethical designs for field experiments.  However, I am rejecting the notion that a field experiment on an election is ethical only if it does not affect the outcome of the election.  This is because it is precisely in these cases that others will do these experiments in non-transparent ways.  This is not the same as saying “other groups do unethical things, so scholars should too.”  Rather, this is saying “groups are intervening in elections in both ethical and unethical ways, so it is important for scholars to transparently learn from and about election interventions in ethical ways.”  To say that potentially affecting an election outcome is presumptively unethical implies that a scholar who values ethical behavior will never learn about how election interventions that are occurring work, what effects they might be having on us individually and collectively, and how society might better leverage the interventions’ desirable effects and mitigate their undesirable effects.

____________

[1] Relatedly and more generally, my post has (perhaps understandably) been read as defending all field experiments on elections.  My intent, however, was two-fold: (1) guaranteeing that a field experiment will have no effect on the outcome requires the experiment to be useless and thus is too strong a requirement for a reasonable notion of ethicality and (2) coming up with a reasonable notion of ethicality requires taking (social choice) theory seriously, during the design of the field experiment.

[2] One can substitute any private corporation/interest/government agency/conspiracy one wants for “political parties.”

Mind The Gap: The Wages of Aggregation, Evaluation, and Conflict

For whatever reason, I’m on a “data is complicated” kick.

So, this story is one of many today discussing the gender gap in wages in ‘Merica. In a nutshell, President Obama pointed out “that women make, on average, only 77 cents for every dollar that a man earns.”  Critics (most notably the American Enterprise Institute) immediately pointed out that “the median salary for female employees is $65,000 — nearly $9,000 less than the median for men.”

There are LOTS of angles on this thorny issue.  I want to raise the specter of social choice theory as a mechanism by which we can understand why this debate goes around and around.[1] The basic idea is that aggregation of data involves simplification, which involves assumptions.  Because there are various assumptions one can make (properly driven by the goal of one’s aggregation), one can aggregate the same data and reach different conclusions/prescriptions.

To keep it really simple, consider the following toy example.  Suppose that a manager currently has one employee, who happens to be a man, who makes $65,000/year, and the manager has to fill three positions, A, B, and C.  Furthermore, suppose that the manager has a unique pair of equally qualified male and female applicants for each of these three positions.  Finally, suppose that position A is paid $70,000/year, position B is paid $60,000/year, and position C is paid $45,000/year.

Now consider two criteria:

(1) eliminate/minimize the gender gap in terms of average wages,[2] and
(2) minimize the difference between the proportions of male and female employees.

How would the manager most faithfully fulfill criterion (1)?  Well, if the manager hires the woman for position B and the two men for positions A and C, then the average wage of the women (i.e., the one woman’s wage) is $60,000, and the average of the three men’s (the existing employee and the two new employees) wages is also $60,000.  This is clearly the minimum achievable gap.[3]

How about criterion (2)?  Well, obviously, given that one man is already employed, the manager should hire two women.  If the manager satisfies criterion (2) with an eye toward criterion (1), then the manager will hire a man for position B and women for positions A and C.
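Enumerating all eight possible hiring configurations makes the tension explicit; a sketch using the salaries from the example:

```python
from itertools import product
from statistics import mean

# Position salaries from the toy example; one male incumbent at $65,000.
positions = [("A", 70_000), ("B", 60_000), ("C", 45_000)]
incumbent = 65_000

results = []
for hires in product("MF", repeat=3):  # gender hired for positions A, B, C
    men = [incumbent] + [w for g, (_, w) in zip(hires, positions) if g == "M"]
    women = [w for g, (_, w) in zip(hires, positions) if g == "F"]
    if not women:
        continue  # the mean wage of an empty group is undefined (footnote [3])
    results.append((hires,
                    abs(mean(men) - mean(women)),   # criterion (1): mean-wage gap
                    abs(len(men) - len(women))))    # criterion (2): headcount gap

min_mean = min(gap for _, gap, _ in results)
min_count = min(cnt for _, _, cnt in results)
print(min_mean, min_count)  # each is achievable on its own...
# ...but no single hiring plan achieves both minima at once:
assert not any(gap == min_mean and cnt == min_count for _, gap, cnt in results)
```

The only plan with a zero mean-wage gap (woman at B, men at A and C) leaves a 3-to-1 headcount imbalance, while every gender-balanced plan leaves a mean-wage gap of at least $5,000.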

Note that the two criteria, each of which has been and will be used as benchmarks for equality in the workplace (and elsewhere), suggest exactly and inextricably opposed prescriptions for the manager.

In other words, the manager is between a rock and a hard place: if the manager faithfully pursues one of the criteria, the manager will inherently be subject to criticism/attack based on the other.

Note that this is not “chaos”: the manager, if faithful, must hire no more than 2 of either gender: hiring three men or three women is incompatible with either of these criteria.[4] But the fact remains—and this is a “theory meets data” point—one can easily (so easily, in fact, that one might not even realize it) impose an impossible goal on an agent if one uses what I’ll call “data reduction techniques/criteria” to evaluate the agent’s performance.

In other words: real world politics is inherently multidimensional.  When we ask for simple orderings of multidimensional phenomena (however defined, and of whatever phenomena), we are in the realm of Arrow’s Impossibility Theorem.

_________

[1] This argument is made in a more general way in my forthcoming book with Maggie Penn, available soon (really!) here: Social Choice and Legitimacy: The Possibilities of Impossibility.

[2] Here, by “average,” I mean arithmetic mean.  Because this example is so small, there is no real difference between mean, median, and mode in terms of how one measures the gender gap.  If these differ in practice, then the problem highlighted here is merely (and sometimes boldly) exacerbated.

[3] To be clear, I am setting aside the issue of “how much does a gender make if none of that gender is employed?” While technically undefined, I think $0 is the most common sense answer, and I’ll leave it at that.  

[4] Of course, as Maggie Penn and I discuss in our aforementioned book, there are many criteria.  Our argument, and that presented in this post, is actually strengthened by arbitrarily delimiting the scope of admissible criteria.

My Bad: Dispelling The Implied Suspension of Discharge

The Daily Kos has a piece authored by “War on Error” that linked to my piece today on H.Res. 368. I want to correct a misinterpretation in that piece.  The author states that

Chris Van Hollen, a Democrat, tried to end the Government Shutdown with a Discharge Petition on October 12, 2013

This isn’t correct.  Van Hollen was attempting to offer a motion to “recede and concur,” which is typically privileged (and thus in order at any time when offered by any member) under Rule XXII, Section 4 of the House’s standing rules. This isn’t a motion to discharge in any sense, actually, as the bill in question (H.J.Res.59, which, as amended by the Senate, is the “clean CR”) isn’t in committee.  Rather, it is “in the state of disagreement”—or, less technically, “in limbo”—as the House and Senate have each passed different versions of the bill, and the House has insisted on its version and requested a conference with the Senate.

I understand the potential for confusion in my post, because I wrote “What this [special rule] does is further make discharge of a clean CR even harder.”  I was speaking/typing quickly and in reference to the general quest to get a floor vote on a clean CR.  I should have said

“What this [special rule] does is further complicate the matter of getting a floor vote on a clean CR in the House.”

The privilege accorded to a motion to discharge a committee (i.e., to “put into play” a discharge petition duly signed by 218 members of the House) is contained in Rule XV, Section 2 of the House’s standing rules, and is accordingly unaffected by H.Res. 368.  As I wrote earlier, the discharge petition is a highly unlikely route to reach a vote on a clean CR, and indeed would have had to be called up today or wait until October 28th.

Also, to be completely clear on this point, Van Hollen’s statement that “Democracy has been suspended” is hyperbole: H.Res. 368 is a short and succinct special rule that was duly approved by a vote of 228-199.  Regardless of how one feels about the use of special rules in the House, it is (one of many forms of) democratic governance, and is (at least in my and many others’ opinions) clearly consistent with Article I, Section 5, Clause 2 of the US Constitution, which says (in part)

Each House may determine the Rules of its Proceedings

With that, I leave you with this.

LegerdeBoehner, or “The Rules Rule.”


In a little-noticed procedural move on October 1st, the House of Representatives “entered into the stage of disagreement” with the Senate with respect to H.J.Res. 59, the “clean” continuing resolution (CR) that, as currently amended by the Senate, would reopen the government at the sequestered levels.[1]

Specifically, H.Res. 368 does two things.  The first is a tripartite bit of normal business in the “navette” system used by Congress to resolve differences between the two chambers’ versions of a bill,[2] stating that the House

      “(1) takes from the Speaker’s table the joint resolution … (2) insists on its amendment, and (3) requests a conference with the Senate thereon.”

What this means, in short, is that the House collectively asserts that it is “done considering” the Senate’s version of the CR, and requests that the two have a “sit down” to work out the details of a compromise.  This is pretty normal.

The “trickery,” which I absolutely love, is the second part, which states

      Any motion pursuant to clause 4 of rule XXII relating to House Joint Resolution 59 may be offered only by the Majority Leader or his designee.

What this little sentence does is further make discharge of a clean CR even harder.  I didn’t think about this charmingly arcane and surprisingly potent possibility when I wrote on discharge as a strategy to get a clean CR last week. (I hadn’t seen anyone bring it up until this weekend, when the Democrats hit upon it.)  Under the standing rules of the House,

When the stage of disagreement has been reached on a bill or resolution with House or Senate amendments, a motion to dispose of any amendment shall be privileged.

What this means is that, conditional on the first part of H.Res. 368, under which H.J.Res. 59 reaches “the stage of disagreement,” any member of the House could at any time move that the House “recede and concur” with the Senate’s amendment, thereby approving the clean CR.

To be clear, this isn’t “undemocratic,” per se: H.Res. 368 was approved 228-199 (Roll Call 505)—an example of majority rule being used to make the rules arguably less “majoritarian.”[3]  Now let’s consider the MathOfPolitics of this move.

As I have argued before, in a variety of ways, the bargaining situation between the Democrats and Republicans in both chambers and President Obama is a decidedly “multi-player” situation and, importantly, one that the House GOP (and probably some of the House Democrats, too) would like to resolve in a nuanced manner.  In particular, voting “yea or nay” on the clean CR is a blunt and high-profile signal to members’ constituents about the members’ relative priorities.  To the degree that the House GOP leadership and/or rank-and-file wanted to negotiate something other than a clean CR (or even just wait for a combined “CR and debt ceiling” deal), this logic implies that they need to avoid the possibility of facing a “clean vote” on a clean CR.  To attempt to get a conference with the Senate,[4] the House had to enter into the stage of disagreement.  But this step also would hand a gun to the minority: they could at any time force exactly the vote the majority did not want.
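To put rough numbers on that “gun”: the seat totals below are my own back-of-the-envelope figures for the House in October 2013 (roughly 232 Republicans and 200 Democrats), not numbers from the post itself.

```python
# Back-of-the-envelope count of the threat posed by a privileged
# "recede and concur" motion.  Seat totals are approximate figures
# for the House as of October 2013.
republicans, democrats = 232, 200
majority = 218  # absolute majority of the full 435-seat House

# If every Democrat backed a clean CR, how many Republican defections
# would suffice to pass it over the leadership's objections?
defectors_needed = majority - democrats
print(defectors_needed)  # 18
```

With only a couple dozen cross-pressured Republicans needed, the privileged motion would have been a live threat, which is exactly why the second part of H.Res. 368 reserves it to the Majority Leader.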

So, putting the two components of the rule together—thereby meaning that the House could affirmatively “act upon the Senate’s CR” only by agreeing to hand this agenda power to Majority Leader Cantor—the resolution represents a classic example of a “credible commitment.” Note that the rule change limits the preexisting agenda powers of all members, not just those in the minority party. The Republican majority, in order to negotiate as a somewhat unified body, agreed to “bind their hands” against the potential temptation to “just agree to a clean CR.”

This type of procedural commitment strengthens the ability of Boehner and Cantor to “speak for” the House in the ensuing negotiations.  If the motion to recede and concur with the clean CR had been available to any member, as usual, then the Senate and/or President Obama could attempt to simply build a floor majority, without any special need to focus on/pay attention to the position of the House Leadership.  In a sense, the House signed away a limited, but durable, “power of attorney” to Boehner and Cantor on the CR negotiations.

With that, I leave you with this.

______________

[1] Remember, the sequester? HAHAHAHA.  Well, the Senate Democrats do.  Note the link between this position and my recent post on how appearing to lose may help one win.

[2] It’s tripartite because each of the three parts could in theory be debated and voted upon separately, but this is rarely if ever done in the House.  (They were accomplished here, as is pretty common, through a single vote on a “special rule” from the Rules Committee.) In the Senate, the analogue of this three-step process is one of the minority party’s secret weapons against the “nuclear options.”  But I will leave this to the side… for now?

[3] Hey, it’s my blog, so I’ll remind you that I’ve written and published (i.e., something more than a blog post) on this question.

[4] This might seem like a waste of time given hindsight, but I’ll simply point out that this step was necessary for the House to make it seem like the Senate was the roadblock to reopening the government (because the Senate did not agree to the conference).  Public opinion since October 1 seems to indicate this strategy failed to impress.

Why The House Can’t Discharge Its Duties

[Edit 10/5/13. Note: When I wrote this, I had not yet read this piece in the Washington Post, which refers to this bill. This doesn’t change the basic math of the post and, indeed, makes its points arguably even more tangible.]

A few people (fewer than I would have expected) have mentioned the possibility of the discharge petition as a way to circumvent the House leadership in pursuit of a “clean CR.”  I will briefly describe this procedure and why it wouldn’t work, even procedurally, for the current dilemma.  (For those interested in more details, see my 2007 article “The House Discharge Procedure and Majoritarian Politics,” the ungated version of which is here.)

Essentially every bill considered by the House of Representatives is first considered by one or more standing committees.  While committees are generally thought of as places where bills might be more closely considered and amendments proposed for eventual consideration by the floor, they also represent opportunities for “gatekeeping,” or negative agenda control, insofar as the committee(s) in question can delay or potentially block floor consideration (and, hence, passage) of the bills referred to them. The discharge petition is a tool that allows a majority of the House to remove (“discharge”) the committee from its responsibility for/ownership of a bill.

The basics of a discharge petition are as follows:[1]

  1. A bill is referred to a committee.
  2. The committee does not report the bill to the floor within 30 days.
  3. A member starts a discharge petition to remove the committee from consideration of the bill.
  4. The petition is signed by a majority of the House (218 members).
  5. The petition is called up when it is privileged (the 2nd and 4th Mondays of the month).
  6. The motion to discharge is approved by majority vote.
  7. The bill is then considered under the standing rules (open rule).
  8. The bill (possibly as amended) must then receive a majority vote to pass.

This is quite a hill to climb, for many reasons.  For the purpose of this post, I simply want to point out that, as far as I can tell, no “clean CR” has been referred to a committee in the House.[2]  Thus, a member would need to introduce such a CR and have it referred to a committee, and then wait at least 38 days (until November 11, 2013). That assumes the member can get 218 signatures.[3]
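The timing math above can be sketched out. This is a rough sketch in Python under two simplifying assumptions of mine: it counts calendar days, whereas the rule counts legislative days (so real delays would only be longer), and it takes October 4, 2013 as an illustrative referral date.

```python
from datetime import date, timedelta

def nth_weekday(year, month, weekday, n):
    """Date of the n-th given weekday (0 = Monday) in a month."""
    d = date(year, month, 1)
    # Advance to the first occurrence of the target weekday.
    d += timedelta(days=(weekday - d.weekday()) % 7)
    return d + timedelta(weeks=n - 1)

def next_discharge_day(after):
    """First 2nd-or-4th Monday strictly after `after` (calendar-day
    approximation of the privileged discharge days)."""
    y, m = after.year, after.month
    while True:
        for n in (2, 4):  # 2nd Monday precedes 4th within a month
            candidate = nth_weekday(y, m, 0, n)
            if candidate > after:
                return candidate
        m += 1
        if m > 12:
            m, y = 1, y + 1

# Assumed: a new clean CR is introduced and referred on Oct. 4, 2013.
referred = date(2013, 10, 4)
ripe = referred + timedelta(days=30)   # step 2: 30-day committee layover
earliest = next_discharge_day(ripe)    # step 5: next privileged Monday
print(earliest, (earliest - referred).days)  # 2013-11-11 38
```

The 30-day layover runs out on a Sunday (November 3), and the first privileged Monday after that is November 11, matching the 38-day figure above.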

More to the point, even if a clean CR had been sitting in some committee for 30 days or more, the earliest that such a CR could practically be brought up is October 14th (the second Monday of October).  Given that the GOP has shown unity on procedural maneuvers (such as the previous question on a special rule) during this SHUTDOWN SHOWDOWN, I have a hard time imagining that even this would work.  After all, in 1993, the discharge petition was successfully used…to make signatures on discharge petitions public.[4]  Thus, every GOP member who signed it would be “out in the open” for potentially several days or weeks.

So, in a nutshell, the discharge petition is a potential route to circumventing the leadership, but it definitely ain’t an easy one.  But, you know, should we really be surprised?

With that, I leave you with this.

________________________________

[1] I dispense here with some variations on the route to discharge, including special rules and multiple referrals.  There are some interesting differences for special rules, but discharge of a special rule for consideration of a bill can take no less (and, generally, more) time than a “straight discharge.” The virtue of discharging a special rule is that this route allows one to discharge a committee and then consider the bill under a “closed rule,” protecting it from amendment.  This is arguably irrelevant in this case, since the Senate has already passed a clean CR, and that measure would be the object of the discharge.

[2] The closest thing I could find currently in committee in the House is H.J.Res. 62, which is a CR that permanently defunds the Affordable Care Act.  Using this measure to achieve a clean CR would require the use of the special rule route to discharge, mentioned in note [1].  The time problem remains even if this route were used.

[3] …And that the Speaker has not read my article about a subterfuge that would at least theoretically circumvent the whole thing. Really, it’s quite cool.  You know, if you’re into stuff like that. It also can probably be pulled at most one time, after which the rules would be changed.

[4] To see the four currently pending discharge petitions, click here and scroll to the bottom.