The IRS Is Here to Help. So Is ICE.

It’s been almost ten years since I’ve written here. The last time I posted, Donald Trump had just clinched the GOP nomination, his Banzhaf power index had hit 1.0, and I was calculating the proportion of his campaign contributions that were unitemized.1 That was June 2016. I stopped writing because the general election demanded a firehose of commentary I didn’t have the time or the stomach for, and the opportunity cost of blogging versus finishing actual research was getting untenable.

A lot has happened. Some of the people who used to read this blog — colleagues, friends, people I admired — aren’t here anymore. I won’t make a list, because that isn’t what this space is for, but I’ll say that their absence is felt, and that part of what brings me back is the sense that the kind of work this blog tries to do — taking the math seriously, taking the politics seriously, and refusing to pretend you can do one without the other — matters more now than it did when I left.

For those who are new: this is a blog about the math of politics, which is a thing that exists whether or not anyone writes about it. The tagline is three implies chaos, which is a reference to the fact that collective decision-making with three or more alternatives is, under very general conditions, a mess.2 I’m a political scientist at Emory. I use formal models — game theory, mechanism design, social choice — to study how institutions shape behavior. And I write here when something in the news is so perfectly illuminated by the theory that I can’t not.

Today a federal judge ruled that the IRS violated federal law approximately 42,695 times, and I have a model for that. Let’s go.


NA NA

Last April, Treasury Secretary Bessent and DHS Secretary Noem signed a memorandum of understanding allowing ICE to submit names and addresses to the IRS for cross-verification against tax records. ICE submitted 1.28 million names. The IRS returned roughly 47,000 matches. The acting IRS commissioner resigned over the agreement. And Judge Colleen Kollar-Kotelly, reviewing the IRS’s own chief risk officer’s declaration, found that in the vast majority of those 47,000 cases, ICE hadn’t even provided a valid address for the person it was looking for — as required by the Internal Revenue Code. The address fields contained entries like “Failed to Provide,” “Unknown Address,” or simply “NA NA.”3

NA NA.

That’s what ICE typed into the field that was supposed to ensure the government could only access tax records for individuals it had already specifically identified. And the IRS said: close enough.

Now, the obvious story here — the one you’ll get from the news — is about a legal violation and an institutional failure. And that story is correct. But there’s a deeper story, one that requires thinking about what classification systems do to the populations they classify. Because the address field in the §6103 request wasn’t just a data element. It was a constraint — a design specification that determined what kind of system the IRS-ICE pipeline would be. With the address requirement enforced, the system is a targeted lookup: you ask about a specific person you’ve already identified, and the IRS confirms or denies. With the address requirement collapsed — with “NA NA” treated as a valid input — the system becomes a dragnet. Same code, same database, same agencies. But a fundamentally different machine, operating under fundamentally different logic, with fundamentally different consequences for the people inside it.

I want to talk about those consequences. Specifically, I want to talk about what happens to the population being classified when the classifier changes.


Filing Taxes as a Strategic Choice

Here’s the setup. If you’ve read the work Maggie Penn and I have been doing on classification algorithms, this will look familiar.4

Undocumented immigrants in the United States pay taxes. They do this using Individual Taxpayer Identification Numbers (ITINs), which the IRS issues specifically to people who have tax obligations but aren’t eligible for Social Security numbers. Filing is not optional — the legal obligation exists regardless of immigration status. But the compliance rate — how many people actually file — has historically been sustained by a critical institutional feature: a firewall between tax data and immigration enforcement. Section 6103 of the Internal Revenue Code strictly prohibits the IRS from sharing taxpayer information with other agencies except under narrow, court-supervised conditions.

The firewall is what made tax filing a safe act. Filing carried a compliance benefit — potential refunds, building a record for future status adjustment, staying on the right side of the IRS — and essentially zero enforcement cost. The tax system observed you, but the immigration system couldn’t see what the tax system saw.5 To put it in terms we’ll use throughout: the classifier’s expected responsiveness was zero.6 When the classifier is null, people make their filing decision based solely on the intrinsic costs and benefits of compliance. Call that sincere behavior.

The MOU blew a hole in that firewall. After the MOU, filing generates a signal — the tax record, including your address — that feeds directly into an enforcement match. Before the breach, the only classifier that mattered was the IRS’s own enforcement system, and that system rewarded filing: if you complied, you reduced your probability of audit, penalty, and all the administrative misery that follows from the IRS noticing you didn’t file. The reward was real, the classifier was responsive to compliance, and the equilibrium worked.

The MOU layered a second classifier on top — the ICE match — and this one runs in the opposite direction. Filing still reduces your IRS enforcement risk, but it now increases your immigration enforcement risk, because filing is what generates the data that feeds the match. For citizens and legal residents, the second classifier is irrelevant — they face no immigration enforcement cost, so the net calculus doesn’t change. For undocumented immigrants, the second classifier dominates. The expected cost of filing went up, and for many people it went up enough to swamp the expected benefit.

The equilibrium compliance rate in the model is

$$\pi_F(\delta, \phi, r) = F(r \cdot \rho(\delta, \phi))$$

where $r$ captures the net stakes of being classified and $\rho$ captures how much the classifier’s decision depends on the individual’s behavior.6 When the firewall was intact, the net reward to filing was positive — the IRS classifier rewarded compliance, and the immigration system couldn’t see you. When the firewall broke, the net reward dropped, in some cases below zero, and the filing rate dropped with it. Not because the legal obligation changed. Not because the refund got smaller. Because the classifier changed, and people responded.
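To make the comparative statics concrete, here is a minimal numerical sketch of the filing-rate expression. The responsiveness term follows the form given in footnote 6; the logistic CDF for $F$ and every parameter value below are illustrative assumptions of mine, not quantities from the paper:

```python
import math

def rho(delta1, delta0, phi):
    # Expected responsiveness: how much the classifier's decision
    # depends on the individual's own behavior (see footnote 6).
    return (delta1 + delta0 - 1) * (2 * phi - 1)

def compliance_rate(delta1, delta0, phi, r):
    # pi_F = F(r * rho). The model only requires F to be a CDF;
    # a logistic CDF is used here purely for illustration.
    x = r * rho(delta1, delta0, phi)
    return 1 / (1 + math.exp(-x))

# Intact firewall: a null classifier (rho = 0), so filing is driven
# only by the intrinsic costs and benefits -- here, F(0) = 0.5.
baseline = compliance_rate(0.5, 0.5, 0.9, r=2.0)

# Responsive classifier, positive net reward: filing rate rises.
rewarded = compliance_rate(0.9, 0.9, 0.9, r=2.0)

# Same classifier, but the net stakes flip sign (enforcement cost
# swamps the compliance benefit): filing rate falls below baseline.
penalized = compliance_rate(0.9, 0.9, 0.9, r=-2.0)

print(baseline, rewarded, penalized)
```

Nothing about the classifier's mechanics changes between the last two calls; only the sign of the net stakes $r$ does, which is the whole point.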

This is a point that’s worth pausing on, because it’s general and it’s important: classification systems do not passively observe the world. They reshape it. A credit-scoring algorithm changes how people use credit. An auditing algorithm changes how people report income. A policing algorithm changes where people walk. The instrument and the thing being measured are not independent of each other, and any analysis that treats them as independent will be wrong in a specific, predictable direction: it will overestimate the accuracy of the system and underestimate its behavioral effects.

Think of two cities, each with a system for issuing speeding tickets. One city’s algorithm is designed to ticket speeders — it cares about accuracy. The other city’s algorithm is designed to generate revenue — it tickets indiscriminately. Drivers in the accuracy-motivated city slow down, because compliance is rewarded. Drivers in the revenue-motivated city don’t bother, because ticketing has nothing to do with their behavior. Same roads, same drivers, same speed limits. Different classifiers, different equilibria. The classifier doesn’t just measure the city — it makes the city.7


The Death Spiral

This is where it gets interesting. And by “interesting” I mean “bad.”

The people most likely to be correctly identified by the IRS-ICE match are those with stable addresses who file consistently and accurately. These are, almost by definition, the most compliant members of the undocumented population — the ones who’ve been following the rules, building a paper trail, doing exactly what the system told them to do. They’re also the ones with the most to lose from enforcement, because they’ve given the system the most data about themselves.

These are the first people who stop filing.

Judge Talwani flagged this directly. Community organizations that provide tax assistance to immigrants can’t advise their members to stop filing — that would be encouraging illegal behavior. But they also can’t encourage filing, because filing now triggers enforcement risk. The organizations reported decreased revenue and participation. The chilling effect isn’t hypothetical. It’s in the court record.

Now here’s the feedback loop. When the most identifiable filers exit the system, the quality of the remaining data degrades. The match rate goes down. The false positive rate — the probability that a match incorrectly targets a citizen or legal resident — goes up, both because the pool of correctly matchable records shrinks and because ICE is submitting garbage inputs (“NA NA”) that the IRS is accepting anyway. The classifier gets worse at its stated objective precisely because it’s operating.

The system doesn’t just get unfair. It gets worse at its own stated purpose — identifying specific individuals — because the individuals it could most easily identify are exactly the ones who stop showing up.

This is a general property of classification systems with endogenous behavior, and it’s one I think about a lot. When the population being classified can respond to the classifier, the classifier doesn’t observe a fixed distribution. It selects the distribution that’s willing to be observed. And that selection runs in exactly the wrong direction if your goal is accurate identification: the easy cases exit, the hard cases remain, and accuracy deteriorates as a function of the classifier’s own operation. The system eats its own inputs.8
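The selection dynamic is easy to see in a toy simulation. To be clear about what’s stipulated: the identifiability scores, the exit rule (the most identifiable quartile of filers leaves each round), and the match-rate proxy are all my inventions for illustration, not the paper’s model:

```python
import random

random.seed(1)

# Each filer has an "identifiability" score: stable address,
# consistent filing history. Higher scores mean easier matches --
# and, post-MOU, higher enforcement risk from filing.
filers = [random.random() for _ in range(10_000)]

def match_rate(pop):
    # Crude proxy: the probability a submitted name matches a record
    # is the mean identifiability of those still filing.
    return sum(pop) / len(pop)

rates = []
for _ in range(4):
    rates.append(match_rate(filers))
    # The filers with the most to lose -- the most identifiable
    # quartile -- are the first to stop filing.
    cutoff = sorted(filers)[int(len(filers) * 0.75)]
    filers = [x for x in filers if x < cutoff]

# The match rate falls every round: accuracy deteriorates as a
# function of the classifier's own operation.
print([round(r, 3) for r in rates])
```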


What the Designer Wants Matters

One of the results Maggie and I are most insistent about is that the objectives of the entity doing the classifying shape the equilibrium in ways that aren’t obvious from the classifier’s structure alone. Two cities with identical data, identical populations, and identical infrastructure but different objectives will design different classifiers, induce different behavior, and produce different social outcomes. The objectives live inside the algorithm, not alongside it.

So: what is DHS trying to do?

The official framing is accuracy-aligned. DHS says the goal is to “identify who is in our country.” That sounds like accuracy maximization: correctly match individuals to their immigration status.

But the implementation tells a different story. An accuracy-maximizing designer needs good inputs — the whole point of the §6103 requirement that ICE provide a valid address is to ensure the system operates on pre-identified individuals, which is a precondition for accurate matching. ICE submitted “NA NA.” They submitted jail addresses without street locations. They submitted 1.28 million names and got 47,000 matches, meaning a 96.3% non-match rate before you even get to the question of whether the matches were accurate.

This doesn’t look like accuracy maximization. It looks like a fishing expedition — a bulk data pull designed to maximize the reach of the enforcement system rather than the precision of individual identifications. In the language of the paper, it looks more like compliance maximization (or its dark inverse: maximizing the chilling effect on a target population) or outright predatory objectives — a system that benefits from inducing non-compliance, because non-compliance makes the targets more vulnerable, not less.9

And the distinction between objectives matters formally, because the two produce different classifiers with different welfare properties. An accuracy-maximizing classifier, we show, will push some groups toward compliance and others away — exacerbating behavioral differences between groups even when the data quality is identical across groups. A compliance-maximizing classifier, by contrast, always satisfies what we call aligned incentives: it pushes all groups in the same behavioral direction.

Here, the groups aren’t abstract. They’re citizens, legal residents, and undocumented immigrants, all of whom file taxes, all of whom had their data swept into the same match, and all of whom face different enforcement costs from being identified. The classifier doesn’t distinguish between them at the input stage — it just matches names and addresses. But the behavioral response to the classifier differs radically across groups, because the stakes of being classified differ radically. Citizens face essentially zero enforcement cost from a match. Undocumented immigrants face deportation. The same classifier, applied to the same data, produces wildly different equilibrium behavior in different populations.

That’s not a bug in the implementation. That’s a structural property of classification systems with heterogeneous stakes. And it’s a property that accuracy maximization makes worse, not better.
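The heterogeneous-stakes point can be sketched with the filing-rate expression $\pi_F = F(r \cdot \rho)$ from earlier in the post: $\rho$ is held fixed, because it is one classifier applied to everyone, while the stakes $r$ vary by group. The logistic $F$ and all magnitudes are stipulated for illustration:

```python
import math

def filing_rate(rho, r):
    # pi_F = F(r * rho), with a logistic CDF standing in for F.
    return 1 / (1 + math.exp(-r * rho))

RHO = 0.64  # one classifier, one responsiveness, applied to everyone

# What differs is r, the net stakes of being matched: roughly zero
# for citizens and legal residents, large and negative for
# undocumented filers. Magnitudes are made up for illustration.
stakes = {"citizen": 0.1, "legal resident": 0.05, "undocumented": -4.0}

rates = {group: filing_rate(RHO, r) for group, r in stakes.items()}
for group, rate in rates.items():
    print(group, round(rate, 3))
```

Identical classifier, identical data pipeline; the equilibrium behavior diverges purely because the stakes do.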


The Commitment Problem

There’s one more piece of the model that’s eerily relevant. We distinguish between designers who can commit to a classification algorithm and designers who are subject to audit — who must classify consistently with Bayes’s rule and their stated objectives. The commitment case is more powerful: a designer who can commit can deliberately misclassify some individuals to manipulate aggregate behavior. The no-commitment case, which we interpret as the effect of auditing or judicial review, strips away this power.

Judge Kollar-Kotelly’s ruling is an audit. She looked at what the IRS actually did — accepted “NA NA” as a valid address, disclosed 42,695 records in violation of the statutory requirement — and said: this doesn’t satisfy the constraints. Judge Talwani’s injunction goes further, blocking enforcement use of the data entirely.

These rulings function exactly as the no-commitment constraint does in the model. They force the classifier to satisfy sequential rationality — to justify each classification decision on its own terms, rather than as part of a bulk strategy to influence population behavior. And the paper tells us what happens when you impose that constraint: the resulting equilibrium satisfies aligned incentives. The designer can no longer push different groups in different behavioral directions.

That’s the fairness argument for judicial review of classification systems, stated formally. It’s not that judges know better than agencies how to design algorithms. It’s that the constraint of having to justify individual decisions prevents the designer from using the algorithm to strategically manipulate aggregate behavior. The cost is accuracy — the no-commitment equilibrium is always weakly less accurate than what the designer could achieve with commitment power. But the benefit is behavioral neutrality across groups, which is a fairness property that accuracy maximization cannot guarantee.10


Where This Goes

The D.C. Circuit will rule on the Kollar-Kotelly injunction. If they uphold it, the no-commitment constraint holds and the data-sharing agreement is dead in its current form. If they reverse — and the Edwards panel’s reasoning from two days ago suggests this is possible — the commitment case reasserts itself, and the behavioral distortions I’ve described become the operating equilibrium.

Meanwhile, the chilling effect is already in motion. People have already stopped filing. Community organizations have already seen decreased participation. The equilibrium is shifting in real time, and it won’t shift back quickly even if the courts ultimately block the agreement, because trust in the firewall is not a switch you can flip. It’s a belief about institutional behavior, and beliefs update slowly after violations — especially violations that occurred 42,695 times.

The tax system was designed as a compliance mechanism: file your returns, pay what you owe, and we won’t use your data against you. That design was a choice. The firewall was a choice. The address requirement in §6103 was a choice. Every one of those choices encoded a judgment about what the system should be for — not just what it should measure, but what kind of behavior it should sustain. The MOU didn’t just breach a legal firewall. It changed the classifier, which changed the equilibrium, which is changing the population, which will change the data, which will change what the classifier can do. The whole thing is a loop, and it’s spinning in exactly the direction the model predicts.

I said I’d be back when something in the news was so perfectly illuminated by the theory that I couldn’t not write about it. This is that. There will be more.11

With that, I leave you with this.


1. 72.9%, for those keeping score.

2. The phrase is from Li and Yorke’s 1975 paper “Period Three Implies Chaos,” which proved that a continuous map with a periodic point of period 3 has periodic points of every period — plus an uncountable mess of aperiodic orbits. But the tagline does triple duty: Arrow’s theorem, the Gibbard-Satterthwaite theorem, and the McKelvey-Schofield chaos theorem all say that with three or more alternatives, the relationship between individual preferences and collective outcomes becomes fundamentally unstable. Norman Schofield, who proved the general form of the chaos result with Richard McKelvey, was a mentor and colleague to both Maggie Penn and me at Washington University. It was Norman, in a bar in Barcelona, who suggested that Maggie and I write our first book, Social Choice and Legitimacy: The Possibilities of Impossibility, which we dedicated in part to McKelvey. Norman died in 2018, and he is one of the people I miss when I write here. Three implies chaos. It’s not a bug. It is the central fact of democratic life.

3. The legal landscape is, to use a technical term, a mess. Kollar-Kotelly’s injunction from November is still in effect but under appeal in the D.C. Circuit. Judge Talwani in Massachusetts issued a separate injunction in early February blocking enforcement use of the data. And two days ago, a D.C. Circuit panel declined to enjoin the agreement, reasoning that “last known address” isn’t protected return information under §6103. So you have district courts saying it’s illegal and an appellate panel suggesting it might not be. Three courts, three bins for the same data. If that doesn’t sound like a social choice problem to you, you haven’t been reading this blog long enough.

4. Penn and Patty, “Classification Algorithms and Social Outcomes,” American Journal of Political Science (forthcoming). The formal model and all the results I’m drawing on here are in that paper. What follows is a blog-post-grade application of the framework, not a formal extension of it. But the shoe fits disturbingly well.

5. The firewall wasn’t just a policy preference — it was constitutional load-bearing infrastructure. The government’s power to tax illegal income was established in United States v. Sullivan (1927) and famously applied to convict Al Capone in 1931. But requiring people to report illegal income creates an obvious Fifth Amendment problem: filing becomes compelled self-incrimination. Section 6103 resolved the tension by ensuring tax data stayed behind the wall. With the firewall intact, you could — in principle — write “narco drug lord” in the occupation field of a 1040 and nothing would happen, because the IRS couldn’t share it. The MOU reopened that wound. If filing now feeds ICE, then filing is self-incrimination for undocumented immigrants, and the constitutional bargain that made the whole system work since Sullivan is back in play. Whether anyone is litigating this yet is a question I leave open, but the logical structure is Gödelian: the system simultaneously compels disclosure and punishes the act of disclosing.

6. In the model, expected responsiveness is $\rho(\delta, \phi) = (\delta_1 + \delta_0 - 1)(2\phi - 1)$, where $\delta_1$ and $\delta_0$ are the probabilities that the classifier’s decision matches the signal for compliers and non-compliers respectively, and $\phi$ is signal accuracy. A null classifier has $\rho = 0$: the probability of being targeted is the same regardless of whether you file. The §6103 firewall enforced nullity by severing the link between the signal (tax record) and the decision (enforcement action).

7. This example is from the paper, but it’s the kind of thing that should be folklore by now. It isn’t, largely because the computer science literature on algorithmic fairness has mostly treated the classified population as fixed. That’s starting to change — see Perdomo et al. (2020) on performative prediction and Hardt et al. (2016) on equality of opportunity — but the political science framing, where the designer has objectives and the population has strategic responses, is still underdeveloped. Maggie and I are trying to fix that.

8. There’s also a revenue dimension that shouldn’t be ignored. The IRS estimates that undocumented immigrants pay billions in federal taxes annually. If the filing rate drops — which it will, and which the court record suggests it already is — that’s tax revenue the government doesn’t collect. The classifier was supposed to serve immigration enforcement, but its equilibrium effect includes degrading the tax base. Whether anyone in the administration has done this calculation is an exercise I leave to the reader.

9. Predatory preferences in the model are characterized by a designer whose most-preferred outcome is to not reward an individual who didn’t comply. Think predatory lending: the lender benefits most when the borrower defaults, because the default triggers fees, repossession, or refinancing at worse terms. A designer with predatory preferences over immigration enforcement would want undocumented immigrants to stop filing taxes, because non-filers are more legally precarious, have weaker paper trails, and are easier to deport. Whether this is what DHS actually wants is a question I can’t answer from the model. But the model can tell you what the observable signatures of predatory preferences look like, and “submit NA NA as an address for 1.28 million people” is consistent with the signature.

10. Whether you think that tradeoff is worth it depends on what you think “fairness” means in this context, and reasonable people disagree. But the point is that it is a tradeoff, with formal properties that can be characterized — not a vague gesture at competing values. I have more to say about this, and about how it connects to a set of problems that go well beyond tax data. But that will have to wait for another post. Or, you know, the book.

11. Next up: the Supreme Court just handed us a game-theoretic goldmine, and three implies chaos. Stay tuned.

Speech-y Keen, or Why Nobody Worries About the “Right to Praise the Government”

This post by Michael Moynihan, responding in part to this post by Thane Rosenbaum, asks how “free” free speech should be.  The question of discriminating between different forms of speech—based on questions such as “is it knowingly false,” “how likely is it to incite violence,” and “is it political”—is an instantiation of an aggregation problem, exactly the type of problem that motivates the analysis and arguments in the forthcoming book I penned with Maggie Penn, Social Choice and Legitimacy.

But, aside from the question of how one would (or could) construct meaningful and coherent “bounds” on “free” speech, I was led to think about the instrumental nature of speech by the following quote from Moynihan’s post (which includes a quote from Rosenbaum’s post):

“Actually, the United States is an outlier among democracies in granting such generous free speech guarantees. Six European countries, along with Brazil, prohibit the use of Nazi symbols and flags. Many more countries have outlawed Holocaust denial. Indeed, even encouraging racial discrimination in France is a crime. In pluralistic nations like these with clashing cultures and historical tragedies not shared by all, mutual respect and civility helps keep the peace and avoids unnecessary mental trauma.” So one would assume that racial discrimination has been dumped on the ash heap of history in France, considering racist thoughts and symbols have been made illegal. How, then, does one explain that the National Front, whose former leader Jean-Marie Le Pen was found guilty of Holocaust denial, is now the most popular party in the country?

The math of politics point here is both simple and arguably subtle.  Basically, speech limitations are not imposed at random, and citizens should draw inferences about the motivations of, and information held by, whoever imposed them.

Consider the classical “marketplace of ideas” justification for strong free speech rights.  In a nutshell, this argument says that free speech is socially beneficial because it minimizes the probability that a “true” (and, by presumption, socially beneficial) argument will be prescreened or forestalled by speech limitations.  (Consider, for example, the creationism vs. evolution debate.)

My argument here, though also in favor of strong speech rights, is slightly different: it focuses specifically on constraints imposed by the government.  This is an important qualification.  Democratic governments are, in the end, chosen or “produced” through collective action.  If “ideas matter” (as the marketplace justification justifiably presumes), then evaluating the policies of the government and its potential successors matters, and the transmission of ideas among citizens can generate pressure on, and changes in, the government.

Accordingly, if one presumes that governments prefer to maintain power, ceteris paribus, then a policy that discriminates between speech based on content can arguably be informative in its own right.

Here’s a quick sketch:  suppose that a government favors some policy that may or may not be socially suboptimal and people have variously informed opinions about the social optimality of that policy.

Suppose that people are prohibited from talking “negatively” about that policy.  If people don’t consider the government’s motivation for choosing/supporting such a prohibition, then the prohibition would—for the sake of argument—tamp down dissent regarding that policy.  However, if the citizens think about the government’s motivations—regardless of whether they are policy-based, reelection-focused, or a combination thereof—then the government’s imposition of the prohibition would justifiably lead them to suspect not only that the policy in question is more likely to be suboptimal, but also that the government does not have the best interests of the electorate at heart. (NO WAY!)

In short, all governments are at least practically dependent upon their citizens’ support. If speech “matters,” then governmental limits on speech—perhaps especially those accompanied by the purest of putative motives—should be viewed with suspicion.

Note that this logic gets even “stronger” once one considers the timing of the limitations.  Imposing speech limitations is per se costly.  So, when a government is willing to incur those costs in a particular policy area—hoping, in a naive world, to mitigate agitation against itself—one should ask whether the government has been alerted to an increased frequency of individuals unhappy with it in that realm.  This “strengthens” the conclusion about the effects of the ban—arguably mirroring the Le Pen example above—because savvy citizens would infer that the imposition of a limitation on speech on a particular topic is itself indicative of citizen unrest on that issue.
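The inference sketched above is just Bayes’ rule, and a back-of-the-envelope version makes the magnitude visible. All probabilities here are stipulated by me for illustration; the only structural assumption is that a self-interested government is likelier to impose a ban when its policy is suboptimal (there is more dissent to suppress):

```python
# Citizens' prior belief that the policy is suboptimal.
prior_suboptimal = 0.3

# Likelihood of observing a ban under each state of the world
# (stipulated: bans are likelier when the policy is suboptimal).
p_ban_if_suboptimal = 0.6
p_ban_if_optimal = 0.1

# Bayes' rule: belief that the policy is suboptimal, given a ban.
p_ban = (p_ban_if_suboptimal * prior_suboptimal
         + p_ban_if_optimal * (1 - prior_suboptimal))
posterior = p_ban_if_suboptimal * prior_suboptimal / p_ban

print(round(posterior, 2))  # 0.72 -- observing the ban more than doubles the belief
```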

With that quick post, I leave you with this reminder of the most eternal right.

Plumbing Presidential Power: Pens, Phones, & Paperwork

President Obama’s SOTU speech has revived interest in Presidential power.  Erik Voeten (here) and Andrew Rudalevige (here) argue that Presidential unilateral action has declined in recent years, while Eric Posner argues here that “executive power has increased dramatically since World War II.”

The question of presidential power is a classic one in political science.  The recent debates illustrate three important problems one confronts when trying to measure it: one conceptual, one practical, and one theoretical.  Before considering each of these in turn, it is useful to summarize Posner’s already-succinct point.  In a nutshell, Posner’s argument is that “more pages of regulation produced per year” implies greater executive branch power.

I consider each of these issues in turn.

The Executive Branch Is a “They,” Not a “He.” The Federal Register is essentially the daily record of executive branch actions, somewhat analogous to the Congressional Record.  In it, the various agencies and bureaus within the executive branch publish all sorts of things.  The highest profile (but by no means the only) category of these are what are known formally as rules, or colloquially as “regulations.”

The problem with equating regulations with presidential action is that they are almost never initiated or even approved by the president.  That is, the theoretical gold standard for a rule’s legal standing (i.e., why citizens and firms ought to follow them) is that they are exercising/instantiating statutory authority delegated by Congress to the agency or agencies in question.  The sometimes byzantine fashion in which a policy becomes a regulation is beyond the scope of this post, but it is not uncommon for the process to span multiple administrations.  That is, the action or policy embodied in a rule may very well have been initiated while “the other party” controlled the White House.[1]

Thus, as I will come back to below, regulatory action is (at least arguably) the executive branch doing the work that Congress has requested in terms of “filling in the details” of statutes passed by Congress.

Additionally, at least in de jure terms, the power to promulgate (publish) a regulation is generally held by someone other than the president.  That is, the president does not “sign” regulations.  Rather most statutes with regulatory impact direct a specific official to issue regulations in furtherance of the statute’s goals.  Indeed, one of the most important developments of presidential power since World War II, known colloquially as preclearance, consists of a largely unilaterally-asserted power by the President’s appointed official, the director of the Office of Information and Regulatory Affairs (OIRA).[2] What is somewhat notable about preclearance in this context is that, when this executive power is “exercised,” it usually keeps pages from being added to the Federal Register. But in any case, the existence of preclearance is an acknowledgment of the practical difficulties any president faces when trying to manage the incredible breadth of agencies with at least de jure regulatory autonomy.

Another way of putting this is that executive power and presidential power are related, but not equivalent.

All Pages Aren’t The Same. Of course, some regulations are important and others are unimportant.  But, more to the point, the Federal Register contains more than just rules.  For example, today’s Federal Register (1/30/2014) contains [3]

  • 4 Rules,
  • 6 Proposed Rules,
  • 131 Notices.

Thus, the (vast) majority of the pages of today’s Federal Register are not policy. Rather, they are things like “Notice of Request for Extension of Approval of an Information Collection; Accreditation of Nongovernment Facilities.”  That is, they are notifications of government agencies’ actions, many of which are trivial.  More importantly, it is distinctly unclear that these filings—many of which are required (somewhat ironically) by statutes such as the Paperwork Reduction Acts of 1980 & 1995—represent nimble and potent executive power.
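To put the day’s composition in proportion, here is a quick back-of-the-envelope sketch (counting documents rather than pages, and using only the three totals listed above):

```python
# Document counts from the 1/30/2014 Federal Register, as listed above.
entries = {"Rules": 4, "Proposed Rules": 6, "Notices": 131}
total = sum(entries.values())  # 141 documents in all

for kind, n in entries.items():
    print(f"{kind}: {n} ({n / total:.1%})")
```

Notices alone account for roughly 93% of the documents published that day, which is the sense in which most of the Register is not policy.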

Is that a Congress Behind the Curtain? I’m definitely not one to argue that executive power has not grown steadily since World War II (in fact, you can read how Sean Gailmard and I narrate and explain part of this rise in our book, Learning While Governing).  But Congress still matters.  And, as I mentioned above, the canonical story of administrative legitimacy (which Maggie Penn and I discuss in our forthcoming book, Social Choice and Legitimacy) begins with the agency issuing the regulation with authority granted by Congress.

As many political scientists have noted in various ways and forms, procedure can be (and, in my experience, often is) politics.  That is, Congress and the president often fight most bitterly over procedure (see executive privilege, fast track authority, filibusters, notice and comment, impoundment, etc.).  A lot of the Federal Register is filled with paperwork that was required of the executive branch by Congress and, furthermore, by Congresses under both Democratic (e.g., 1969-1972; 1976-1980) and Republican (e.g., 1995-96) majorities.

As a closing note, if you look at Posner’s graph of Federal Register pages for a second:

[Figure: Federal Register pages over time. Credit: Eric Posner]

I’ll note three features:

1. The really big jump occurs between 1970 and 1975.  The cause of this jump (during Nixon’s Administration) was the wave of major regulatory statutes enacted in those years.  I’ll just note that Nixon did not get “exactly what he wanted” from the Democratic-controlled Congresses in those statutes.

2. President Carter presided over an acceleration in the production of Federal Register pages, and President Reagan, immediately succeeding him, dramatically pulled it back.  I would definitively characterize the first term of the Reagan Administration as more “powerful/effective” than Carter’s.[4]

3. The (smaller but still big) jump is around 1990 and corresponds to the regulatory actions required to implement the Clean Air Act Amendments and Americans with Disabilities Act, each passed by Democratic Congresses with a Republican president.

Conclusion…? I guess the basic point of this post is that no single time series is going to capture presidential power.  There are a lot of specific reasons for this, but a major theoretical point is that, if there were such a series, then Congress could leverage that number to “rein in” the president (we see this with the budget/debt ceiling every month or so these days).  Thus, a power-seeking president would attempt to find substitute ways to exert/exercise (truly) unilateral power.

With that, I leave you with this.

____________

[1] A famous (and unusual in other respects) example of this was the ergonomics standard, a history of which is presented here. Note that the linked history was written in 2002, right after the standard was repealed under the Congressional Review Act of 1996 (to my knowledge, the only regulation so far to have been overruled by Congress under the CRA)—things have evolved since then.

[2] OIRA was established by Congress in 1980 and is located within the Office of Management and Budget.  The Administrator of OIRA is subject to confirmation by the Senate.  OIRA’s main statutory mandate is reviewing agencies’ requirements for information collection. However, the real “juice” of OIRA review is based on its presidentially crafted mandate to review draft regulations under Executive Order 12866, signed by Clinton and tinkered with in minor ways by both GW Bush and Obama.  EO 12866 replaced EO 12291, signed by Reagan, which really established the preclearance regime.

[3] The Register is published daily, Monday-Friday.

[4] Before one says, “well, Reagan was pushing a deregulatory agenda,” I’ll note that (1) deregulation can require as much, if not more, notification and revision (i.e., pages) than regulation and (2) “yeah, that’s kinda my point.”

What Didn’t He Say? …And How Didn’t He Say it?

Tonight, President Obama will deliver the State of the Union speech, or SOTU.  The SOTU is an odd creature.  It is an annual opportunity for the President to directly address Congress on whatever he wishes—a time to “show his hand” for the upcoming year.  From a “math of politics” perspective, there are at least three interesting aspects of the SOTU: (1) time is limited, (2) it’s not just what you talk about, but how you talk about it, and (3) everybody knows that everybody else is listening, too.[1]

1. What’s (not) in Your Hand? First, the fact that time is limited and the SOTU occurs only once a year provides the President a chance to credibly signal priorities.  Given time constraints, including a topic in the SOTU is a signal of its importance in the obvious fashion.  But what is more interesting is the case of topics that one might expect the President to include but are instead left unmentioned.  Especially if the topic is relatively polarized, omitting it from the speech might be an implicit concession to moderate Democrats and/or Republicans in Congress.  If I had to pick a topic that is in the news and yet might not be mentioned, it would be immigration.  I’m not going to bet the house on this, but I wouldn’t be surprised if there isn’t much attention paid to it precisely because some in the GOP are indicating that they might be organizing for a push in favor of immigration reform.[2]  A topic I predict will be included: the debt-ceiling. (See my previous post about Mitch McConnell’s stance on this to understand why.)

Given the realities of bargaining before an audience (as I have discussed before, multiple times), the fact that a topic was not brought up in the SOTU can signal to Congress a flexibility on the part of the President about what he might be willing to accept with respect to that topic—because he has not (re)staked out a public position on the issue.  Indeed, this is independent of how the President spins a position on the topic.  That is, ironically, if the President explicitly signals flexibility on a topic in the SOTU, then this might make it harder for him to actually compromise, because the details of the compromise might be interpreted and/or spun as the President/Dems “folding” and/or the GOP might be accused of “not getting as much as they could.”  We’re arguably seeing a version of this play out right now with the debt ceiling.

2. What Kind of Glove?  Now, taking the topics brought up in the SOTU as given, how does the President propose to address each topic—does he “seek” Congressional leadership on the matter, or does he announce a unilateral initiative?  This is particularly interesting in tonight’s speech, given the recent suggestions by the White House that President Obama is “ready to take unilateral action to close the gap between rich and poor Americans” and the pre-speech announcement of an Executive Order that will raise the minimum wage for Federal contract workers.

The press release linked above is particularly interesting in this regard, as the following quote illustrates (emphasis added):

The President is using his executive authority to lead by example, and will continue to work with Congress to finish the job for all Americans by passing the Harkin-Miller bill.

Given this “pre-game messaging strategy,” Obama might be seen as more conciliatory and “bipartisan” if he requests Congressional leadership and/or initiative on other topics.  Furthermore, such an approach on a different topic would set up a justification for subsequent unilateral action by Obama if (when?) Congress fails to act on the matter in question.  The message sent by the White House’s comments in general and the minimum wage Executive Order in particular can be interpreted as President Obama saying, “hey, I’ll give it a go on my own if I have to.”  Of course, there’s necessarily a lot of bluster in such statements by any President, but it’s not completely cheap talk.

3. Why Don’t We Shake Hands On It? Third, the SOTU is the basis of what game theorists call “common knowledge.”[3]  In other words, while the President can say whatever he wants, whenever he wants, and get media coverage of it, the SOTU is a time when all of Congress is sitting in front of him when he says it.  That is, he knows that Congress has heard him take the precious time available in the SOTU to say what he said (and not say what he didn’t say) and, even more importantly, he knows that Congress knows that he knows they were there.  And, just as importantly, the voters know that Congress heard the SOTU.

This mutual knowledge aspect of the SOTU is important in the following way: if Congress does not act upon a request in the SOTU, it is difficult to believe that the inaction was due to Congress not knowing that the President thought the topic was important.  As alluded to above, this is particularly relevant if the President eventually takes unilateral action on the topic. If he signaled a topic was important through the allocation of SOTU time and Congress doesn’t bring a bill on it to his desk, then it is clearly easier for the President to justify unilateral action to the voters.

In summary, while any speech is necessarily “just” talk, the State of the Union is more than just a speech: it is a constrained amount of time when everybody knows that everybody is watching. This kind of talk ain’t cheap.

With that, I’m going to go get my popcorn and leave you with this.

___________

[1] Of course, it’s not true that everybody is listening.  I mean, this is about SOTU, not SYTYCD.

[2] I’ll leave aside the interesting question of whether such attempts, given the timing, might be strategic attempts to preempt Obama staking out a big position on immigration.  That would require another post entirely.

[3] Technically speaking, common knowledge is a much deeper phenomenon than what I am talking about here.  However, this is merely a blog post. I am using this footnote to try to make it common knowledge that I am aware I am being loose with the notion of common knowledge.  Know what I mean?

CIA? See, I Am Policy Relevant

As with most things I encounter, this New York Times story got me to thinking about, well, me.  Specifically, the article—discussing the Senate’s attempts to oversee the CIA’s interrogation programs—touches upon two strands of my research that, at first glance, might appear related only in that they both use mathematical models to analyze and characterize political phenomena.  One of these strands revolves around the use and acquisition of information in political institutions (specifically, but not exclusively, bureaucratic agencies).  The second focuses on how one might discriminate between legitimate political procedures and illegitimate ones.

Information and Oversight: Ex Ante vs. Ex Post Incentives. One story at the heart of the article revolves around the existence of a classified internal CIA report that members of the Senate Intelligence Committee would like to see.  The central question here is whether the internal report confirms the Senate’s (still classified) own report and/or contradicts the subsequent disclosures/admissions/justifications offered by the CIA to the Senate Intelligence Committee.

I have written, with Sean Gailmard, several articles and working papers (e.g., here, here, here, and here) and a book, Learning While Governing, that examine in various settings the incentives for individuals within hierarchical organizations to acquire, use, and honestly report information to their superiors.[1]

While I am sure you want to read each of these in their entirety, with the proper “SPOILER ALERT” I can summarize a principal thread linking the theories as follows:

The incentive to collect, use, and share information in a faithful way depends on expectations about who will subsequently get the information and how they will use it.

In terms of the Senate Intelligence Committee’s oversight efforts, the implication of this is as follows:

To the degree that the information revealed and shared within the CIA in writing its internal report is potentially used by the Senate to punish the CIA (in whatever form), successfully extracting the report may hinder the CIA’s efforts to internally collect and share information in the future.

Of course, we are not the first to make this point,[2] but it is often forgotten.  Put another way, attempts to keep the CIA’s internal report hidden need not indicate nefarious motives.  Rather, there is a coherent logic that justifies a lack of transparency, or stonewalling, in somewhat ironic pursuit of information in the future.[3]

Colloquially understood, “oversight” is an ex post phenomenon (it occurs after the actions of interest).  But game theoretic institutional analysis helps illustrate and remind us that this ex post procedure can have ex ante effects (individuals may change their behavior—sometimes in unexpected, or “perverse,” ways—in anticipation of it).

Legitimacy and Oversight. In line with the Senate Intelligence Committee’s pursuit of the CIA internal report, some Senators are also pursuing the Department of Justice’s internal classified memos that supposedly provide a legal rationale that justifies some of the interrogation techniques in question.  Quoting from the end of the article,

Much of Tuesday’s hearing was consumed by a debate about whether the White House should be forced to share Justice Department legal memos.

Under polite but persistent questioning by members of both parties, Ms. Krass repeatedly said that while the two congressional intelligence committees need to “fully understand” the legal basis for C.I.A. activities, they were not entitled to see the Justice Department memos that provide the legal blueprint for secret programs.

The opinions “represent pre-decisional, confidential legal advice that has been provided,” she said, adding that the confidentiality of the legal advice was necessary to allow a “full and frank discussion amongst clients and policy makers and their lawyers within the executive branch.”

Senator Feinstein appeared unmoved. “Unless we know the administration’s basis for sanctioning a program, it is very hard to oversee it,” she said.

In an article and a forthcoming book, Social Choice and Legitimacy: The Possibilities of Impossibility, Maggie Penn and I have tackled the question of when a government policy is legitimate, which colloquially means that it is consistent with an underlying set of principles or criteria (e.g., fairness, efficiency, equality, etc.).  The argument is social choice theoretic, and boils down to the following.

A policy is legitimate if one can construct a sequence of decisions that justify the policy in the sense that no earlier, intervening decision is strictly better than the final policy, and every unchosen policy that is arguably better than the final policy is itself inferior to one or more of the intervening decisions in the sequence that justifies the final policy.

Informally, the theory provides a precise characterization of justifying a choice through providing (or accompanying the decision with) other decisions that can serve as counter-objections to any alternative choice that one might propose to replace the final choice.

A direct implication of Arrow’s impossibility theorem is that there may in general be multiple legitimate decisions.  However, it is generally the case that there are many illegitimate (and non-legitimizable) decisions, too.
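To make the definition concrete, here is a brute-force sketch in Python.  This is my own toy encoding of the quoted definition (not the formal framework of the book), applied to the simplest interesting case: a three-alternative majority cycle.

```python
from itertools import combinations

# A three-alternative majority cycle: a beats b, b beats c, c beats a.
# beats[x] is the set of alternatives that x defeats in a pairwise vote.
beats = {"a": {"b"}, "b": {"c"}, "c": {"a"}}
alts = set(beats)

def legitimizable(x):
    """Brute-force check of the quoted definition: is there a set of
    intervening decisions, none strictly better than x, such that every
    alternative that beats x is itself beaten by some intervening decision?"""
    others = list(alts - {x})
    challengers = {y for y in alts if x in beats[y]}  # everything beating x
    for r in range(len(others) + 1):
        for interveners in combinations(others, r):
            # No intervening decision may be strictly better than x...
            if any(x in beats[z] for z in interveners):
                continue
            # ...and every challenger must lose to some intervening decision.
            if all(any(y in beats[z] for z in interveners) for y in challengers):
                return True
    return False

print({x: legitimizable(x) for x in sorted(alts)})
```

In the cycle, every alternative can be justified (a is defended against c by pointing to b, and so on), which is the multiplicity point above.  If instead one alternative were a Condorcet winner, the same check would legitimize it alone, since nothing could counter-object to it.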

A fundamental starting point/implication of our notion of legitimacy is that, in order to verify the legitimacy of a decision, one must have access to the rationale or justification for the decision.  As portrayed in our work, this can be thought of as an argument about both the principles that ought to (and/or did) guide the choice and a demonstration that the final decision is in fact justifiable—sometimes necessarily through a complicated process of reasoning.

In a nutshell, then, our work provides a systematic (though, of course, contestable) argument in favor of the executive branch sharing the DOJ memos with Congress.

Putting It All Together. In addition to being applicable to the same story, the two strands of research—which leverage different (though I just argued not that different) veins of theory—also complement each other, particularly in the light cast by situations such as this.  In particular, placed side-by-side, they demonstrate one of the most fundamental realities of politics: every choice we debate for more than a couple of seconds necessarily involves an important trade-off.

In other words, seeking the truth can sometimes ironically further its obfuscation, just like banning certain types of Super Bowl ads can ironically create an incentive to create ads that will be banned.  Finally, recognizing the ubiquity of such trade-offs leads to the recognition of the fundamental importance of social choice theory. To quote Maggie Penn and myself,

Rather than taking [the impossibility theorems of social choice theory] as negatives, to be either ignored or worked around, … these results motivate the entire study of politics. The potential irreconcilability of multiple societal and/or individual goals is exactly the raison d’être of government.

This has obviously been an even more blatantly self-promoting blog post than usual. Instead of feigning an apology for that, I leave you this, the greatest pop song ever written.

____________

[1] In solely authored work, I also tackle this topic in this article and this working paper.

[2] To be fair, though, I think we make it in a variety of new ways and draw new institutional implications of it.

[3] For the cognoscenti of these kinds of models, there is always a delicate question of commitment at this point.  I am setting that to the side.

The Politics of Going Public

The Syrian crisis and the debt ceiling/government funding crisis have one thing in common in my mind.

Narrative.

In each situation, President Obama has a chance to “look Presidential” by being decisive. To be short about it, “Presidents order military strikes based on moral/strategic prerogative” and “Presidents tell Congress that the business of governing goes on.”

But what’s different about this situation?

I and others have thought and discussed the two crises ad nauseam.  But their interaction is the point of this post.  While I discussed this earlier, I will take a different, ahem, take in this post.

Syria comes first: Obama might or might not have personal beliefs about what is best to do there but let’s accept as plausible that “doing nothing” requires no explanation: the default of “don’t put Americans in harm’s way” is a safe and understandable (even when wrong) option. To do something requires explanation (especially after Iraq).  There are some layers there, given the various subtle differentiations between “airstrikes” and “boots on the ground,” but the point remains: if Obama wants to do more than a one-off strike on Syria, he needs to explain to America (read: insure himself in Congress) why this is a good idea.

Now we turn to the coming budget showdown: Obama will have no choice but to face a choice here.  Let’s stipulate that Congress ain’t going to hand him a sequester-free continuing resolution (longhand in today’s world for “federal budget”). So, he will face meaningful calls for veto threats, stonewalling, and general resistance to whatever Congress sends to his desk. And, of course, if Congress sends him nothing, then he is forced to face the question of how to avoid/deal with default.

Ugh.  I mean, $400K is a lot of sugar, but that job SOUNDS LIKE IT SUCKS RIGHT ABOUT NOW.

All that said, and while I have already said/implied that I think committing troops/materiel to Syria would put the GOP in a tough spot while also saying that he may very well (and understandably) not know what to do, I want to point out that Obama lingering on the sidelines on Syria, as morally slippery as that may be, might be the right strategic call.  While he dithers and is lampooned by Putin, Gates, and Panetta, Obama is reserving the flexibility to “go public” with a fuller narrative.  When and if the debt/funding crisis comes to his doorstep, Obama arguably has the ability to distract/dissimulate/refocus the public on “the big (Presidential) moments” as he sees fit.  This won’t work with certainty, of course; there are always Benghazis/Wacos/Katrinas to wreck a hard-working President’s day.  But, I’ll just point out that there’s one thing more Presidential than stepping in front of the camera and making the hard call and, as usual, it is referenced in Kenny Rogers’s “The Gambler.”

Real leaders know when to hold their Presidential opportunities until they are at their most ripe as Presidential moments.  From a strategic perspective, Obama has one distinct and enduring advantage over Congress: he not only gets to decide, but he also can—within reason—decide when and in front of whom to decide.

With that, I leave you with this.

“Syllogism? I Hardly Know Him!”: The Uneasy Wedding of Gay Marriage & (Political) Conservatism

“Disambiguating,” as Wikipedia fittingly obscurely puts it,

Conservatism is a set of political philosophies that favour tradition.

My point in this post is a defense of the Supreme Court’s ruling in US v. Windsor that the “Defense of Marriage Act” is unconstitutional.  (The majority’s reasoning in that case—that Section 3 of DOMA amounted to “a deprivation of the liberty of the person protected by the Fifth Amendment”—is something I will return to below.  For now, I will focus simply on whether the US government “ought to be in the business of” differentiating between marriages.)

Well, in a nutshell, my focus should be enough to elucidate the syllogism underlying my argument:

Major premise: The US Federal Government has not historically differentiated between different marriages.

Minor premise: A government should not make a differentiation that has not been made traditionally.

Conclusion: The US Federal Government should not make a differentiation between different marriages.

Note that, at least to my (and wikipedia’s) knowledge, the Federal Government never enacted an anti-miscegenation law, the nearest analogy in my opinion.  Thus, while the Federal Government has distinguished (and still distinguishes) between the married, unmarried, and widowed, the fact is that federal law has traditionally adopted a pretty fittingly (Gertrude) Steinian position vis-a-vis marriage:

A marriage is a marriage is a marriage…

Thus, the Supreme Court struck a (in my opinion, well overdue, albeit for additional reasons) decisive blow in favor of conservatism in its ruling in Windsor.  Simply put, laws should in general be few and, more importantly, avoid whenever possible making new distinctions between people and/or actions.  (There are a couple of arguments about why I believe this, including the undesirability of providing overly powered incentives.  For example, in this case, why should society attempt to provide a differential incentive to a man or woman who thinks about marriage to marry someone of the opposite gender?  More subtly, why should the federal government tell states that they need not think about providing such differential incentives?  Finally, why should the government provide differential incentives to “get married,” as opposed to more directly focusing on providing incentives for “being a responsible adult and taking care of your children?” But that is for another post.)

Before concluding, I’ll comment briefly on what I think—as a certified non-lawyer—is the “best” reasoning for overturning the statute.  The full faith and credit clause of the Constitution (Article IV, Section 1; henceforth FFCC) is, in my opinion, quite clear.  (As is, to be fair, the clause’s essential reiteration in federal law.)  In a nutshell, the clause states that each State must recognize as legally binding the

“public acts, records, and judicial proceedings of every other state.”

So, the question here is one of degree.  Fast-forwarding, it is reasonable and true that the FFCC does not say that each State must recognize/enforce the laws of every other state.   Why?  Well, because it doesn’t, and because it would be ridiculous if it did.  Accordingly, if one state allows its clerks to issue marriage licenses to same-sex couples, then this does not mean that any other state need issue such marriage licenses.  We see differences like this everywhere in family law: the details of acceptable grounds for divorce, alimony, community property, etc. all differ in various states.

The question here, then, is actually not about the so-called “public policy exemption” that some believe is implicit in the original text of the Constitution.  I’ll agree that such an exemption must exist: but a marriage license is not a gun license.  (Insert joke here.)  In particular, while State A does not need to recognize State B’s decision to allow you to possess a particular gun in State B, State A does not possess the power (at least under the Constitution) to assert that the gun is not yours.  To me, at least, that’s the point, and it’s a real one: I’m strongly for marriage equality (though, again, maybe for weird reasons…), but as strongly as I believe that DOMA violates the FFCC, I don’t think that the FFCC (per se) guarantees all married couples “equal rights” with respect to certain “marital actions” such as adoption (because the state has a compelling public interest in regulating adoption, for example).

I admit this because, again, I have a stronger (and in some sense “Constitution-free”) argument for why states should not differentiate between marriages based on the genders of the couple in question.  Sometimes you need a hammer, and sometimes you need a screwdriver: this more specific problem needs a screwdriver, and the FFCC is a hammer.

With that, I leave you with this.

Just So You Know, I Won’t Know: The Politics of Plausible Deniability

The IRS scandal, and in particular the handling (or mishandling) of it by President Obama’s counsel, Kathryn Ruemmler, has raised a classic question: what did the President know, and when did he know it? In my mind at least, the question is predicated on the presumption that the president ought to know everything that is going on in the federal government.  After all, he is the administrator-in-chief, “the CEO,” the boss, the decider, the frickin’ POTUS!

A key point to remember throughout “management scandals” such as this is that the federal government is not a business, and particularly not the simplistic understanding of what “a business” is.  The reality is that the effectiveness of government cannot be properly judged in an unambiguously unidimensional fashion the way a classic (again, simplistic) business can be judged by its profits.  After all, one person’s “pork” is another’s “public purpose.”

That said, the real “mathofpolitics” point of this post is that, even if the effectiveness of a government could be judged in a simple and uncontested unidimensional fashion, there are still situations in which the boss should not know everything that is going on.  Mind you, this is not a feasibility/constraints argument (such as “the boss is simply too busy to worry about that”).  Ironically, it is the opposite: there are situations in which the boss should not know some fact X precisely because the boss might care too much about X.  In other words, there are situations in which voters/shareholders (i.e., “principals”) might want their politicians/CEOs to not know something.  That is, the notion of plausible deniability is not exclusively a polite term for nefarious blame avoidance.

As I wrote a few days ago, political accountability is almost inherently an adverse selection problem. We as voters worry, and I think rightly, about the true motivations and goals of our representatives.  It is a little complicated, but consider the following simple situation to understand the importance of plausible deniability.

Suppose that a politician is charged with reviewing applications for grants.  Should the grant applications include the names of the applicants?  Well, practical concerns answer this in the affirmative: how else can you award grants if you don’t know to whom to award them?

So, supposing the applications have the names on them, should the names be removed prior to the politician’s review of the applications?  That is, should the review be “blind” with respect to the applicants’ identities?  Before you answer, “yes, it’s only fair,” think about why this is the case, because the reason is (at least) two-fold.  The more obvious of these two folds is based on the possibility that the politician is biased in favor of (say) people who share his or her political views or partisanship.  (And let’s reasonably suppose that such favoring would be bad/inefficient relative to the goals of the grant program.)

In this scenario (the heart of the adverse selection worries alluded to above), removing the names creates a “more efficient” award process because it removes something that the awarding of the grants should not be conditioned upon from the ultimate determination of the awards.  This is a direct argument for “insulating” the politician from a piece of information about what’s going on in the government (i.e., who‘s getting grants?).

The second fold is more subtle.  Suppose that the politician is unbiased.  Technically, suppose that you are almost certain that the politician is unbiased (i.e., the probability that the politician would exhibit favoritism is arbitrarily close to zero).  In simple probabilistic/expected utility terms, the argument sketched above would suggest that the gain from removing the names from the applications is also arbitrarily close to zero.  So, the argument would go, you shouldn’t “pay much” or “go to much trouble” to remove the names from the applications, right?

Wrong. While this argument is correct “at the limit”—i.e., when the politician is absolutely, positively, without a doubt known to be unbiased—it falls apart (in game-theoretic terms, “unravels”) when there is even a scintilla of a (perceived) chance that the politician is biased.  The reason for this is that the politician, if he or she knows that the voters know that the politician can see the names, needs to worry about the voters inferring something (or “updating their beliefs”) about whether the politician is actually biased.  If he or she awards “too many” awards to his or her buddies (or, if we think there are a few friends, a few enemies, and a third group of non-friends/non-enemies, if the politician awards “too few” awards to his or her enemies), then a sophisticated voter will have increased (and perhaps greatly increased) reason to believe that the politician is actually biased.

The actual effect of this dynamic—for example, whether it will lead to too many or too few awards being awarded—depends on the parameters of the problem (specifically, the net benefit of an extra award and the weight that voters think a biased politician assigns to helping friends/hurting enemies), but the key to this second fold of the argument is as follows:

An unbiased politician will (ironically) condition his or her decisions on the names of the applicants if the politician is known/believed by the voters to have had the names when making his or her decisions.
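A toy Bayesian calculation makes the unraveling concrete.  The numbers here are mine and purely illustrative: suppose voters attach only a 1% prior probability to the politician being biased, an unbiased politician funds a friend’s application half the time (on the merits), and a biased one does so 90% of the time.

```python
from math import comb

def posterior_biased(k, n=10, prior=0.01, p_unbiased=0.5, p_biased=0.9):
    """Voter's posterior that the politician is biased after seeing
    k of n awards go to the politician's friends (binomial likelihoods)."""
    like_u = comb(n, k) * p_unbiased**k * (1 - p_unbiased)**(n - k)
    like_b = comb(n, k) * p_biased**k * (1 - p_biased)**(n - k)
    return prior * like_b / (prior * like_b + (1 - prior) * like_u)

for k in (5, 8, 10):
    print(f"{k}/10 awards to friends -> Pr(biased) = {posterior_biased(k):.3f}")
```

Even from a 1% prior, ten-for-ten awards to friends push the posterior to roughly 0.78.  A politician who knows voters will run this kind of update has an incentive to shade awards away from friends even when unbiased, which is precisely the distortion that removing the names prevents.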

So, when an advisor goes to great lengths to make it known (i.e., tells others, records the facts, etc.) that he or she did not tell the politician “the names,” this is not necessarily a “cover up.”  Rather, and particularly when the names are “politicized” (i.e., the awards are coming out in a biased way), this approach can be required (even if ultimately unsuccessful) to support a “de-politicized” decision process by the politician.

Another way to think of this is as follows: suppose that the politician chooses 10 awards, and the advisor, looking at the names in another room, realizes that the politician has given awards to 10 enemies.  At first blush, one might think: well, heck, if the politician is unbiased, then he or she could simply be told this fact.  But this is not true, because if this were a possibility, then whenever we (the voters) saw 10 of the politician’s friends receive awards, we would (or should) wonder whether the advisor told the politician and the politician changed his or her mind and redid the awards.
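
The scale of this “scintilla” effect is easy to check with Bayes’ rule.  Here is a minimal Python sketch; the per-award probabilities (how often a friend wins an award under each type) are illustrative numbers I have chosen, not anything implied by the argument above:

```python
from math import comb

def posterior_biased(prior, k, n=10, p_friend_biased=0.9, p_friend_unbiased=0.5):
    """Voter's posterior that the politician is biased, after seeing
    k of n awards go to the politician's friends.  The per-award win
    probabilities for each type are illustrative assumptions."""
    like_biased = comb(n, k) * p_friend_biased**k * (1 - p_friend_biased)**(n - k)
    like_unbiased = comb(n, k) * p_friend_unbiased**k * (1 - p_friend_unbiased)**(n - k)
    return prior * like_biased / (prior * like_biased + (1 - prior) * like_unbiased)

# A voter who starts 99% sure the politician is unbiased:
print(posterior_biased(prior=0.01, k=9))  # jumps to roughly 0.29
print(posterior_biased(prior=0.01, k=5))  # stays essentially at zero
```

With these made-up numbers, nine friend-wins out of ten move a 1% prior suspicion to roughly 29%, while an even split leaves it untouched.  The scintilla never washes out, which is why even an unbiased politician who is known to have seen the names cannot afford to ignore them.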

Is this dynamic at play in the IRS scandal?  Yes.  It is at play throughout the federal government every day.  For example, the idea of the special prosecutor (or special counsel) is based entirely on it.  Viewed strategically, whether Obama should appoint a special counsel in this case is not unambiguous, as it could be akin to a Chamberlain moment vis-à-vis the House GOP, but I think I agree with Bill Keller that he should.

With that, I leave you with this.

Uninsurable Risk: Adverse Selection and the Politics of Scandals

American politics lately has been centered on SCANDAL! In particular, President Obama has been at the center of several well-publicized controversies, ranging from Benghazi to the IRS to the Department of Justice.

The politics of scandal is interesting.  For example, in none of the current scandals is there any real evidence that President Obama “did” anything directly (as opposed to, say, the crack-smoking scandal in Canada).  Rather, the questions center (appropriately) on whether the acts of subordinates reflect a general, if latent, policy or stance of the Obama Administration.  In all cases, the worry is essentially that the Obama administration is concerned with “political gain” at the expense of “good policy.”

Setting aside the difficulty of defining “good policy”—a ubiquitous problem that bedevils presidents from both parties—the “math of the politics of scandal” is similarly ubiquitous and bedeviling.  A potent view of the politics of scandal is provided by the notion of adverse selection. In particular, the adverse selection view of political scandals provides a useful understanding of why scandals are both inevitable and, more intriguingly, “important” even when the events directly associated with them are not necessarily so.

In a nutshell, adverse selection refers to situations in which there are multiple types of politicians (or any other agent), some of whom a given voter would prefer not to have in office, but the voter cannot directly distinguish between the types.  Adverse selection is perhaps most visibly demonstrated in insurance markets: the insurance company would prefer to insure low-risk individuals (e.g., good drivers), but offers a product that is necessarily particularly attractive to high-risk individuals (bad drivers).

In the conventional view of politics, individuals who seek office are ambitious: campaigning is a costly and generally risky activity, and even holding office is not particularly remunerative. Accordingly, a degree of vanity or policy motivation is almost a sine qua non for a high profile political career.

Politicians who seek policy change are accordingly viewed with some suspicion: in order to secure public support for a new policy, a politician will seek to persuade voters that the policy is in their interest(s).  However, the costliness of campaigning and our belief that some politicians are accordingly driven by vanity or personal (rather than, or in addition to, “altruistic”) policy motivations imply that we view many such persuasion attempts with skepticism.  This is the first-order effect of our awareness of the adverse selection problem: if there were no adverse selection problem, then we could rightly defer to the politician’s proposal.  (This protomodel can serve as a microfoundation for Richard Fenno’s famous “Home Style” argument: politicians seek to credibly emulate their constituents so as to allay their constituents’ suspicion that the adverse selection problem applies to them.)

The “politics of scandal” is a second-order effect of adverse selection, in the sense that scandals, and the various courses they tend to run (e.g., slow burn, quick flame-out, absolute barnburner), can be thought of as decentralized audits (or, perhaps, “sniff tests”) that politicians face from time to time.  If there were no adverse selection problem, a credible reform of consulate security would (or theoretically should) “end” the Benghazi scandal.  Similarly, if there were no adverse selection problem, replacing Michael Brown at FEMA would have quieted the backlash against President Bush following Hurricane Katrina.

So, what does the math of adverse selection tell us about scandal?  Well, the details of the model of course matter a lot, but a handful of generalizations are pretty robust.

  1. Scandals will be prolonged when the politician is already suspected of being a “bad type” and, furthermore, the scandal will be prolonged by, and promoted to, those voters who are already suspicious of the politician.  (Call it the Kanye effect?)
  2. Admitting the failure and taking the blame for it can (will?) end the scandal.  This is because, in the typical adverse selection setting, the scandal is informative precisely because the politician is actively trying to conceal his or her type.  (Definitely call this the “Bay of Pigs/Janet Reno” effect.)
  3. Scandals will be prolonged when the act(s) in question have a high probability of revealing new information about the politician.  That is, a scandal that strongly contradicts closely held beliefs of the politician’s supporters about the “true nature” of the politician (e.g., the AP subpoena scandal for Obama or the Rush Limbaugh prescription drug scandal) will be prolonged precisely because those opposed to the politician have a greater incentive to push, dig, and promote it.  Conversely, it is unclear that Obama would be (politically) harmed even if it came out that he personally audited and harangued tea party 501(c)(3) groups.  (Taking a similar example from the past, call this the “Dick Cheney’s Secret Task Force” effect.)  (Also, as a mathofpolitics point, note that this is related to—in the sense of being the mathematical dual of—the “It Takes a Nixon to Go to China” class of signaling models.)
  4. Scandals are more prolonged when it is more plausible that the politician knew about the problem before it happened.  Of course, the famous saying that “it’s not the crime, it’s the cover-up” seems to suggest otherwise.  In reality, the saying makes exactly this point.  The reason the cover-up was so important (as, similarly though less spectacularly, in the Whitewater/Lewinsky impeachment fiasco with President Clinton) is that the cover-up activities, the obfuscating, the dodging, are all consistent with a “bad type” reluctant to reveal his or her type.
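
Generalization 3 in the list above has a tidy quantitative version: the expected movement in a voter’s belief from one scandal “signal” is largest at intermediate priors and vanishes when the voter is already nearly certain either way.  A minimal Python sketch, with illustrative signal probabilities of my choosing (an 80% chance that a “bad type” generates the damning signal versus 20% for a “good type”):

```python
def expected_belief_shift(prior, p_sig_bad=0.8, p_sig_good=0.2):
    """Expected |posterior - prior| for a voter about to observe one
    binary scandal signal.  Signal probabilities are illustrative."""
    p_sig = prior * p_sig_bad + (1 - prior) * p_sig_good
    post_if_sig = prior * p_sig_bad / p_sig           # Bayes' rule, signal seen
    post_if_not = prior * (1 - p_sig_bad) / (1 - p_sig)  # Bayes' rule, no signal
    return p_sig * abs(post_if_sig - prior) + (1 - p_sig) * abs(post_if_not - prior)

for prior in (0.01, 0.25, 0.5, 0.75, 0.99):
    print(prior, round(expected_belief_shift(prior), 3))
```

The expected shift works out to 2 · prior · (1 − prior) · |0.8 − 0.2|: maximal at a prior of one-half, essentially zero at the extremes.  A scandal aimed at voters whose minds are already made up has, in this precise sense, nothing left to reveal.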

In the practical terms of the three aforementioned Obama imbroglios, these generalities suggest to me the following rough-and-ready-and-completely-seat-of-the-pants ranking of their “seriousness” from least to most serious:

  1. (Least) Benghazi,
  2. IRS,
  3. (Most) AP subpoenas.

This post is already arguably too long, but I’ll quickly list the empirical characteristics that helped me make this list in light of the four generalities above:

  1. Attorney General Holder is still in office and presumably a close confidant of President Obama.  Note that Attorneys General are Ground Zero for modern Cabinet-level scandals.  (Sorry, Department of the Interior: the heyday of the 19th and early 20th centuries is no more but, that said, never forget the now-disemboweled Minerals Management Service and the Deepwater Horizon disaster.)
  2. The AP scandal is arguably distinctly “unDemocratic”—particularly in light of its obvious analogies with the various Nixon scandals (also applies to the IRS scandal even more directly, though the IRS was significantly reformed after Nixon’s fiascos).
  3. It’s not clear how Obama can come out “against” the AP scandal (for example, see this).  This is a bit complicated, but it is a thin needle to thread to be against the AP subpoenas and also against potentially-national-security-compromising leaks of classified information.

So, Obama rightly has his work cut out for him.  If it were me, which it most definitely ain’t, I don’t know what I’d do.  Probably obfuscate, wait for the public to lose interest in subpoenas and journalists, and pray for a happy, healthy, and well-covered royal birth.

With that, I leave you with this.

The Impermissibility of Permission Structures

The idea of a “permission structure” has attracted some attention this week.  The basic idea of this phrase, it seems, is as follows: A doesn’t trust B to do some activity X because A fears that B does not have A’s best interests at heart in the “realm” of X.

A good example of this type of distrust is when you get in a car accident.  Both you and your car insurance company are faced with the difficulty of who should determine what “should be fixed” under your policy. You don’t want the insurance company to determine this, because they have an incentive to minimize costs and, accordingly, denote too few things as “needing to be fixed.”  On the flip side, your insurance company doesn’t want to let YOU determine this, because suddenly that “three martini lunch bumper ding” you got 6 months ago is deemed “covered” and repaired on the insurance company’s dime.

The point I want to make is that, at least in a very specific sense, permission structures can (almost) never solve the problem they are purportedly designed to solve.  In a nutshell, think of a simple model with two types of politicians: one type is “faithful” and the other type is “biased.”  To keep it simple, suppose that the faithful type will always use (say) increased tax revenues in a way that benefits you, while the biased type will spend those revenues in ways you would rightly prefer not to fund.

The idea of a permission structure is to clarify to you, the voter, when the politician is a faithful type and not a biased type.  As Obama said recently,

We’re going to try to do everything we can to create a permission structure for [Congressional Republicans] to be able to do what’s going to be best for the country.

The impossibility of “creating a permission structure” (regardless of whether it is through “third party authentication” or otherwise) is due to the use of the term “creating” (it is also doubly ironic for Obama to announce that he and his team are going to “try to do everything we can” to create one).  The math of politics here is a remarkably simple point that many analysts seem to have brushed past, and it rests on the concept of “creating” such a structure.  Suppose that a third-party authenticator could be found/created/cajoled—or even simply brought to everyone’s attention—that would lead voters to say “hey, cool—you’re the faithful type!”  Then think for a minute and ask yourself: why would the biased type not find/create/cajole such an authenticator?  Indeed, in many (but not all) situations, the biased type would have a stronger incentive to create a permission structure than the faithful type.

There’s always the possibility that when the politician is a biased type, no such authenticator could be found/created/cajoled.  But, let’s be honest, that’s a pretty knife-edge case.  (I mean, have you heard of Wayne LaPierre?)  Also, it’s at least theoretically possible that the biased type is relatively uninterested in raising your taxes (and, accordingly, has little interest in creating a permission structure).  I leave this to the side, as such a presumption describes exactly zero American voters’ beliefs about politicians.
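
The “why wouldn’t the biased type do it too?” point is itself just Bayes’ rule, and can be checked in a few lines.  In this sketch (the prior and the acquisition probabilities are illustrative), an authenticator is informative only to the extent that the biased type cannot obtain one:

```python
def posterior_faithful(prior_faithful, p_auth_faithful, p_auth_biased):
    """P(faithful | the voter sees an authenticator), by Bayes' rule.
    Acquisition probabilities for each type are illustrative."""
    num = prior_faithful * p_auth_faithful
    return num / (num + (1 - prior_faithful) * p_auth_biased)

# Separating case: only the faithful type could obtain an authenticator.
print(posterior_faithful(0.5, 1.0, 0.0))  # -> 1.0
# Pooling case: the biased type finds/creates/cajoles one too.
print(posterior_faithful(0.5, 1.0, 1.0))  # -> 0.5, no better than the prior
```

The moment the biased type can mimic, the “permission structure” carries no information at all, which is exactly the impermissibility in the title.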

Accordingly, the problem with Obama’s statement wasn’t the elitist/wonkish sound of the term, or the possibility that it strengthened a perception of him being unwilling to “knock some heads” or otherwise “lead.”  (Nonetheless, I do appreciate the irony of conservatives banging on the table saying “WHY DON’T YOU LEAD LIKE THE GUY WHO CREATED MEDICAID AND GOT BOTH THE CIVIL RIGHTS & VOTING RIGHTS ACTS PASSED!!!”)

…No, the only real problem with the statement is that Obama pointed out the man behind the curtain: many voters can’t trust “government” right now precisely because they have a strong suspicion that government is trying to fool them.  This is very sad to me for many (nonpartisan) reasons, but it illuminates the adage:

Never trust a man who says, “Trust me.”

With that, I leave you with this.