Shining A Little More Light On Transparency

Thinking more about transparency (which I just wrote about), it occurred to me that I neglected two pieces (of many) that are relevant for the point about transparency of decision-making in bodies like the Federal Open Market Committee (FOMC) in which expertise plays an important role in justifying the body’s authority.

David Stasavage and Ellen Meade made use of a great (and entirely on point) data set in their analysis of the effect of transparency on FOMC decision-making in their Economic Journal article, “Publicity of Debate and the Incentive to Dissent: Evidence from the US Federal Reserve.” They find strong evidence that, once members knew their statements were being recorded, both the content of their opinions and their individual votes on monetary decisions changed.

The general implications of this point from a theoretical perspective are nicely laid out in Stasavage’s Journal of Politics article, “Polarization and Publicity: Rethinking the Benefits of Deliberative Democracy.” Transparency can affect individual incentives, particularly among career-motivated decision-makers.  If one presumes that the decision-makers in a deliberative body are motivated to “look good” by making good decisions, and one is mostly or wholly concerned with the quality of their performance, then, in a specific sense, transparency of individual decision-makers’ opinions and votes can “only hurt” actual performance, because the decision-makers are concerned not only with the performance of their collective decisions (e.g., the actual inflation rate), but also with how their individual opinions/inputs are viewed.

Why Have Transparency At All, Then?

There are two broad categories of theoretical arguments in favor of transparency.  The first of these is screening and the second is record-keeping.

Screening. Recall that the problems with transparency sketched out above and in my previous post follow from the presumption that some or all of the decision-makers are interested in being rewarded and/or retained by voters/Congress/the president or whomever else might employ them in the future.  This “career-concerns model” of course implies that somebody else is going to be considering whether to retain, hire, or promote these decision-makers again in the future.  I’ll leave the details to the side for now and simply note that, if the “next job” for which they will be considered is sufficiently important relative to the current job, the ability to possibly infer something about the relative expertise or abilities of the decision-makers might be sufficiently valuable to warrant introducing some “noise” into the current decision-making.[1]

Record-Keeping. Nobody lives forever.  Many decision-making bodies that have authority because it is believed that expert decision-making can and should be used to set policy exist for many years, with decision-makers rotating in and out.  In such situations, because one is leveraging expertise as a justification, one might think that past experience can inform future decisions.  Steve Callander has recently published several excellent articles (here, here, and here) that offer a good starting point (unexplored as far as I know) for us to consider the types of situations in which transparency can be helpful by allowing future decision-makers not only to observe past performance, but also to learn how policy decisions actually affect outcomes by observing the details of the decisions that produced those outcomes.

Note that this argument, as opposed to the screening argument above, leaves room for one to think meaningfully about the proper “lag” or delay of transparency.  As the evolution of FOMC policy illustrates, many transparency policies involve a delay between decision and publication.[2] Interesting aspects of the policy process, such as how much information is conveyed by more recent versus older decisions, would presumably play a role in the final derivation of how much transparency is optimal.

Conclusions. If there’s any grand conclusion from this post, it’s that I think there are a lot of important topics left in the study of transparency, and as social science theorists we should start thinking about getting closer to the “policy technology side” of the decision(s) being made.  Abstract static models provide a lot of very key and portable insights.  But they can take us only so far.


[1] Of course, if transparency in the current decision process leads every decision-maker to “pool” and do the same thing, regardless of their type, then one can’t infer anything about the decision-makers from their decision, thereby obviating this argument for transparency.  This will be the case when the decision-makers are sufficiently motivated to “get hired in the next job” relative to their innate preference to “make the right decision” in the current matter at hand.  In the FOMC, this would be an FOMC member who cares a lot more about becoming (say) Fed Chairman someday than he or she does about getting monetary policy “right” today.

[2] This type of argument, combined with career concerns, would also allow us to think in more detail about to whom the decisions ought to be made transparent and from whom this information should be withheld.


Why Separate When You Can…Lustrate!?!

Today’s post by Maria Popova and Vincent Post, “What is lustration and is it a good idea for Ukraine to adopt it?” made me think about the difference between what I will call policy and discretionary purges.

It is not easy for a nation to fix itself after a period of authoritarian rule.  Many individuals actually compose the government, and it is not clear that they share the ideals of the new government.  Even if they do, the worries about career concerns and adverse selection that I raised a few minutes ago here suggest that changing behaviors might be hard even if the vast majority of bureaucrats/judges/legislators agree with democratic norms, the rule of law, the relative inelegance of bashing your opponents’ heads in, etc.

So, one practical approach to fixing an institution in the sense of massively and quickly redirecting its aggregate behavior (as produced by the panoply of individual decision-makers’ choices) is what we might call wiping the slate clean.  Clear the decks, Ctrl-Alt-Delete the whole shebang.

Another way is to find the people who are the problem(s) and eliminate them.  The prospect of removal might, in equilibrium, convert some who were previously scofflaws into temperate and sage clerks, after all.

I want to make a quick point.  Removal of officials is practically hard (because those who fear removal will hide evidence and otherwise obstruct the Remover’s attempts to ferret them out).  But, more intriguingly, removal of officials is politically hard…for the Remover. In cases like the Ukraine, this isn’t because removal of any official is likely to be unpopular (it’s probably the reverse…just ask Vergniaud). Rather, the problem is one of adverse selection in terms of those who are judging the motivations and trying to predict the future actions of the Remover.[1]

To think about this clearly and quickly, consider the baseline case where the Remover “cleans house,” removing everyone, and then consider the deviation from this in which the Remover “forgives” one official, who I will call “Official X.”[2]

What should we infer?  Does the Remover really have information that exculpates Official X?  Or perhaps Official X paid a bribe?  Or perhaps Official X is blackmailing the Remover? Or perhaps… You can see where this is going.  The Remover is at risk of being suspected of being or doing something untoward if he or she has and uses any discretion.  Accordingly, the Remover would prefer not to have discretion.
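To see the audience’s inference problem concretely, here is a toy Bayes’-rule sketch. All type labels and probabilities are my own illustrative assumptions, not anything from the post: the audience is unsure whether the Remover is honest or corrupt, and forgiving Official X is an action the corrupt type takes more often.

```python
# Toy Bayesian sketch of the audience's inference problem (all numbers are
# illustrative assumptions). The Remover is either honest or corrupt; forgiving
# "Official X" is more likely for a corrupt type, so the mere act of forgiving
# drags down the audience's belief that the Remover is honest.
def posterior_honest(prior_honest, p_forgive_if_honest, p_forgive_if_corrupt):
    """Bayes' rule: P(honest | Official X was forgiven)."""
    num = prior_honest * p_forgive_if_honest
    den = num + (1 - prior_honest) * p_forgive_if_corrupt
    return num / den

for prior in (0.9, 0.5):  # a confident audience vs. an uncertain one
    post = posterior_honest(prior, p_forgive_if_honest=0.2, p_forgive_if_corrupt=0.8)
    print(f"prior {prior:.1f} -> posterior {post:.2f} (drop of {prior - post:.2f})")
```

Note that the uncertain audience (prior 0.5) revises its belief downward by more than the confident one, which is the same force footnote [3] describes.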

The same logic applies, obviously, to a plan of “well, let the Remover prosecute those who `should’ be removed.”  Unless the Remover’s hands are tied with respect to whom to prosecute, people will always have reason to wonder “well, Official Y got prosecuted….but not Official X….”

Is Lustration a good idea?  I don’t know.  And I will mention that Popova and Post are making a different point, which is really about the extent and severity of lustration.  My point here is just that “statutory/mandated purges” are very different from “executive/discretionary purges” and, somewhat counterintuitively, it may very well be in the interests of “the Remover” to have his or her discretion taken away.[3]


[1] Note that, as always, this is equivalent to a problem of credible commitment on the part of the Remover to “not use his/her biases” when deciding whom to remove.

[2] The logic holds generally (i.e., when the Remover forgives/pardons more than one official), but this focuses our attention in a nice way.

[3] I’m trying to keep these short, but I’ll note that this incentive is stronger for a Remover who believes that the external audiences who are trying to judge the Remover’s information/character/etc. are really uncertain about the Remover’s information/character/etc.  This is because high levels of initial uncertainty imply that the discretionary actions of the Remover will have a larger impact on the beliefs held by the members of the audience, and the adverse implications of discretion on these beliefs is the justification for the Remover wanting to limit his or her own discretion.

How Transparency Could Harm You, Me, and the FOMC

Sarah Binder, as usual, provides excellent insights into a difficult political problem in this post discussing the potential political and economic pitfalls of imposing greater transparency on the Federal Open Market Committee (FOMC), which essentially directs the Federal Reserve’s active participation in the economy, thereby having the most direct control over short-term interest rates and, accordingly, day-to-day “monetary policy” in the United States.

The FOMC is a really big deal.  As Binder notes, the importance of the committee accordingly makes both economic and political observers keen to understand and forecast what it will do in the future.  By deciding over the past decade or so to publish more and more detailed data about the views of the FOMC members,[1] the Fed has increased the transparency of the information it receives.

This seems like a good idea, right?

Well, social science theories in both economics and political science acknowledge the importance of whether the FOMC’s behavior is predictable or not.  On the economics side, predictability of monetary policy (at least in terms of its outputs, such as inflation) is generally perceived to be a good thing, because it allows investors to focus more attention on the “fundamentals” of an asset’s value, as opposed to paying a lot of attention to purely nominal phenomena and/or inefficiently delaying/accelerating investment and consumption decisions.  In other words, while a low, fixed inflation rate is good, variation in the inflation rate is inevitable, and if this variation can be reasonably accurately forecast, this is a “second-best” outcome.

On the political science side of things, a traditional argument for transparency (in addition to the one above) is that it fosters legitimacy and/or public confidence in the Fed, and thereby makes the Fed a more credible “political actor.”  A more technical description of this is that transparency alleviates an adverse selection problem between the Fed and the public.  The Fed knows something that the public/Congress/Presidents want to know, and—in some situations—everyone would be better off if the Fed could somehow just reveal this information to the public/Congress/Presidents.

Solving this kind of problem is very tricky in practice, because a real solution requires that the Fed not be responsible for releasing the information.  And there are some interesting things in the FOMC’s structure (it is composed of multiple members with various overlapping terms) and in the evolution of its transparency policies.

Being the contrarian that I am, I wanted to note two arguments against too much transparency.  I don’t think these are strong enough to justify total opacity, of course, but I do believe they’re strong enough to serve as cautionary tales regarding total transparency.

Each of these arguments revolves around an additional potential instantiation of adverse selection.  The first regards the motives of the individual members of the FOMC.  When decision-makers are career-oriented (they want to be reappointed/promoted/rewarded for their ability/performance, etc.), too much transparency about the decision-makers’ actual decisions (i.e., votes and personal positions on monetary policy in the FOMC meetings) can induce conformism (or “pooling”) by the agents, such that their policy decisions become suboptimally unresponsive.  For example, everybody might start acting as an inflation hawk would so as to increase the perception of their hawkishness (a worry indirectly indicated in Yellen’s comments as discussed by Binder).[2]
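As a stylized illustration of this conformism logic (the payoff weights and numbers here are my own, purely for exposition), consider a member who trades off policy accuracy against the reputational payoff of looking hawkish under transparency:

```python
# Illustrative sketch (stylized numbers of my own) of the "pooling" logic:
# a member's payoff mixes policy accuracy with the reputational value of
# looking hawkish. Past some career weight, even a member whose private
# signal says "dove" votes hawk, and votes stop conveying information.
def vote(signal_hawkish_prob, career_weight, hawk_reputation_bonus=1.0):
    """Choose 'hawk' or 'dove' to maximize a weighted sum of expected
    policy accuracy and reputational payoff under transparency."""
    accuracy = {"hawk": signal_hawkish_prob, "dove": 1 - signal_hawkish_prob}
    payoff = {a: (1 - career_weight) * accuracy[a]
                 + career_weight * (hawk_reputation_bonus if a == "hawk" else 0.0)
              for a in ("hawk", "dove")}
    return max(payoff, key=payoff.get)

# A member whose private signal says the right call is dovish
# (probability the hawkish action is correct = 0.3):
for w in (0.1, 0.6):
    print(f"career weight {w}: votes {vote(0.3, w)}")
```

With a low career weight the member follows the dovish signal; with a high one the member votes hawk regardless, which is exactly the suboptimal unresponsiveness described above.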

The second argument involves the incentives of those who make the individual decisions that the Fed observes.  In particular, the Fed (and every regulatory agency) collects lots of data about the behaviors of firms and individuals.  In some cases, if (say) major firms (which the Fed is responsible for regulating) have access to the information that the Fed will ultimately use to make policy, the incentives of these firms to make decisions that are individually suboptimal in order to try to manipulate the Fed’s subsequent decision-making will be exacerbated.  That is, transparency of the Fed’s information can increase the incentives of major banks (and, arguably, even other regulators) to choose their own actions in ways that try to obscure their own private information.  When this happens, you have a double-whammy: (1) the individual firms’ decisions are not optimal, and (2) the Fed does not glean as much information about the real state of the economy from the decisions of these firms.

Sean Gailmard and I make this point (coincidentally, with an empirical application to the Financial Industry Regulatory Authority (FINRA)) in our recent working paper, “Giving Advice vs. Making Decisions: Transparency, Information, and Delegation.”

Conclusion. I definitely don’t know what the “right” policy for the Fed is without further thought.  But the supposition that “increased transparency is unambiguously good” is at odds with at least two theoretical arguments. Accordingly, it might not be nefarious motives that lead policymakers to call for discussion of “how much transparency is too much?”


[1] See this description of the recent evolution of Fed transparency and, for a little historical context, see this report describing the 2007 change.

[2] Note that this argument implies that observing the actions of the decision-maker(s) can be bad, but it does not necessarily imply that observing what happens from those decisions (e.g., the actual inflation rate) can be bad. (Good citations on this point are Prat (2005) (ungated working paper here) and Levy (2007) (ungated working paper here), and my colleague Justin Fox has produced multiple excellent theoretical studies centering on this question (here, here (with Ken Shotts), and here (with Richard Van Weelden)).)

Mind The Gap: The Wages of Aggregation, Evaluation, and Conflict

For whatever reason, I’m on a “data is complicated” kick.

So, this story is one of many today discussing the gender gap in wages in ‘Merica. In a nutshell, President Obama pointed out “that women make, on average, only 77 cents for every dollar that a man earns.”  Critics (most notably the American Enterprise Institute) immediately pointed out that “the median salary for female employees is $65,000 — nearly $9,000 less than the median for men.”

There are LOTS of angles on this thorny issue.  I want to raise the specter of social choice theory as a mechanism by which we can understand why this debate goes around and around.[1] The basic idea is that aggregation of data involves simplification, which involves assumptions.  Because there are various assumptions one can make (properly driven by the goal of one’s aggregation), one can aggregate the same data and reach different conclusions/prescriptions.

To keep it really simple, consider the following toy example.  Suppose that a manager currently has one employee, who happens to be a man, who makes $65,000/year, and the manager has to fill three positions, A, B, and C.  Furthermore, suppose that the manager has a unique pair of equally qualified male and female applicants for each of these three positions.  Finally, suppose that position A is paid $70,000/year, position B is paid $60,000/year, and position C is paid $45,000/year.

Now consider two criteria:

(1) eliminate/minimize the gender gap in terms of average wages,[2] and
(2) minimize the difference between proportions of male and female employees.

How would the manager most faithfully fulfill criterion (1)?  Well, if you hire the woman for position B and the two men for positions A and C, then the average wage of women (i.e., the woman’s wage) is $60,000, and the average of the three men’s wages (the existing employee’s and the two new employees’) is also $60,000.  This is clearly the minimum achievable gap.[3]

How about criterion (2)?  Well, obviously, given that one man is already employed, the manager should hire two women.  If the manager satisfies criterion (2) with an eye toward criterion (1), then the manager will hire a man for position B and women for positions A and C.

Note that the two criteria, each of which has been and will be used as benchmarks for equality in the workplace (and elsewhere), suggest exactly and inextricably opposed prescriptions for the manager.

In other words, the manager is between a rock and a hard place: if the manager faithfully pursues one of the criteria, the manager will inherently be subject to criticism/attack based on the other.

Note that this is not “chaos”: the manager, if faithful, must hire no more than 2 of either gender: hiring three men or three women is incompatible with either of these criteria.[4] But the fact remains—and this is a “theory meets data” point—one can easily (so easily, in fact, that one might not even realize it) impose an impossible goal on an agent if one uses what I’ll call “data reduction techniques/criteria” to evaluate the agent’s performance.
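For readers who want to check the arithmetic, here is a short script (a direct transcription of the toy example above, nothing added) that enumerates all eight hiring plans and scores each one against the two criteria:

```python
# Enumerate all eight ways to fill positions A, B, C with a man or a woman,
# given one incumbent man earning $65K, and score each plan on (1) the gap in
# average wages and (2) the gap in head counts. Figures are from the toy example.
from itertools import product

SALARIES = {"A": 70_000, "B": 60_000, "C": 45_000}
INCUMBENT_MEN = [65_000]  # the pre-existing male employee

def evaluate(plan):
    """plan maps each position to 'M' or 'F'; returns (wage gap, head-count gap)."""
    men = INCUMBENT_MEN + [s for p, s in SALARIES.items() if plan[p] == "M"]
    women = [s for p, s in SALARIES.items() if plan[p] == "F"]
    mean = lambda xs: sum(xs) / len(xs) if xs else 0  # $0 if none employed (fn. 3)
    return abs(mean(men) - mean(women)), abs(len(men) - len(women))

for genders in product("MF", repeat=3):
    plan = dict(zip("ABC", genders))
    wage_gap, count_gap = evaluate(plan)
    print(plan, f"wage gap ${wage_gap:,.0f}, head-count gap {count_gap}")
```

The output confirms the bind: the unique plan with a zero wage gap hires two men and one woman (head-count gap of 2), while the best any gender-balanced plan can do is a $5,000 wage gap.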

In other words: real world politics is inherently multidimensional.  When we ask for simple orderings of multidimensional phenomena (however defined, and of whatever phenomena), we are in the realm of Arrow’s Impossibility Theorem.


[1] This argument is made in a more general way in my forthcoming book with Maggie Penn, available soon (really!) here: Social Choice and Legitimacy: The Possibilities of Impossibility.

[2] Here, by “average,” I mean arithmetic mean.  Because this example is so small, there is no real difference between mean, median, and mode in terms of how one measures the gender gap.  If these differ in practice, then the problem highlighted here is merely (and sometimes boldly) exacerbated.

[3] To be clear, I am setting aside the issue of “how much does a gender make if none of that gender is employed?” While technically undefined, I think $0 is the most common sense answer, and I’ll leave it at that.  

[4] Of course, as Maggie Penn and I discuss in our aforementioned book, there are many criteria.  Our argument, and that presented in this post, is actually strengthened by arbitrarily delimiting the scope of admissible criteria.

It’s Better To Fight When You Can Win, Or At Least Look Like You Did

In this post, Larry Bartels provocatively claims that Rich People Rule! In a nutshell, Bartels argues (correctly) that more and more political scientists are producing smart, independent analyses of the determinants of public policy, one of which, by Kalla and Broockman, I have already opined on (“Donation Discrimination Denotes Deliverance of Democracy“).

Bartels’s motivation for bringing this up is essentially this quote from this forthcoming article by Martin Gilens & Benjamin Page:

economic elites and organized groups representing business interests have substantial independent impacts on U.S. government policy, while mass-based interest groups and average citizens have little or no independent influence.

The Gilens and Page article is an interesting read, if only because the data on which it is based are very impressive.

Unfortunately, the theory behind the work is not nearly as strong.  In particular, the study is based on comparing observed position-taking by interest groups with (solicited) individual feedback on various surveys.[1]  So what?  Well, there is at least one potential problem, containing two sub-points, the combination of which I’ll call the Pick Your Battles Hypothesis.

Pick your battles.  Interest groups do not randomly announce positions on public issues.  Rather, any interest group of political interest presumably attempts to influence public policy through strategic choices of not only what to say, but when to bother saying anything at all.  While the mass public opinion data were presumably gathered by pollsters in ways that at least somewhat minimize individuals’ costs of providing their opinions, the interest groups had to pay the direct and indirect costs of getting their message(s) out. There are two sub-points here, one more theoretically interesting and the other presumably more empirically relevant.

Sub-point 1: Pick a winner. The theoretically interesting sub-point is that an organized “interest group” is the agent of its donors and supporters.  To the degree that donations and support are conditioned on the perceived effectiveness of the interest group, (the leaders/decision-makers of) an interest group will—à la standard principal-agent theory—have a greater incentive to pay the costs of taking a public position when they perceive that they are likely to “win.”  If there is such a selection effect at work, then the measured correlation between policy and interest groups’ positions will be overestimated.
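A quick simulation can show how large this selection effect can be. The setup is entirely my own stylized assumption, not Gilens and Page’s: positions have zero causal effect on outcomes, and a group announces support only when its private forecast says it will win.

```python
# Simulation of the selection effect: interest-group positions have NO causal
# effect on outcomes, but a group only announces support when its private
# forecast (80% accurate here, an arbitrary choice) predicts the policy will
# pass. Measured agreement between announcements and outcomes is then inflated
# far above the 50% a null effect would imply.
import random

random.seed(0)
announced, agreed = 0, 0
for _ in range(10_000):
    outcome = random.random() < 0.5            # policy passes or not, a coin flip
    forecast_correct = random.random() < 0.8   # quality of the group's forecast
    predicted_pass = outcome if forecast_correct else not outcome
    if predicted_pass:  # announce support only when a win is forecast
        announced += 1
        agreed += outcome
print(f"share of announcements on the winning side: {agreed / announced:.2f}")
```

Despite a true effect of zero, roughly 80% of announcements end up on the winning side, purely because announcing is conditioned on the forecast.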

Sub-point 2: Only Fight The Fights That Can Be Won. The more empirically relevant sub-point is that, even if one thinks that interest groups don’t fear being on the losing side of a public debate, the simple and cold reality of instrumental rationality is that, if making an announcement is costly, any interest group should make an announcement only when the announcement can actually affect something.  Moving quickly here, this suggests that interest groups should take positions when they believe decision-makers might be persuaded.  To the degree that these decision-makers are presumably at least somewhat responsive to public opinion (however measured), instrumentally rational (and probably asymmetrically informed) interest groups will be more likely to make announcements that run against relatively strong public opinion than to join the chorus.[2]  If this is happening, the question of whether interest groups have too much influence depends on whether you think they have better or worse information, and on the types of policies on which their views are influential.

Conclusion. As political scientists know, observational data is tricky.  This is particularly true when it is the result of costly individual effort in pursuit of policy (and other) goals.  I really like Gilens and Page’s paper—the realistic point of scholarly inquiry is not to be right, it’s to get ever closer to being right, and this is even more true with directly policy-relevant work.  I just think that great data should be combined with at least a modicum of (micro-founded, individualistic) theoretical argument.  Without that, we might think umbrellas cause rain, hiring a lawyer causes you to go to jail, or chemotherapy causes death from cancer.  In other words, the analyst has simultaneously more data and less information than those he or she studies.


[1] Gilens and Page also compare responsiveness to mass opinions of economic elites (i.e., those in the 90th percentile in income) versus those of the median earner.  While I have some issues with this comparison (for example, I imagine getting a representative sample of the 90th income percentile is a bit different than getting one of the median income earner and, as Gilens and Page acknowledge, the information held by and incentives of the rich are plausibly very different from those of median earners), I will focus on the interest group component of the analysis in this post.

[2]  That this is not just hypothetical crazy talk is indicated by the relatively strong negative correlation (-.10***) between the positions of business interest groups and the average citizen’s preferences.


My Ignorance Provokes Me: I know Where Ukraine is and I Still Want to Fight

It’s been too long since I prattled into cyberspace.  This Monkey Cage post by Kyle Dropp, Joshua D. Kertzer & Thomas Zeitzoff caught my contrarian attention.  In a nutshell, it says that those who are less informed about the location of Ukraine are more likely to support US military intervention.  This is an intriguing and policy-relevant finding from a smart design.  That said, the post’s conclusion is summarized as: “the further our respondents thought that Ukraine was from its actual location, the more they wanted the U.S. to intervene militarily.”  The implication from the post (inferred by me, but also by several others, I aver) is that this is an indication of irrationality.  I hate to spoil the surprise, but I am going to offer a rationalization for this apparent disconnect.

First, however, the study’s methodology—very cool in many ways—caught my eye, only because (in my eyes) the post’s authors imbue the measure with too much validity with respect to the subjects’ “knowledge.”  Specifically, the study asked people to click on a map where they think Ukraine is located.  The study then measures the distance between the click and Ukraine.[1]  Then Dropp, Kertzer, & Zeitzoff state that this

…distance enables us to measure accuracy continuously: People who believe Ukraine is in Eastern Europe clearly are more informed than those who believe it is in Brazil or in the Indian Ocean.

I disagree with the strongest interpretation of this statement.  While I agree that people who believe Ukraine is in Eastern Europe are probably (not clearly, because some might guess/click randomly on Eastern Europe, too) more informed than those who “believe it is in Brazil or in the Indian Ocean,” I would actually say that the example chosen by the authors suggests that distance is not the right metric.  For example, someone who thinks Ukraine is in Brazil is clearly wrong about political geography, but someone who thinks that Ukraine is located in the middle of an ocean is clearly wrong about plain-ole geography.

More subtly, it’s not clear that the “distance away from Ukraine” is a good measure of lack of knowledge.  In a nutshell, I aver that there are two types of people in the world: those who know where Ukraine is and those who do not.  Distinguishing between those who do not by the distance of their “miss” is just introducing measurement error, because (by supposition/definition) they are guessing.  That is, the true distance of miss is not necessarily indicative of knowledge or lack thereof.  Rather, if you don’t know where Ukraine is, then you don’t know where it is.

Moving on quickly, I will say the following.  It is not clear at all that not knowing where a conflict is should (in the sense of rationality) make one less likely to favor intervention. The key point is that if anyone is aware of the Crimea/Ukraine crisis, they probably know[2] that there is military action.  This isn’t Sochi, after all.

So, I put two thought experiments out there, and then off to the rest of the night go I.

First, suppose someone comes up to you and says, “there’s a fire in your house,” and then rudely runs off, leaving you ignorant of where the fire is.  What would you do…call the fire department, or run through the house looking for the fire?  I assert that either response is rational, depending on other covariates (such as how much you are insured for, whether you live in an igloo, and if you have a special room you typically freebase in).  The principal determinant in many situations is the IMPORTANCE OF PUTTING OUT THE FIRE, not the cost of accidentally dousing one too many rooms with water.

Second, the Ukraine is not quite on the opposite side of the world from the US, but it’s pretty darn close (Google Maps tells me it is a 15 hour flight from St. Louis).  So, let’s think about what “clicking far from Ukraine when guessing where Ukraine is” implies about the (at least in the post) unaddressed question of “clicking close to the United States when guessing where Ukraine is.”  This picture demonstrates where each US survey respondent clicked when asked to locate Ukraine.  Focus on the misses, because these are the ones that will drive any correlation between distance of inaccuracy and support for foreign intervention. (Because distances are bounded below by zero and a lot of people got Ukraine basically right.)

There are a lot of clicks in Greenland, Canada, and Alaska. I am going to leave now, but the general rule is that the elliptic geometry of the globe (and the fact that the Ukraine is not inside the United States[3]) implies that clicking farther away from Ukraine means that you are, with some positive (and in this case, significant) probability, clicking closer to the United States.
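To make the geometric point concrete, here is a small haversine (great-circle) calculation; the coordinates are rough centroids I chose for illustration, not the study’s data:

```python
# Great-circle (haversine) distances showing that a guess placed far from
# Ukraine can simultaneously be close to the United States. All coordinates
# are rough centroids chosen for illustration.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dp / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

UKRAINE = (49.0, 32.0)   # approximate centroid
USA = (39.8, -98.6)      # approximate contiguous-US centroid

# Some hypothetical "click" locations like those described in the post.
clicks = {"Eastern Europe": (50.0, 25.0), "Greenland": (72.0, -40.0),
          "central Canada": (56.0, -106.0)}
for name, (lat, lon) in clicks.items():
    d_ukr = haversine(lat, lon, *UKRAINE)
    d_usa = haversine(lat, lon, *USA)
    print(f"{name:>15}: {d_ukr:6.0f} km from Ukraine, {d_usa:6.0f} km from the US")
```

A click in central Canada, for instance, comes out roughly 7,700 km from Ukraine but under 2,000 km from the contiguous-US centroid, so among North American misses, “farther from Ukraine” and “closer to the US” go hand in hand.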

So, suppose that the study said “those who think the Ukraine is located close to the US are more likely to support military intervention to stem Russian expansion?”  Would that be surprising?  Would that make you think voters are irrational?

Look, people have limited time and aren’t asked to make foreign policy decisions very often (i.e., ever).  So, let’s stop picking on them.  It is elitist, and it offers nothing other than a headline/tweet that draws elitists (yes, like me) to your webpage.

Also, let’s not forget that, as far as I know, there is no chance in the current situation of the United States government intervening in the Ukraine. So, even if voters are irrational, maybe that’s meta: we have an indirect democracy for a reason, perhaps?


[1] If I was going to get really in the weeds, I would raise the question of which metric is used to measure distance between a point and a convex shape with nonempty interior.  There are a lot of sensible ones. And, indeed, the fact that there isn’t an unambiguously correct one is actually an instantiation of Arrow’s theorem.  Think about that for a second.  And then thank me for not prattling on more about that.  [That's called constructing the counterfactual. -Ed.]

[2] And, as the authors state, “two-thirds of Americans have reported following the situation at least ‘somewhat closely.’”

[3] Just think about conducting this same survey with a conflict in Georgia.  Far-fetched, right?  HAHAHAHA

Donation Discrimination Denotes Deliverance of Democracy

A recent paper by Joshua Kalla & David Broockman has attracted some attention (for example, in this Washington Post story, this Monkey Cage post, and this excellent, reflective post on Mischiefs of Faction by Jennifer Victor).  In a nutshell, the paper reports the results of a well-designed field experiment that provides evidence that donations to a Member of Congress “open doors” in the sense that being a donor promotes access to more high-ranking officials in the Member’s staff, including possibly the Member of Congress himself or herself.

I am not going to critique the study. Jennifer does that well in several ways.  Unrelatedly, I am also not going to doubt (or cast doubt upon) the results.  Rather, doing what I do, I am going to make a quick point about the question at hand.

We have a situation in which a (quasi-)monopolist (the Member) has a “good” to sell (access/face time).  Simply put, let’s suppose this good is valuable to some people and, similarly, that donations are valuable to the Member.  Then, it follows from a classic corner of social science known as price discrimination that the Member (in self-interested terms) should privilege those who are willing to pay for it.  That is, those who want access most will be willing to pay more than those who want access less, and an efficient means to allocate the scarce/costly resource of access is to give it to those who are most willing to pay.  Is this normatively disturbing?  Hell, yes.  Is it troubling even in everyman’s language?  Oh, for sure.  Is it inevitable?  Well, yes, that too.
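The price-discrimination logic above can be sketched in a few lines of code. This is purely illustrative, with made-up names and dollar figures: a Member with a fixed number of meeting slots allocates them to the constituents with the highest willingness to pay (here, donate).

```python
# Hypothetical sketch of the price-discrimination logic: a Member with a
# fixed number of meeting slots gives them to the constituents with the
# highest willingness to pay. All names and amounts are illustrative.

def allocate_access(willingness_to_pay, slots):
    """Return the constituents who get access, ranked by willingness to pay."""
    ranked = sorted(willingness_to_pay.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, _ in ranked[:slots]]

constituents = {"donor_a": 500, "donor_b": 50, "donor_c": 1000, "donor_d": 0}
print(allocate_access(constituents, slots=2))  # ['donor_c', 'donor_a']
```

Nothing about this allocation rule requires malice on the Member’s part; it is simply the self-interested response to scarce time and heterogeneous demand.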

Here’s another, more methods-meets-theory take on it.  Suppose that a Member imposed a policy where donations did not offer an advantage in obtaining access.   Now, think about your position as a constituent/citizen seeking access.

What would you do?

Let’s suppose that you like money. We’ve already supposed you seek access.  Now, finally, put those two together in the face of the hypothetical Member who does not reward donations with preferential access. … You should be very happy as you realize that you can have your cake and eat it, too, as you keep your money and waltz into the Member’s office, swilling sherry and talking Grand Strategy into the wee hours.

The summary of this hypothetical is this: if you believe that it is plausible (1) that Members don’t reward donations with preferential access and (2) that potential donors like money, then the predicted level of donations to any Member is zero.[1]

We know that people give money to campaigns.  We also know or at least strongly believe that people expect something for their money.  Putting these together, I will simply say that the conjunction of these makes me feel better, not worse, about our democratic system.

Paraphrasing an at-least-apocryphal version of Churchill, democracy is better than every other system we’ve tried, but it’s still only capable of delivering second-best…at best.  The Kalla & Broockman results, as clean as a whistle, further confirm my belief in this.



[1] This is a blog post, and I’ve been away for a while for many reasons, including that these take me a lot of time.  Accordingly, I’ll simply note that other motivations for giving (e.g., financing reelection campaigns in a purely instrumental fashion) can be accomplished by other routes in the Federal campaign finance system (party committees, other PACs, etc.).  Unless you are really focused on a given Member’s reelection (but why, except for access?), those routes have transaction-cost/flexibility advantages over direct giving to a single Member’s campaign.

Game Theory is Punk

I’ve joked before with people that I liken social science models to rock songs.  My actual mapping is horribly incomplete.  So I’ll set that chatter to the side.

That said, the practice of modeling, in my experience, is a lot like rock ‘n roll.

You give me a topic, and I’ll think for a minute, make an awkward joke to stall, and then say, “well…I think we can throw in a bit of Romer-Rosenthal, maybe a touch of Crawford & Sobel, plus a flourish of valence, and Voilà! … We have a model.” (Participants at EITM 2013 can vouch for this…for better or worse.)

But….I’m serious. Modeling is a delicate balance of divine insight and practice.  And, given the relative and regrettable scarcity of divinity in practice, more practice than insight.

Modeling requires balancing (1) a substantive question, (2) generality, and (3) the finitude of time. It lies at the heart of both what are putatively purely-empirical and purely-theoretical enterprises.  (The only class of social science theory that is “not-putatively-but-actually-purely-and-absolutely-theoretical-and-therefore-unambiguously-correct-and-applicable” is social choice theory.)  Methodologists, game theorists—they all rightly make assumptions to get to the point of their argument.


If I said, “tell me how to make a yummy dish,” you’d ask “what’s yummy?” If I, being as obstinate and/or distracted as I usually am, did not answer—you’d have to make some assumptions about what I might like. If you assumed that I liked what everybody else liked, you’d probably hand me “Joy of Cooking.”  On the other hand, if you assumed I asked because I’d looked in the Joy of Cooking and not found what I liked, you would appropriately presume that I wanted something other than “the normal,” and you’d then be seen by the outside world as playing punk. You’d probably (rightly) take off-the-shelf tools, utilize standard analogies, and leverage structure that threatens few to provide me with a new conclusion. That’s punk.

[During the perhaps overly-artsy bass solo, let me confess that not all punk is good. But all punk is, tautologically, punk.]

…Cue big build, drum crescendo, and….harmonic ending that sends crowd into rhapsodic frenzy…

OK, what I’m saying is a short thing: good (formal/stat/etc.) modeling is punk: it takes “old” tools, “expected” tricks, and combines them to “make the house rock,” or “get the message across.”  (Lucky are those situations when “the house rocks to the message.”)

Does the Pixies anthem “Gouge Away” address every possible situation?  Is it robust to every ephemeral, existential robustness barrage one might throw at it?

Hell no. That’s why the phrase “holy fingers” is so haunting. After all, “holy fingers” are rare unless you count Chicken Fingers ™.

So, when you want to say “well, your explanation for that is just an example,” I’ll just say “Get Over It.” … And then I’ll be gone, making more noise pop, playing a flying Fiddle to the Quotidian.

With that, I leave you with this.

Speech-y Keen, or Why Nobody Worries About the “Right to Praise the Government”

This post by Michael Moynihan, responding in part to this post by Thane Rosenbaum, asks how “free” free speech should be.  The question of discriminating between different forms of speech—based on questions such as “is it knowingly false,” “how likely is it to incite violence,” and “is it political”—is an instantiation of an aggregation problem, exactly the type of problem that motivates the analysis and arguments in the forthcoming book I penned with Maggie Penn, Social Choice and Legitimacy.

But, aside from the question of how one would (or could) construct meaningful and coherent “bounds” on “free” speech, I was led to think about the instrumental nature of speech by the following quote from Moynihan’s post (which includes a quote from Rosenbaum’s post):

“Actually, the United States is an outlier among democracies in granting such generous free speech guarantees. Six European countries, along with Brazil, prohibit the use of Nazi symbols and flags. Many more countries have outlawed Holocaust denial. Indeed, even encouraging racial discrimination in France is a crime. In pluralistic nations like these with clashing cultures and historical tragedies not shared by all, mutual respect and civility helps keep the peace and avoids unnecessary mental trauma.” So one would assume that racial discrimination has been dumped on the ash heap of history in France, considering racist thoughts and symbols have been made illegal. How, then, does one explain that the National Front, whose former leader Jean-Marie Le Pen was found guilty of Holocaust denial, is now the most popular party in the country?

The math of politics point here is both simple and arguably subtle.  Basically, speech limitations are not imposed at random, and citizens should draw inferences about the motivations of, and information held by, whoever imposed them.

Consider the classical “marketplace of ideas” justification for strong free speech rights.  In a nutshell, this argument says that free speech is socially beneficial because it minimizes the probability that a “true” (and, by presumption, socially beneficial) argument will be prescreened or forestalled by speech limitations.  (Consider, for example, the creationism vs. evolution debate.)

My argument here, though in favor of strong speech rights, is slightly different. Specifically, it focuses on constraints imposed by the government.  This is an important qualification.  In particular, democratic governments are in the end chosen or “produced” through collective action.  If “ideas matter” (as the marketplace justification justifiably presumes), then evaluating the policies of the government and its potential successors matters.  Then, the transmission of ideas between citizens might lead to changes in, or pressures on, the government.

Accordingly, if one presumes that governments prefer to maintain power, ceteris paribus, then a policy that discriminates between speech based on content can arguably be informative in its own right.

Here’s a quick sketch:  suppose that a government favors some policy that may or may not be socially suboptimal and people have variously informed opinions about the social optimality of that policy.

Suppose that people are prohibited from talking “negatively” about that policy.  If people don’t consider the government’s motivation to choose/support such a prohibition, then the prohibition would—for the sake of argument—tamp down dissidence regarding that policy.  However, if the citizens think about the government’s motivations—regardless of whether they are policy-based, reelection-focused, or a combination thereof—then the government’s imposition of the prohibition would justifiably lead the citizens to suspect not only that the policy in question was more likely to be suboptimal, but also that the government does not have the best interests of the electorate at heart. (NO WAY!)
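This inference argument can be put in Bayesian terms. Here is a minimal sketch with entirely made-up probabilities: citizens hold a prior belief that the policy is suboptimal, and a self-interested government is assumed to be more likely to ban criticism of a bad policy than of a good one. Observing the ban then raises the posterior belief that the policy is bad.

```python
# A minimal Bayesian sketch of the inference above. All probabilities are
# illustrative assumptions, not estimates of any real government's behavior.

def posterior_bad_given_ban(prior_bad, p_ban_if_bad, p_ban_if_good):
    """P(policy is bad | government bans criticism), via Bayes' rule."""
    p_ban = prior_bad * p_ban_if_bad + (1 - prior_bad) * p_ban_if_good
    return prior_bad * p_ban_if_bad / p_ban

# Citizens start thinking the policy is probably fine (prior 0.3 that it's
# bad), but the government bans criticism far more often when the policy is
# bad (0.8) than when it is good (0.1).
post = posterior_bad_given_ban(prior_bad=0.3, p_ban_if_bad=0.8, p_ban_if_good=0.1)
print(round(post, 3))  # 0.774: the ban itself is bad news about the policy
```

The point of the sketch is only that, so long as bans are more likely under bad policies, the ban is itself informative, and the citizens’ suspicion is rational rather than paranoid.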

In short, all governments are at least practically dependent upon their citizens’ support. If speech “matters,” then governmental limits on speech—perhaps especially those accompanied by the purest of putative motives—should be viewed with suspicion.

Note that this logic gets even “stronger” once one considers the timing of the limitations.  If one thinks about a government considering the (per se) costly imposition of speech limitations that might potentially (in a naive world) mitigate agitation against the government, the fact that the government is willing to incur the costs of imposing such limitations in a particular policy area should make one consider whether the government was alerted to an increased frequency of individuals unhappy with the government in this realm.  This “strengthens” the conclusion about the effects of the ban—arguably mirroring the Le Pen example above—because savvy citizens would infer that the imposition of a limitation on speech on a particular topic is itself indicative of citizen unrest on that issue.

With that quick post, I leave you with this reminder of the most eternal right.

Ceiling the Deal: Quid Pro Keystone

The debt ceiling drama is inexorably drawing to its next installment, and the question remains: when and, more importantly, how will a deal get done?  To keep matters simple, President Obama and Congressional Democrats have stood by the long-standing pledge to not negotiate on the debt ceiling, but some Congressional Republicans have been pushing for concessions in return for a debt ceiling increase—in particular, approval of the Keystone XL pipeline.

The State Department released its final report on the environmental impact of the proposed Keystone XL pipeline (fact sheet here) last week.  In a nutshell, the report is a “win” for pipeline supporters.  The idea of a “Keystone approval in return for debt ceiling increase” deal is not new, of course.  What I want to discuss briefly is the procedural details of the deal and their strategic (electoral) implications.

A key question in this game is whether Congress explicitly attaches Keystone XL approval to the debt ceiling increase or not.  Congress could pass a combined bill, or perhaps an implicit deal will be struck: President Obama approves Keystone XL and Congressional Republicans approve a “clean” debt ceiling increase, without (too loudly) claiming a quid pro quo.

I have reason to suspect that President Obama is trying to set up exactly such a deal: he said in June that the criterion for approving the pipeline is that it “not significantly exacerbate the problem of carbon pollution.”  The State Department report provides an argument that it won’t.  Furthermore, President Obama said that “the net effects of the pipeline’s impact on our climate will be absolutely critical to determining whether this project is allowed to go forward.”

However, White House press secretary Jay Carney said today that Obama’s decision on the pipeline would be free from “ideological or political influence.” And the current spin regarding Secretary of State Kerry is that (1) Obama has asked him for a recommendation on the project and (2) that Kerry may be conflicted regarding his principles and partisan motivations.

The strategic question here for my purposes today is

Does Obama value “not bargaining” over the debt ceiling—a signaling of resolve, etc. that I have touched upon in various other posts (such as here)—more than the potential gain from allowing moderate Democratic Senators to vote for a bill (perhaps with a debt ceiling increase too) mandating approval of the project? [1] [2]

With respect to the first point, I think all three sides (Democrats, Republicans, and Canadians) are playing a bit of a game of chicken: nobody wants to be seen as “giving in” if they don’t have to.  I won’t work this through in detail, but the basics of “chicken” are pretty simple: each player would prefer to look tough (not give in) and have one or both of the others give in.[3]  At the same time, each player would prefer to give in if they knew that none of the others were going to give in.  The best-case scenario in this situation, it seems, is the “win-win” of (1) Obama looking “presidential” and “job-creating” by solemnly approving the Keystone project at the same time as (2) Congress passing a “dirty” debt ceiling increase that mandates approval of the pipeline.
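For the curious, the pure-strategy logic of a two-player version of chicken can be checked mechanically. The payoffs below are illustrative assumptions, not estimates of anything in the debt ceiling standoff: mutual toughness (no deal) is the worst outcome for both players, and each player most prefers to stand tough while the other gives in.

```python
# A back-of-the-envelope check of two-player "chicken" with made-up payoffs:
# we enumerate all strategy profiles and keep those where neither player can
# gain by unilaterally deviating (pure-strategy Nash equilibria).

from itertools import product

ACTIONS = ["tough", "give_in"]
# PAYOFFS[(row_action, col_action)] = (row_payoff, col_payoff)
PAYOFFS = {
    ("tough", "tough"):     (-10, -10),  # crisis: nobody budges
    ("tough", "give_in"):   (3, 0),      # row looks tough, column caves
    ("give_in", "tough"):   (0, 3),
    ("give_in", "give_in"): (1, 1),      # mutual compromise
}

def pure_nash_equilibria():
    eqs = []
    for r, c in product(ACTIONS, repeat=2):
        row_ok = all(PAYOFFS[(r, c)][0] >= PAYOFFS[(r2, c)][0] for r2 in ACTIONS)
        col_ok = all(PAYOFFS[(r, c)][1] >= PAYOFFS[(r, c2)][1] for c2 in ACTIONS)
        if row_ok and col_ok:
            eqs.append((r, c))
    return eqs

print(pure_nash_equilibria())  # [('tough', 'give_in'), ('give_in', 'tough')]
```

The two pure-strategy equilibria are exactly the asymmetric ones: one side caves, the other looks tough. Mutual compromise is not an equilibrium, which is why face-saving arrangements like the implicit deal described above are so attractive.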

The devil, of course, is in the details: the timing has to be managed appropriately so that neither side is “clearly” trying to save face.  I think this can be accomplished by having the Senate vote for a Keystone approval and clean debt ceiling increase separately, then have the House vote under a special rule to approve both and send them to the President, during which time the President would unilaterally approve the pipeline, so that he could explain that he was essentially signing a clean debt ceiling increase.

Will it work out this way?  Oh, I’m sure it will be different.  But with the benefit of being “up close” in temporal terms, I would be somewhat surprised if we don’t see action on both the debt ceiling and the Keystone project in the next week.

With that, I leave you with this.


[1] In 2012, a majority of the Senate voted in favor of such an approval, though it failed to get the 60 votes required to move forward.

[2] I thought about discussing why Obama might want a visible and positive Kerry recommendation, versus why he might want a negative and visible one.  The basics of one such argument are provided by my colleague Randy Calvert’s seminal article from 1985, entitled “The Value of Biased Information.”  I’ll come back to this argument at another time, I’m sure.  (And, to be honest, I have already stood on Randy’s shoulders elsewhere.)

[3] Usually, “Chicken” is described as a two-player game.  With more than 2 players, it becomes clear that Chicken is really just “private provision of a public good,” or the “who takes the trash out game.”  This is not the same as the Who Let The Dogs Out? game, which has no pure strategy equilibria (Baha Men (2000)).