If Keyser Söze Ruled America, Would We Know?

In this post on Mischiefs of Faction, Seth Masket discusses the recent debate about whether the (super-)rich are overly influential in American politics.  I’ve already said a bit about the recent Gilens and Page piece that provides evidence that rich interests might have more pull than those of the average American.  In a nutshell, I don’t believe that the (nonetheless impressive) evidence presented by Gilens and Page demonstrates that the rich are actually driving, as opposed to responding to, politics.[1]

Seth’s post echoes my skepticism in some respects.  First, the rich and “super rich” donors are less polarized than are “small” donors.  Second, and perhaps even more importantly, admittedly casual inspection of REALLY large donors suggests that they are backing losing causes.  As Seth writes,

…the very wealthy aren’t necessarily getting what they’re paying for. Note that Sheldon Adelson appears in the above graph. He’s pretty conservative, according to these figures, and he memorably spent about $20 million in 2012 to buy Newt Gingrich the Republican presidential nomination, which kind of didn’t happen [...] he definitely didn’t get what he paid for. (Okay, yeah, he sent a signal that he’s a rich guy who will spend money on politics, but people knew that already.)

While most donations aren’t quite at this level, they nonetheless follow a similar path, with a lot of them not really buying anything at all. To some extent, the money gives them access to politicians, which isn’t nothing.[2]

The Adelson point raises another problem we need to confront when looking for the influence of money in American politics.  Since the 1970s, most federal campaign contribution data has been public.  Furthermore, even the ways in which one can spend money that are less transparent (e.g., independent expenditures) can be credibly revealed to the public if the donor(s) want to do so.

Thus, a rich donor with strong, public opinions could achieve influence on candidates—even or especially those he or she does not contribute to—by donating a bunch of money to long-shot, extreme/fringe candidates.  This is a costly signal of how much the donor cares about the issue(s) he or she is raising, and might lead to other candidates “etch-a-sketching” their positions closer to the goals of the donor.  Indeed, these candidates need not expect to ever receive a dime from the donor in question: they might just want to “turn off the spigot” and move on with the other dimensions of the campaign.

Furthermore, such candidates might actually prefer to not receive donations/explicit support from these donors.  After all, a candidate might not want to be either associated with the donor from a personal or policy stance (do you think anyone is courting Donald Sterling for endorsements right now?) or, even more ironically, the candidate might worry about being seen as “in the donor’s pocket.” Finally, there are a lot of rich donors, and they don’t espouse identical views on every topic.  As Seth notes,

“politicians are wary of boldly adopting a wealthy donor’s views, and … they hear from a lot of wealthy donors across the political spectrum, who probably have conflicting ideas”

Overall, tracing political influence through known-to-be-observable actions such as donations, press releases, and endorsements is perilous.  A truly influential individual sometimes wants to minimize the public’s awareness of his or her influence, particularly when that influence is being exercised through others.  It is useful to always remember Kevin Spacey’s line from The Usual Suspects:

The greatest trick the Devil ever pulled was convincing the world he didn’t exist.[3][4]

From an empirical standpoint, I think the current debate about influence in American politics is interesting: for example, it is motivating people to think about both what data can be collected and innovative ways to manipulate and visualize it.  But I caution against the temptation to jump from it to wholesale normative judgments about the state of American politics.  Specifically, there’s another Kevin Spacey line in The Usual Suspects that is useful to remember as politicos and pundits debate who truly “controls” American politics:

To a cop, the explanation is never that complicated. It’s always simple. There’s no mystery to the street, no arch criminal behind it all. If you got a dead body and you think his brother did it, you’re gonna find out you’re right.




[1] This is what is known as an “endogeneity problem.”  While some people roll their eyes at such claims, I provided a theory (and could provide more than a couple of additional ones) supporting the claim that such a problem might exist.  Hence, I humbly assert that the burden of proving that this is not a problem rests on those who claim that the evidence is indeed “causal” in nature.

[2] As a side note, I’ve also argued that donors should be expected to have more access to politicians than non-donors, and that this need not represent a failing of our (or any) democratic system.

[3] Verifying my memory of this quote, I found out that it is a restatement of a line by Baudelaire: “La plus belle des ruses du diable est de vous persuader qu’il n’existe pas” (“The devil’s finest trick is to persuade you that he does not exist”).  I have no idea what this has to do with anything, but I feel marginally more erudite after copy-and-pasting French into my post.

[4] I will simply note in passing the link between this and the entirety of the first two seasons of the US version of House of Cards.


How Political Science Makes Politics Make Us Less Stupid

This post by Ezra Klein discusses this study, entitled “Motivated Numeracy and Enlightened Self-Government,” by Dan M. Kahan, Erica Cantrell Dawson, Ellen Peters, and Paul Slovic.  The gist of the post and the study is that people reason less accurately about statistical evidence when that evidence bears on a politically charged issue.

The study presented people with “data” from a (fake) experiment about the effect of a hand cream on rashes.  There were two groups: one used the cream and the other did not.  The group that used the skin cream had more subjects (i.e., a larger sample), but a lower success rate.[1] Mathematically/scientifically sophisticated individuals should realize that the key statistics are the ratios of successes to failures within each group, not the absolute number of successes.
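To make the arithmetic concrete, here is a minimal sketch in Python. The counts are illustrative (a commonly cited version of the table; I have not re-checked them against the paper), but the logic is the study’s: compare within-group ratios, not raw success counts.

```python
# Hypothetical 2x2 contingency table (illustrative counts, not
# guaranteed to match the study's exact figures):
#                 improved   got worse
# used cream        223         75
# no cream          107         21

cream_improved, cream_worse = 223, 75
control_improved, control_worse = 107, 21

# Naive read: the cream group has more absolute successes.
naive_cream_wins = cream_improved > control_improved  # True

# Correct read: compare the ratio of successes to failures
# within each group.
cream_ratio = cream_improved / cream_worse        # about 2.97
control_ratio = control_improved / control_worse  # about 5.10

# The cream group fared worse despite more absolute successes.
print(cream_ratio < control_ratio)  # True
```

With these counts, the naive comparison and the ratio comparison point in opposite directions, which is exactly the trap the study set for its subjects.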

This was the baseline comparison, as it considered a nonpolitical issue (whether to use the skin cream).  The researchers then conducted the same study with a change in labeling. Rather than reporting on the effectiveness of skin cream, the same results were labeled as reporting the effectiveness of gun-control laws. All four treatments of the study are pictured below.

Gunning for Mathematical Literacy


I want to make one methodological point about this study: the gun control treatments were not apples-to-apples comparisons with the skin cream treatment and, furthermore, the difference between them is an important distinction between well-done science and the messy realities of real-world (political/economic) policy evaluation/comparison.

Quoting from page 10 of the study,

Subjects were instructed that a “city government was trying to decide whether to pass a law banning private citizens from carrying concealed handguns in public.” Government officials, subjects were told, were “unsure whether the law will be more likely to decrease crime by reducing the number of people carrying weapons or increase crime by making it harder for law-abiding citizens to defend themselves from violent criminals.” To address this question, researchers had divided cities into two groups: one consisting of cities that had recently enacted bans on concealed weapons and another that had no such bans. They then observed the number of cities that experienced “decreases in crime” and those that experienced “increases in crime” in the next year. Supplied that information once more in a 2×2 contingency table, subjects were instructed to indicate whether “cities that enacted a ban on carrying concealed handguns were more likely to have a decrease in crime” or instead “more likely to have an increase in crime than cities without bans.” 

The key sentence is the one describing how the researchers divided cities into two groups.  It was not even suggested to the subjects that the data were experimental.  Rather, the description makes clear that the data are observational.  In other words, in the hypothetical example, cities were not randomly assigned to implement gun-control laws.

While this might seem like a small point, it is a big deal.  This is because, to be direct about it, gun-control laws are adopted because they are perceived to be possibly effective in reducing gun crime,[2] they are controversial,[3] and accordingly will be more likely to be adopted in cities where gun crime is perceived to be bad and/or getting worse.

Without randomization, one needs to control for the cities’ situations to gain some leverage on what the true counterfactual in each case would have been.  That is, what would have happened in each city that passed a gun-control law if they had not passed a gun-control law, and vice-versa?

To make this point even more clearly, consider the following hypothetical.  Suppose that, instead of gun-control laws and crime prevention, we compared cities by their observed use of fire trucks and then evaluated how many houses ultimately burned down.  Such a treatment is displayed below.


From this hypothetical, the logic of the study implies that a sophisticated subject is one who says “sending out fire trucks causes more houses to burn down.”  Of course, a basic understanding of fires and fire trucks strongly suggests that such a conclusion is absolutely ridiculous.

What’s the point?  After all, the study shows that partisan subjects were more likely to say that the treatment their partisanship would tend to support (gun-control for Democrats, no gun-control for Republicans) was the more effective.   This is where the importance of counterfactuals comes in.  Let’s reasonably presume for simplicity that “Republicans don’t support gun-control” because they believe it is insufficiently effective at crime prevention to warrant the intrusion on personal liberties and that “Democrats support gun-control” because they believe conversely that it is sufficiently effective.[4] Then, these individuals, given that the hypothetical data was not collected experimentally, could arguably look at the hypothetical data in the following ways:

  • A Republican, when presented with hypothetical evidence of gun-control laws being effective, could argue that, because towns adopt gun control laws during a crime wave, regression to the mean might lead the evidence to overestimate the effectiveness of gun control laws on crime reduction.  That is, gun-control laws are ineffective and they are implemented as responses to transient bumps in crime.
  • A Democrat, when presented with hypothetical evidence of gun-control laws being ineffective, might reason along the lines of the fire truck example: cities that adopted gun control laws were/are experiencing increasing crime and that the proper comparison is not increase of crime, but increase of crime relative to the unobserved counterfactual.  That is, cities that implement gun-control laws are less crime-ridden than they would have been if they had not implemented the measures, but the measures themselves can not ensure a net reduction of crime during times in which other factors are driving crime rates.
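The selection problem behind both bullets can be illustrated with a toy simulation (every number here is invented for illustration, not an empirical claim): cities with worsening crime are the ones that adopt the ban, so a naive comparison of adopters and non-adopters makes a genuinely effective law look harmful.

```python
import random

random.seed(0)

cities = []
for _ in range(10_000):
    trend = random.gauss(0, 1)           # latent crime trend in the city
    adopts = trend > 0.5                 # only worsening cities adopt the ban
    effect = -0.4 if adopts else 0.0     # the ban genuinely reduces crime
    change = trend + effect + random.gauss(0, 1)
    cities.append((adopts, change > 0))  # (adopted?, did crime increase?)

def share_with_increase(adopted):
    group = [increased for a, increased in cities if a == adopted]
    return sum(group) / len(group)

# Naive comparison: adopting cities saw crime increases more often,
# even though the ban lowers crime relative to the counterfactual.
print(share_with_increase(True) > share_with_increase(False))  # True
```

Conditioning on the latent trend (or randomizing adoption) would recover the law’s true negative effect on crime; that is the control-for-counterfactuals point made above.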

Conclusion. This post makes two mathofpolitics points.  First, it is completely reasonable that partisans have more well-developed (“tighter”) priors about the effectiveness/desirability of various policy choices.  When we think about the adoption of policies in the real world, it is also reasonable that these beliefs will drive the observed adoption of policies.  And for almost every policy of any importance, the proper choice depends on the “facts on the ground.”  Different times, places, circumstances, and people typically call for different choices.  To forget this is to conclude, naively, that chemotherapy causes people to die of cancer.

Second, it’s really time to stop picking on voters. Politics does not make you “dumb.” People have limited time, use shortcuts, take cues from elites, etc., in every walk of life.  Traffic-drawing headlines and pithy summaries like “How politics makes us stupid” are elitist and ironically anti-intellectual.  The Kahan, Dawson, Peters, and Slovic study is really cool in a lot of ways.  My methodological criticism is in a sense a virtue: it highlights the unique way in which science must be conducted in real-world political and economic settings.  Some policy changes cannot be implemented experimentally for normative, ethical, and/or practical reasons, but it is nonetheless important to attempt to gauge their effectiveness in various ways.  Thinking about this and, more broadly, about how such evidence is and should be interpreted by voters is arguably one of the central purposes of political science.

With that, I leave you with this.

Note: I neglected to mention this study, “Partisan Bias in Factual Beliefs about Politics” (by John G. Bullock, Alan S. Gerber, Seth J. Hill, and Gregory A. Huber), which shows that some of the “partisan bias” can be removed by offering subjects tiny monetary rewards for being correct. Thanks to Keith Schnakenberg for reminding me of this study.


[1] The study manipulated whether the cream was effective or not, but I’ll frame my discussion with respect to the manipulation in which the cream was not effective.

[2] Note that this is not saying that all “cities” perceive that gun-control laws are effective at reducing gun crime.  Just that only those cities in which they are perceived to possibly be effective will adopt them.

[3] Again, in cities where such a law is not controversial, one might infer something about the level of crime (and/or gun ownership) in that city.

[4] I am also leaving aside the possibility that Republicans like crime or that Democrats just don’t like guns.

Shining A Little More Light On Transparency

Thinking more about transparency (which I just wrote about), it occurred to me that I neglected two pieces (of many) that are relevant for the point about transparency of decision-making in bodies like the Federal Open Market Committee (FOMC) in which expertise plays an important role in justifying the body’s authority.

David Stasavage and Ellen Meade made use of a great (and entirely on point) data set in their analysis of the effect of transparency on FOMC decision-making in their Economic Journal article, “Publicity of Debate and the Incentive to Dissent: Evidence from the US Federal Reserve.” They find strong evidence that, once members knew their statements were being recorded, both the content of their opinions and their individual votes on monetary decisions changed.  

The general implications of this point from a theoretical perspective are nicely laid out in Stasavage’s Journal of Politics article, “Polarization and Publicity: Rethinking the Benefits of Deliberative Democracy.” Transparency can affect individual incentives, particularly among career-motivated decision-makers.  If one presumes that the decision-makers in a deliberative body are motivated to “look good” by making good decisions, and one is mostly or wholly concerned with the quality of their performance, then, in a specific sense, transparency of individual decision-makers’ opinions and votes can “only hurt” actual performance: the decision-makers worry not only about the performance of their collective decisions (e.g., the actual inflation rate), but also about how their individual opinions/inputs are viewed.

Why Have Transparency At All, Then?

There are two broad categories of theoretical arguments in favor of transparency.  The first of these is screening and the second is record-keeping.

Screening. Recall that the problems with transparency sketched out above and in my previous post follow from the presumption that some or all of the decision-makers are interested in being rewarded and/or retained by voters/Congress/the president or whomever else might employ them in the future.  This “career-concerns model” of course implies that somebody else is going to be considering whether to retain, hire, or promote these decision-makers again in the future.  I’ll leave the details to the side for now and simply note that, if the “next job” for which they will be considered is sufficiently important relative to the current job, the ability to possibly infer something about the relative expertise or abilities of the decision-makers might be sufficiently valuable to warrant introducing some “noise” into the current decision-making.[1]

Record-Keeping. Nobody lives forever.  Many decision-making bodies whose authority rests on the belief that expert decision-making can and should set policy exist for many years, with decision-makers rotating in and out.  In such situations, because one is leveraging expertise as a justification, one might think that past experience can inform future decisions.  Steve Callander has recently published several excellent articles (here, here, and here) that offer a good starting point (unexplored as far as I know) for us to consider the types of situations in which transparency can be helpful by allowing future decision-makers not only to observe past performance, but also to learn how policy decisions actually affect outcomes by observing the details of the decisions that produced those outcomes.

Note that this argument, as opposed to the screening argument above, leaves room for one to think meaningfully about the proper “lag” or delay of transparency.  As the evolution of FOMC policy illustrates, many transparency policies involve a delay between decision and publication.[2] Interesting aspects of the policy process, such as how much information is conveyed by more recent versus older decisions, would presumably play a role in the final derivation of how much transparency is optimal.

Conclusions. If there’s any grand conclusion from this post, it’s that there are a lot of important topics left in the study of transparency, and as social science theorists we should start thinking about getting closer to the “policy technology side” of the decision(s) being made.  Abstract static models provide a lot of key and portable insights.  But they can take us only so far.


[1] Of course, if transparency in the current decision process leads every decision-maker to “pool” and do the same thing, regardless of their type, then one can’t infer anything about the decision-makers from their decision, thereby obviating this argument for transparency.  This will be the case when the decision-makers are sufficiently motivated to “get hired in the next job” relative to their innate preference to “make the right decision” in the current matter at hand.  In the FOMC, this would be an FOMC member who cares a lot more about becoming (say) Fed Chairman someday than he or she does about getting monetary policy “right” today.

[2] This type of argument, combined with career concerns, would also allow us to think in more detail about to whom the decisions ought to be made transparent and from whom this information should be withheld.


Why Separate When You Can…Lustrate!?!

Today’s post by Maria Popova and Vincent Post, “What is lustration and is it a good idea for Ukraine to adopt it?” made me think about the difference between what I will call policy and discretionary purges.

It is not easy for a nation to fix itself after a period of authoritarian rule.  Many individuals actually compose the government, and it is not clear that they share the ideals of the new regime.  Even if they do, the worries about career concerns and adverse selection that I raised a few minutes ago here suggest that changing behaviors might be hard even if the vast majority of bureaucrats/judges/legislators agree with democratic norms, the rule of law, the relative inelegance of bashing your opponents’ heads in, etc.

So, one practical approach to fixing an institution in the sense of massively and quickly redirecting its aggregate behavior (as produced by the panoply of individual decision-makers’ choices) is what we might call wiping the slate clean.  Clear the decks, Ctrl-Alt-Delete the whole shebang.

Another way is to find the people who are the problem(s) and eliminate them.  The prospect of removal might, in equilibrium, convert some who were previously scofflaws into temperate and sage clerks, after all.

I want to make a quick point.  Removal of officials is practically hard (because those who fear removal will hide evidence and otherwise obstruct the Remover’s attempts to ferret them out).  But, more intriguingly, removal of officials is politically hard…for the Remover. In cases like the Ukraine, this isn’t because removal of any official is likely to be unpopular (it’s probably the reverse…just ask Vergniaud). Rather, the problem is one of adverse selection in terms of those who are judging the motivations and trying to predict the future actions of the Remover.[1]

To think about this clearly and quickly, consider the baseline case where the Remover “cleans house,” removing everyone, and then consider the deviation from this in which the Remover “forgives” one official, who I will call “Official X.”[2]

What should we infer?  Does the Remover really have information that exculpates Official X?  Or perhaps Official X paid a bribe?  Or perhaps Official X is blackmailing the Remover?  Or perhaps…  You can see where this is going.  The Remover is at risk of being suspected of being or doing something untoward if he or she has and uses any discretion.  Accordingly, the Remover would prefer to not have discretion.

The same logic applies, obviously, to a plan of “well, let the Remover prosecute those who `should’ be removed.”  Unless the Remover’s hands are tied with respect to whom to prosecute, people will always have reason to wonder “well, Official Y got prosecuted….but not Official X….”

Is Lustration a good idea?  I don’t know.  And I will mention that Popova and Post are making a different point, which is really about the extent and severity of lustration.  My point here is just that “statutory/mandated purges” are very different from “executive/discretionary purges” and, somewhat counterintuitively, it may very well be in the interests of “the Remover” to have his or her discretion taken away.[3]


[1] Note that, as always, this is equivalent to a problem of credible commitment on the part of the Remover to “not use his/her biases” when deciding whom to remove.

[2] The logic holds generally (i.e., when the Remover forgives/pardons more than one official), but this focuses our attention in a nice way.

[3] I’m trying to keep these short, but I’ll note that this incentive is stronger for a Remover who believes that the external audiences who are trying to judge the Remover’s information/character/etc. are really uncertain about the Remover’s information/character/etc.  This is because high levels of initial uncertainty imply that the discretionary actions of the Remover will have a larger impact on the beliefs held by the members of the audience, and the adverse implications of discretion on these beliefs is the justification for the Remover wanting to limit his or her own discretion.

How Transparency Could Harm You, Me, and the FOMC

Sarah Binder, as usual, provides excellent insights into a difficult political problem in this post discussing the potential political and economic pitfalls of imposing greater transparency on the Federal Open Market Committee (FOMC), which essentially directs the Federal Reserve’s active participation in the economy, thereby having the most direct control over short-term interest rates and, accordingly, day-to-day “monetary policy” in the United States.

The FOMC is a really big deal.  As Binder notes, the importance of the committee accordingly makes both economic and political observers keen to understand and forecast what it will do in the future.  By deciding over the past decade or so to publish more and more detailed data about the views of the FOMC members,[1] the Fed has increased the transparency of the information it receives.

This seems like a good idea, right?

Well, social science theories in both economics and political science acknowledge the importance of whether the FOMC’s behavior is predictable or not.  On the economics side, predictability of monetary policy (at least in terms of its outputs, such as inflation) is generally perceived to be a good thing, because it allows investors to focus more attention on the “fundamentals” of an asset’s value, as opposed to paying a lot of attention to purely nominal phenomena and/or inefficiently delaying/accelerating investment and consumption decisions.  In other words, while a low, fixed inflation rate is good, variation in the inflation rate is inevitable, and if this variation can be reasonably accurately forecast, this is a “second-best” outcome.

On the political science side of things, a traditional argument for transparency (in addition to the one above) is that it fosters legitimacy and/or public confidence in the Fed, and thereby makes the Fed a more credible “political actor.”  A more technical description of this is that transparency alleviates an adverse selection problem between the Fed and the public.  The Fed knows something that the public/Congress/Presidents want to know, and—in some situations—everyone would be better off if the Fed could somehow just reveal this information to the public/Congress/Presidents.

Solving this kind of problem is very tricky in practice, because a real solution requires that the Fed not be responsible for releasing the information.  And there are some interesting features of the FOMC structure (it is composed of multiple members serving overlapping terms) and of the evolution of its transparency policies that bear on this.

Being the contrarian that I am, I wanted to note two arguments against too much transparency.  I don’t think these are strong enough to justify total opacity, of course, but I do believe they’re strong enough to serve as cautionary tales regarding total transparency.

Each of these arguments revolves around an additional potential instantiation of adverse selection.  The first regards the motives of the individual members of the FOMC.  When decision-makers are career-oriented (they want to be reappointed/promoted/rewarded for their ability/performance, etc.), too much transparency about the decision-maker’s actual decision (i.e., votes and personal positions on monetary policy in the FOMC meetings) can induce conformism (or “pooling”) by the agents such that their policy decisions become suboptimally unresponsive.  For example, everybody might start acting as an inflation hawk would so as to increase the perception of their hawkishness (a worry indirectly indicated in Yellen’s comments as discussed by Binder).[2]

The second argument involves the incentives of those whose individual decisions the Fed observes.  In particular, the Fed (like every regulatory agency) collects lots of data about the behaviors of firms and individuals.  If (say) major firms that the Fed is responsible for regulating have access to the information that the Fed will ultimately use to make policy, those firms have stronger incentives to make decisions that are individually suboptimal in order to manipulate the Fed’s subsequent decision-making.  That is, transparency of the Fed’s information can increase the incentives of major banks (and, arguably, even other regulators) to choose their own actions in ways that obscure their own private information.  When this happens, you have a double whammy: (1) the individual firms’ decisions are not optimal, and (2) the Fed does not glean as much information about the real state of the economy from the decisions of these firms.

Sean Gailmard and I make this point (with an empirical application to the Financial Industry Regulatory Authority (FINRA)) in our recent working paper, “Giving Advice vs. Making Decisions: Transparency, Information, and Delegation.”

Conclusion. I definitely don’t know what the “right” policy for the Fed is without further thought.  But the supposition that “increased transparency is unambiguously good” is at odds with at least two theoretical arguments. Accordingly, it might not be nefarious motives that lead policymakers to call for discussion of “how much transparency is too much?”


[1] See this description of the recent evolution of Fed transparency and, for a little historical context, see this report describing the 2007 change.

[2] Note that this argument implies that observing the actions of the decision-maker(s) can be bad, but it does not necessarily imply that observing what happens from those decisions (e.g., the actual inflation rate) can be bad.  Good citations on this point are Prat (2005) (ungated working paper here) and Levy (2007) (ungated working paper here), and my colleague Justin Fox has produced multiple excellent theoretical studies centering on this question (here, here (with Ken Shotts), and here (with Richard Van Weelden)).

Mind The Gap: The Wages of Aggregation, Evaluation, and Conflict

For whatever reason, I’m on a “data is complicated kick.”

So, this story is one of many today discussing the gender gap in wages in ‘Merica. In a nutshell, President Obama pointed out “that women make, on average, only 77 cents for every dollar that a man earns.”  Critics (most notably the American Enterprise Institute) immediately pointed out that “the median salary for female employees is $65,000 — nearly $9,000 less than the median for men.”

There are LOTS of angles on this thorny issue.  I want to raise the specter of social choice theory as a mechanism by which we can understand why this debate goes around and around.[1] The basic idea is that aggregation of data involves simplification, which involves assumptions.  Because there are various assumptions one can make (properly driven by the goal of one’s aggregation), one can aggregate the same data and reach different conclusions/prescriptions.

To keep it really simple, consider the following toy example.  Suppose that a manager currently has one employee, who happens to be a man, who makes $65,000/year, and the manager has to fill three positions, A, B, and C.  Furthermore, suppose that the manager has a unique pair of equally qualified male and female applicants for each of these three positions.  Finally, suppose that position A is paid $70,000/year, position B is paid $60,000/year, and position C is paid $45,000/year.

Now consider two criteria:

(1) eliminate/minimize the gender gap in terms of average wages,[2] and
(2) minimize the difference between proportions of male and female employees.

How would the manager most faithfully fulfill criterion (1)?  Well, if you hire the woman for position B and the two men for positions A and C, then the average wage of women (i.e., the woman’s wage) is $60,000, and the average of the three men’s wages (the existing employee and the two new employees) is $60,000.  This is clearly the minimum achievable gap.[3]

How about criterion (2)?  Well, obviously, given that one man is already employed, the manager should hire two women.  If the manager satisfies criterion (2) with an eye toward criterion (1), then the manager will hire a man for position B and women for positions A and C.
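The conflict between the two criteria can be checked exhaustively with a short script. This is a sketch of the toy example above, adopting the convention from footnote [3] that an unrepresented gender’s average wage counts as $0.

```python
from itertools import product

# Toy example: one existing male employee, three open positions.
wages = {"A": 70_000, "B": 60_000, "C": 45_000}
EXISTING_MAN = 65_000

def gaps(hires):
    """hires maps position -> 'M' or 'F'; returns (wage gap, proportion gap)."""
    men = [EXISTING_MAN] + [wages[p] for p, g in hires.items() if g == "M"]
    women = [wages[p] for p, g in hires.items() if g == "F"]
    men_avg = sum(men) / len(men)
    women_avg = sum(women) / len(women) if women else 0  # footnote [3] convention
    n = len(men) + len(women)
    return abs(men_avg - women_avg), abs(len(men) - len(women)) / n

# Enumerate all 2^3 ways to fill positions A, B, C.
options = [dict(zip("ABC", g)) for g in product("MF", repeat=3)]

best_wage = min(options, key=lambda h: gaps(h)[0])  # minimizes criterion (1)
best_prop = min(options, key=lambda h: gaps(h)[1])  # minimizes criterion (2)

print(best_wage)  # {'A': 'M', 'B': 'F', 'C': 'M'}: zero wage gap, one woman
print(best_prop)  # a hire of two women: zero proportion gap, nonzero wage gap
```

The unique wage-gap minimizer hires one woman, while every proportion-gap minimizer hires two, so the manager cannot satisfy both criteria at once.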

Note that the two criteria, each of which has been and will be used as benchmarks for equality in the workplace (and elsewhere), suggest exactly and inextricably opposed prescriptions for the manager.

In other words, the manager is between a rock and a hard place: if the manager faithfully pursues one of the criteria, the manager will inherently be subject to criticism/attack based on the other.

Note that this is not “chaos”: a faithful manager must hire no more than two of either gender, since hiring three men or three women is incompatible with both criteria.[4] But the fact remains (and this is a “theory meets data” point): one can easily, so easily that one might not even realize it, impose an impossible goal on an agent if one uses what I’ll call “data reduction techniques/criteria” to evaluate the agent’s performance.
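The toy example above is small enough to check by brute force. Here is a minimal Python sketch (all salary figures exactly as in the example; the tie-breaking rule for criterion (2) is my addition, matching the post's "with an eye toward criterion (1)" logic):

```python
from itertools import product
from statistics import mean

# Toy example from the post: one man already on staff at $65K; positions
# A, B, C pay $70K, $60K, $45K, each with one equally qualified man and
# one equally qualified woman applying.
SALARIES = {"A": 70_000, "B": 60_000, "C": 45_000}

def gaps(hires):
    """hires maps position -> 'M' or 'F'; returns (wage gap, proportion gap)."""
    men = [65_000] + [s for p, s in SALARIES.items() if hires[p] == "M"]
    women = [s for p, s in SALARIES.items() if hires[p] == "F"]
    women_mean = mean(women) if women else 0  # footnote [3]: $0 if no women employed
    wage_gap = abs(mean(men) - women_mean)
    prop_gap = abs(len(men) - len(women)) / (len(men) + len(women))
    return wage_gap, prop_gap

# Enumerate all eight ways to fill the three positions.
options = [dict(zip("ABC", g)) for g in product("MF", repeat=3)]
# Criterion (1): minimize the wage gap.
best_wage = min(options, key=lambda h: gaps(h))
# Criterion (2): minimize the proportion gap, breaking ties by wage gap.
best_prop = min(options, key=lambda h: gaps(h)[::-1])
print(best_wage)  # {'A': 'M', 'B': 'F', 'C': 'M'}: woman in B, wage gap $0
print(best_prop)  # {'A': 'F', 'B': 'M', 'C': 'F'}: two women, proportions equal
```

The two minimizers disagree on every single hire, which is exactly the rock-and-a-hard-place point.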

In other words: real world politics is inherently multidimensional.  When we ask for simple orderings of multidimensional phenomena (however defined, and of whatever phenomena), we are in the realm of Arrow’s Impossibility Theorem.


[1] This argument is made in a more general way in my forthcoming book with Maggie Penn, available soon (really!) here: Social Choice and Legitimacy: The Possibilities of Impossibility.

[2] Here, by “average,” I mean arithmetic mean.  Because this example is so small, there is no real difference between mean, median, and mode in terms of how one measures the gender gap.  If these differ in practice, then the problem highlighted here is merely (and sometimes boldly) exacerbated.

[3] To be clear, I am setting aside the issue of “how much does a gender make if none of that gender is employed?”  While the average is technically undefined in that case, I think $0 is the most common-sense answer, and I’ll leave it at that.

[4] Of course, as Maggie Penn and I discuss in our aforementioned book, there are many criteria.  Our argument, and that presented in this post, is actually strengthened by arbitrarily delimiting the scope of admissible criteria.

It’s Better To Fight When You Can Win, Or At Least Look Like You Did

In this post, Larry Bartels provocatively claims that Rich People Rule! In a nutshell, Bartels argues (correctly) that more and more political scientists are producing smart, independent analyses of the determinants of public policy, one of which, by Kalla and Broockman, I have already opined on (“Donation Discrimination Denotes Deliverance of Democracy“).

Bartels’s motivation for bringing this up is essentially the following quote from this forthcoming article by Martin Gilens & Benjamin Page:

economic elites and organized groups representing business interests have substantial independent impacts on U.S. government policy, while mass-based interest groups and average citizens have little or no independent influence.

The Gilens and Page piece is an interesting read, if only because the data on which it is based is very impressive.

Unfortunately, the theory behind the work is not nearly as strong.  In particular, the study is based on comparing observed position-taking by interest groups with (solicited) individual feedback on various surveys.[1]  So what?  Well, there is at least one potential problem, containing two sub-points, the combination of which I’ll call the Pick Your Battles Hypothesis.

Pick your battles.  Interest groups do not randomly announce positions on public issues.  Rather, any interest group of political interest presumably attempts to influence public policy through strategic choices of not only what to say, but when to bother saying anything at all.  While the mass public opinion data was presumably gathered by pollsters in ways to at least somewhat minimize individuals’ costs of providing their opinions, the interest groups had to pay the direct and indirect costs of getting their message(s) out.  There are two sub-points here, one more theoretically interesting than the other, and the other presumably more empirically relevant.

Sub-point 1: Pick a winner. The theoretically interesting sub-point is that an organized “interest group” is the agent of its donors and supporters.  To the degree that donations and support are conditioned on the perceived effectiveness of the interest group, (the leaders/decision-makers of) an interest group will, à la standard principal-agent theory, have a greater incentive to pay the costs of taking a public position when they perceive that they are likely to “win.”  If there is such a selection effect at work, then the measured correlation between policy and interest groups’ positions will be overestimated.
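To see how this selection effect can manufacture apparent influence out of nothing, here is a hypothetical simulation (a sketch under invented numbers, not an estimate of anything in the Gilens and Page data): groups have zero causal effect on outcomes, yet conditioning on their announcements makes them look prescient.

```python
import random

random.seed(42)  # reproducible; all numbers below are illustrative only

trials = 100_000
announced = wins_given_announcement = 0
for _ in range(trials):
    p = random.random()            # group's private estimate that the policy passes
    passes = random.random() < p   # policy passes with probability p; the group has NO effect
    if p > 0.5:                    # the group announces a position only when it expects to win
        announced += 1
        wins_given_announcement += passes

# Unconditionally, policies pass half the time; conditional on an
# announcement, the "win rate" is about 0.75: pure selection, zero influence.
print(round(wins_given_announcement / announced, 2))
```

An analyst who only observes announcements and outcomes would conclude the groups "get what they want" three-quarters of the time, even though, by construction, they affect nothing.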

Sub-point 2: Only Fight The Fights That Can Be Won. The more empirically relevant sub-point is that, even if one thinks that interest groups don’t fear being on the losing side of a public debate, the simple and cold reality of instrumental rationality is that, if making an announcement is costly, any interest group should make an announcement only when the announcement can actually affect something.  Moving quickly here, this suggests that interest groups should be taking positions when they believe decision-makers might be persuaded.  To the degree that these decision-makers are presumably at least somewhat responsive to public opinion (however measured), instrumentally rational (and probably asymmetrically informed) interest groups will be more likely to make announcements that run against relatively strong public opinion than to join the chorus.[2]  If this is happening, the question of whether interest groups have too much influence depends on whether you think they have better or worse information and on the types of policies on which their views are influential.

Conclusion. As political scientists know, observational data is tricky.  This is particularly true when it is the result of costly individual effort in pursuit of policy (and other) goals.  I really like Gilens and Page’s paper—the realistic point of scholarly inquiry is not to be right, it’s to get ever closer to being right, and this is even more true with directly policy-relevant work.  I just think that great data should be combined with at least a modicum of (micro-founded, individualistic) theoretical argument.  Without that, we might think umbrellas cause rain, hiring a lawyer causes you to go to jail, or chemotherapy causes death from cancer.  In other words, the analyst has simultaneously more data and less information than those he or she studies.


[1] Gilens and Page also compare responsiveness to mass opinions of economic elites (i.e., those in the 90th percentile in income) versus those of the median earner.  While I have some issues with this comparison (for example, I imagine getting a representative sample of the 90th income percentile is a bit different from getting one of median income earners and, as Gilens and Page acknowledge, the information held by and incentives of the rich are plausibly very different from those of median earners), I will focus on the interest group component of the analysis in this post.

[2]  That this is not just hypothetical crazy talk is indicated by the relatively strong negative correlation (-.10***) between the positions of business interest groups and the average citizen’s preferences.


My Ignorance Provokes Me: I know Where Ukraine is and I Still Want to Fight

It’s been too long since I prattled into cyberspace.  This Monkey Cage post by Kyle Dropp, Joshua D. Kertzer & Thomas Zeitzoff caught my contrarian attention.  In a nutshell, it says that those who are less informed about the location of Ukraine are more likely to support US military intervention.  This is an intriguing and policy-relevant finding from a smart design.  That said, the post’s conclusion is summarized as: “the further our respondents thought that Ukraine was from its actual location, the more they wanted the U.S. to intervene militarily.”  The implication from the post (inferred by me, but also by several others, I aver) is that this is an indication of irrationality.  I hate to spoil the surprise, but I am going to offer a rationalization for this apparent disconnect.

First, however, the study’s methodology—very cool in many ways—caught my eye, only because (in my eyes) the post’s authors imbue the measure with too much validity with respect to the subjects’ “knowledge.”  Specifically, the study asked people to click on a map where they think Ukraine is located.  The study then measures the distance between the click and Ukraine.[1]  Then Dropp, Kertzer, & Zeitzoff state that this

…distance enables us to measure accuracy continuously: People who believe Ukraine is in Eastern Europe clearly are more informed than those who believe it is in Brazil or in the Indian Ocean.

I disagree with the strongest interpretation of this statement.  While I agree that people who believe Ukraine is in Eastern Europe are probably (not clearly, because some might guess/click randomly on Eastern Europe, too) more informed than those who “believe it is in Brazil or in the Indian Ocean,” I would actually say that the example chosen by the authors suggests that distance is not the right metric.  For example, someone who thinks Ukraine is in Brazil is clearly wrong about political geography, but someone who thinks that Ukraine is located in the middle of an ocean is clearly wrong about plain-ole geography.

More subtly, it’s not clear that the “distance away from Ukraine” is a good measure of lack of knowledge.  In a nutshell, I aver that there are two types of people in the world: those who know where Ukraine is and those who do not.  Distinguishing between those who do not by the distance of their “miss” is just introducing measurement error, because (by supposition/definition) they are guessing.  That is, the true distance of miss is not necessarily indicative of knowledge or lack thereof.  Rather, if you don’t know where Ukraine is, then you don’t know where it is.

Moving on quickly, I will say the following.  It is not clear at all that not knowing where a conflict is should (in the sense of rationality) make one less likely to favor intervention. The key point is that if anyone is aware of the Crimea/Ukraine crisis, they probably know[2] that there is military action.  This isn’t Sochi, after all.

So, I put two thought experiments out there, and then off to the rest of the night go I.

First, suppose someone comes up to you and says, “there’s a fire in your house,” and then rudely runs off, leaving you ignorant of where the fire is.  What would you do…call the fire department, or run through the house looking for the fire?  I assert that either response is rational, depending on other covariates (such as how much you are insured for, whether you live in an igloo, and if you have a special room you typically freebase in).  The principal determinant in many such situations is the IMPORTANCE OF PUTTING OUT THE FIRE, not the cost of accidentally dousing one too many rooms with water.

Second, the Ukraine is not quite on the opposite side of the world from the US, but it’s pretty darn close (Google Maps tells me it is a 15-hour flight from St. Louis).  So, let’s think about what “clicking far from Ukraine when guessing where Ukraine is” implies about a correlation the post leaves unaddressed: that with “clicking close to the United States when guessing where Ukraine is.”  This picture demonstrates where each US survey respondent clicked when asked to locate Ukraine.  Focus on the misses, because these are the ones that will drive any correlation between distance of inaccuracy and support for foreign intervention.  (Distances are bounded below by zero, and a lot of people got Ukraine basically right.)

There are a lot of clicks in Greenland, Canada, and Alaska.  I am going to leave now, but the general rule is that the elliptic geometry of the globe (and the fact that the Ukraine is not inside the United States[3]) implies that clicking farther away from Ukraine means that you are, with some positive (and in this case, substantial) probability, clicking closer to the United States.
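The geometry is easy to check with great-circle distances. A rough sketch (the coordinates are approximate centroids I chose for illustration, not anything from the study, and the Earth is treated as a perfect sphere):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers on a spherical Earth (R = 6371 km)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

UKRAINE = (49.0, 32.0)  # rough centroid of Ukraine
USA = (39.8, -98.6)     # rough centroid of the contiguous United States

# Three illustrative "clicks": a near-hit and two North American misses.
clicks = {
    "near Kyiv": (50.4, 30.5),
    "Greenland": (72.0, -40.0),
    "central Canada": (55.0, -100.0),
}
for name, (lat, lon) in clicks.items():
    miss = haversine_km(lat, lon, *UKRAINE)
    to_us = haversine_km(lat, lon, *USA)
    print(f"{name}: misses Ukraine by ~{miss:,.0f} km; ~{to_us:,.0f} km from the US")
# As the miss grows (Kyiv -> Greenland -> Canada), the distance to the US shrinks.
```

For these three guesses the miss distance and the distance to the US move in exactly opposite directions, which is the mechanical confound behind the post's headline correlation.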

So, suppose that the study had said “those who think the Ukraine is located close to the US are more likely to support military intervention to stem Russian expansion.”  Would that be surprising?  Would that make you think voters are irrational?

Look, people have limited time and aren’t asked to make foreign policy decisions very often (i.e., ever).  So, let’s stop picking on them.  It is elitist, and it offers nothing other than a headline/tweet that draws elitists (yes, like me) to your webpage.

Also, let’s not forget that, as far as I know, there is no chance in the current situation of the United States government intervening in the Ukraine. So, even if voters are irrational, maybe that’s meta: we have an indirect democracy for a reason, perhaps?


[1] If I were going to get really in the weeds, I would raise the question of which metric is used to measure the distance between a point and a convex shape with nonempty interior.  There are a lot of sensible ones.  And, indeed, the fact that there isn’t an unambiguously correct one is actually an instantiation of Arrow’s theorem.  Think about that for a second.  And then thank me for not prattling on more about that.  [That's called constructing the counterfactual. -Ed.]

[2] And, as the authors state, “two-thirds of Americans have reported following the situation at least ‘somewhat closely.’”

[3] Just think about conducting this same survey with a conflict in Georgia.  Far-fetched, right?  HAHAHAHA

Donation Discrimination Denotes Deliverance of Democracy

A recent paper by Joshua Kalla & David Broockman has attracted some attention (for example, in this Washington Post story, this Monkey Cage post, and this excellent, reflective post on Mischiefs of Faction by Jennifer Victor).  In a nutshell, the paper reports the results of a well-designed field experiment that provides evidence that donations to a Member of Congress “open doors” in the sense that being a donor promotes access to more high-ranking officials in the Member’s staff, including possibly the Member of Congress himself or herself.

I am not going to critique the study. Jennifer does that well in several ways.  Unrelatedly, I am also not going to doubt (or cast doubt upon) the results.  Rather, doing what I do, I am going to make a quick point about the question at hand.

We have a situation in which a (quasi-)monopolist (the Member) has a “good” to sell (access/face time).  Simply put, let’s suppose this good is valuable to some people and, similarly, that donations are valuable to the Member.  Then, it follows from a classic corner of social science known as price discrimination that the Member (in self-interested terms) should privilege those who are willing to pay for it.  That is, those who want access most will be willing to pay more than those who want access less, and an efficient means of allocating the scarce/costly resource of access is to give it to those who are most willing to pay.  Is this normatively disturbing?  Hell, yes.  Is it troubling even in everyman’s language?  Oh, for sure.  Is it inevitable?  Well, yes, that too.
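The price-discrimination logic fits in a few lines. A hypothetical sketch (names, dollar figures, and the number of slots are all invented for illustration):

```python
# Hypothetical: four constituents, each with a private willingness to
# donate for face time, and only two meeting slots on the calendar.
willingness = {"Alice": 500, "Bob": 5_000, "Carol": 50, "Dan": 2_500}
SLOTS = 2

# A self-interested (quasi-)monopolist sells scarce access to the
# highest bidders: classic price discrimination.
granted = sorted(willingness, key=willingness.get, reverse=True)[:SLOTS]
revenue = sum(willingness[name] for name in granted)
print(granted, revenue)  # ['Bob', 'Dan'] 7500
```

Any other allocation rule leaves donations on the table, which is exactly why the Kalla & Broockman pattern should be expected rather than surprising.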

Here’s another, more methods-meets-theory take on it.  Suppose that a Member imposed a policy where donations did not offer an advantage in obtaining access.   Now, think about your position as a constituent/citizen seeking access.

What would you do?

Let’s suppose that you like money. We’ve already supposed you seek access.  Now, finally, put those two together in the face of the hypothetical Member who does not reward donations with preferential access. … You should be very happy as you realize that you can have your cake and eat it, too, as you keep your money and waltz into the Member’s office, swilling sherry and talking Grand Strategy into the wee hours.

The summary of this hypothetical is this: if you believe it is plausible (1) that members don’t reward donations with preferential access and (2) that potential donors like money, then the predicted level of donations to any member is zero.[1]

We know that people give money to campaigns.  We also know or at least strongly believe that people expect something for their money.  Putting these together, I will simply say that the conjunction of these makes me feel better, not worse, about our democratic system.

Paraphrasing at least an apocryphal version of Churchill, democracy is better than every system we’ve ever tried, but it’s still only capable of delivering second-best…at best.  The Kalla & Broockman results, as clean as a whistle, further confirm my belief in this.



[1] This is a blog post, and I’ve been away for a while for many reasons, including that these take me a lot of time.  Accordingly, I’ll simply note that other motivations for giving (e.g., financing reelection campaigns in a purely instrumental fashion) can be accomplished by other routes in the Federal campaign finance system (party committees, other PACs, etc.).  Unless you are really focused on a given Member’s reelection (but why, except for access?), these routes have transaction-cost/flexibility advantages over direct giving to a single Member’s campaign.

Game Theory is Punk

I’ve joked before with people that I liken social science models to rock songs.  My actual mapping is horribly incomplete.  So I’ll set that chatter to the side.

That said, the practice of modeling, in my experience, is a lot like rock ‘n roll.

You give me a topic, and I’ll think for a minute, make an awkward joke to stall, and then say, “well…I think we can throw in a bit of Romer-Rosenthal, maybe a touch of Crawford & Sobel, plus a flourish of valence, and Voilà! … We have a model.” (Participants at EITM 2013 can vouch for this…for better or worse.)

But….I’m serious. Modeling is a delicate balance of divine insight and practice.  And, given the relative and regrettable scarcity of divinity in practice, more practice than insight.

Modeling requires balancing (1) a substantive question, (2) generality, and (3) the finitude of time.  It lies at the heart of both what are putatively purely-empirical and purely-theoretical enterprises.  (The only class of social science theory that is “not-putatively-but-actually-purely-and-absolutely-theoretical-and-therefore-unambiguously-correct-and-applicable” is social choice theory.)  Methodologists, game theorists: they all rightly make assumptions to get to the point of their argument.


If I said, “tell me how to make a yummy dish,” you’d ask, “what’s yummy?”  If I, being as obstinate and/or distracted as I usually am, did not answer, you’d have to make some assumptions about what I might like.  If you assumed that I liked what everybody else liked, you’d probably hand me “Joy of Cooking.”  On the other hand, if you assumed I asked because I’d looked in the Joy of Cooking and not found what I liked, you would appropriately presume that I wanted something other than “the normal,” and you’d then be seen by the outside world as playing punk.  You’d probably (rightly) take off-the-shelf tools, utilize standard analogies, and leverage structure that threatens few to provide me with a new conclusion.  That’s punk.

[During the perhaps overly-artsy bass solo, let me confess that not all punk is good. But all punk is, tautologically, punk.]

…Cue big build, drum crescendo, and….harmonic ending that sends crowd into rhapsodic frenzy…

OK, what I’m saying is a short thing: good (formal/stat/etc.) modeling is punk: it takes “old” tools and “expected” tricks and combines them to “make the house rock,” or “get the message across.”  (Lucky are those situations when “the house rocks to the message.”)

Does the Pixies anthem “Gouge Away” address every possible situation?  Is it robust to every ephemeral, existential robustness barrage one might throw at it?

Hell no. That’s why the phrase “holy fingers” is so haunting. After all, “holy fingers” are rare unless you count Chicken Fingers ™.

So, when you want to say, “well, your explanation for that is just an example,” I’ll just say, “Get Over It.” … And then I’ll be gone, making more noise pop, playing a flying Fiddle to the Quotidian.

With that, I leave you with this.