On The Possibility of An Ethical Election Experiment

The recent events in Montana have sparked a broad debate about the ethics of field experiments (I’ve written once and twice about it, and other recent posts include this letter from Dan Carpenter, this Upshot post by Derek Willis, and this Monkey Cage post by Dan Drezner).  I wanted to continue a point that I hinted at in my first post:

[T]he irony is that this experiment is susceptible to second-guessing precisely because it was carried out by academics working under the auspices of research universities.  The brouhaha over this experiment has the potential to lead to the next study of this form—and more will happen—being carried out outside of such institutional channels.  While one might not like this kind of research being conducted, it is ridiculous to claim that it is better that it be performed outside of the academy by individuals and organizations cloaked in even more obscurity.  Indeed, such organizations are already doing it; at the least, this kind of academic research can provide us with some guess about what those other organizations are finding.

Personal communications with colleagues and readers indicated that Paul Gronke was not alone in interpreting my message in that passage as something like “well, others intervene in elections in unethical ways, so scholars don’t need to worry about ethics.”  That was not my intent.  Rather, I was trying to make the point that interventions by academic researchers are more likely than interventions by others to be transparent and, accordingly, capable of being judged on ethical grounds.  Of course, that is a contention with which one might disagree, but I’ll take it as plausible for the purposes of the rest of this post.[1]

Reflecting further on the ethics of field experiments led me to a classical social choice result known as the liberal paradox, first described by Amartya Sen.  The paradox is that respecting individual rights can lead to socially inferior outcomes.  The secret of the paradox is that sometimes our preferences depend not only on our own actions but also on what others do (such preferences are sometimes called “nosy preferences”).

The link between the paradox and the ethics of experimenting on elections can be drawn in the following simple way.  Let’s consider four possible worlds, depending on whether scholars and/or political parties do field experiments on elections, and let’s take my assertion about the value of open academic research as given, so that “society’s preference” is as follows:[2]

  1. Nobody does any field experiments on elections (the “best” option),
  2. Scholars do field experiments on elections, political parties do not,
  3. Both scholars and political parties do field experiments on elections, and
  4. Partisan researchers do field experiments on elections, scholars do not (the “worst” option).

Then, let’s suppose that we have two principles we’d like to respect:

  • Noninterference in Elections: Field Experiments on Elections are Unethical if They Might Affect the Election Outcome.
  • Free Speech: Political Parties Are Allowed to Do Experiments If They Choose to.

It is impossible to respect these (reasonable) principles and maximize social welfare.  Here’s the logic:

  1. If a field experiment might affect an election, then some political party will want to do it, but the experiment would be considered unethical.
  2. Thus, if a field experiment is unethical and we respect Free Speech, then some political party will do the field experiment.
  3. But if scholars behave in accordance with Noninterference, then they will not perform a field experiment that might affect the election outcome.
  4. This leads to the outcome “Partisan researchers do field experiments on elections, scholars do not,” which is clearly inefficient.  Indeed, it is the worst possible outcome from society’s standpoint.
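
To make the logic concrete, here is a minimal sketch (mine, not from any of the work cited here) that encodes the four worlds and the two principles and then checks which worlds survive both:

```python
# Society's ranking over the four worlds, best (rank 0) to worst (rank 3).
worlds = [
    ("nobody experiments",       {"scholars": False, "parties": False}),
    ("only scholars experiment", {"scholars": True,  "parties": False}),
    ("both experiment",          {"scholars": True,  "parties": True}),
    ("only parties experiment",  {"scholars": False, "parties": True}),
]

# Noninterference: scholars will not run experiments that might affect the
# outcome, ruling out every world in which scholars experiment.
# Free Speech (plus step 1 above): some party will experiment, ruling out
# every world in which parties abstain.
survivors = [(rank, name) for rank, (name, who) in enumerate(worlds)
             if not who["scholars"] and who["parties"]]

print(survivors)  # [(3, 'only parties experiment')] -- society's worst world
```

The only world consistent with both principles is the one society ranks last, which is exactly the impossibility described above.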

It is not my intent to judge the ethics of any particular field experiment study here, and I do believe that there are plenty of unethical designs for field experiments.  However, I am rejecting the notion that a field experiment on an election is ethical only if it does not affect the outcome of the election.  This is because it is precisely in these cases that others will do these experiments in non-transparent ways.  This is not the same as saying “other groups do unethical things, so scholars should too.”  Rather, this is saying “groups are intervening in elections in both ethical and unethical ways, so it is important for scholars to transparently learn from and about election interventions in ethical ways.”  To say that potentially affecting an election outcome is presumptively unethical implies that a scholar who values ethical behavior will never learn about how election interventions that are occurring work, what effects they might be having on us individually and collectively, and how society might better leverage the interventions’ desirable effects and mitigate their undesirable effects.

____________

[1] Relatedly and more generally, my post has (perhaps understandably) been read as defending all field experiments on elections.  My intent, however, was two-fold: (1) guaranteeing that a field experiment will have no effect on the outcome requires the experiment to be useless and thus is too strong a requirement for a reasonable notion of ethicality, and (2) coming up with a reasonable notion of ethicality requires taking (social choice) theory seriously during the design of the field experiment.

[2] One can substitute any private corporation/interest/government agency/conspiracy one wants for “political parties.”

Ethics, Experiments, and Election Administration

Nothing gets political scientists as excited as elections.  In this previous post, I discussed the Montana field experiment controversy. In that post, I pointed out that the ethics of field experiments in elections—e.g., in which some people are given additional information and others are not—are complicated.  For the majority of that post, I was attempting to respond to claims by some that ethical field experiments must have no effect on the “outcome.”[1]

Moving back from us egg-heads and our science, it dawned on me that the notion of an intervention (or treatment) is quite broad.  In particular, any change in electoral institutions—such as early voting, voter ID requirements, or partisan/non-partisan elections, to name a few—is, setting intentions aside, equivalent to a field experiment.[2]  By considering this analogy in just a bit more detail, I hope to make clear the point of my original post, which was that

In the end, the ethical design of field experiments requires making trade-offs between at least two desiderata:

1. The value of the information to be learned and
2. The invasiveness of the intervention.

Whenever one makes trade-offs, one is engaging in the aggregation of two or more goals or criteria […] and thus requires thinking in theoretical terms before running the experiment.  One should have taken the time to think about both the likely immediate effects of the experiment and also what will be affected by the information that is learned from the results.

Along these lines, consider the question of whether one should institute early voting.  There are two considerations to trade off.  On the “pro” side, early voting can enhance/broaden participation.  On the “con” side, early voting can allow people to cast less-than-perfectly informed votes, because they vote before the election campaign is over.[3]

So, is early voting ethical?  Well, the (strong and/or “straw man-ized”) arguments about the ethics of field experiments would imply that this experiment/intervention is ethical only if it doesn’t affect the outcome of the election.   It is nonsense to claim that we are collectively certain that early voting has no effect on election outcomes.[4]

So, then, the question would be whether the good (increased participation) “outweighs” the bad (uninformed voting).  If there are any voters who would have voted on election day, but instead vote early and then regret that they cannot change their vote on election day, this trade-off is contestable—it depends on (1) how important participation is to you and (2) how costly mistaken/uninformed voting is to you.  I’ll submit that these two weights are not universally shared.
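
To see how much those weights matter, here is a toy calculation (every number is invented for illustration) in which the same facts yield opposite conclusions under two different sets of weights:

```python
# Invented quantities for illustration only.
participation_gain = 100  # hypothetical additional ballots cast early
regretted_votes = 30      # hypothetical early votes the voters would revise

def net_value(w_participation, w_mistake):
    # Weighted trade-off between the "pro" and the "con."
    return w_participation * participation_gain - w_mistake * regretted_votes

print(net_value(1.0, 1.0))  #  70.0 > 0: the trade-off favors early voting
print(net_value(0.2, 1.0))  # -10.0 < 0: it does not
```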

To be clear, I favor early voting.  But that’s because I think participation is per se valuable, and most individuals’ votes are not pivotal in most elections.  That is, I think that the second dimension—uninformed voting—doesn’t affect election outcomes very often and making participation less costly is a good thing for more general social outcomes beyond elections.

But you see, that evaluation—the conclusion that early voting is ethical—is based not only on my own values, but also on an explicit, non-trivial calculation.  In thinking about the Montana experiment and similar field experiments, my point is this: if you want to be ethical, you need to do some theorizing when designing your experiment. Because an experimental manipulation of an election is—in practice—equivalent to a “reform” of election administration.[5]

With that, I leave you with this.

_____

[1] The notion of what exactly constitutes an outcome is unclear, but for this post it is okay to consider just the question of “who won the election?”

[2] I say “setting intentions aside” because critics of my position have focused on the intentions behind such interventions (see Paul Gronke’s post, for example, which quotes a casual (and accurate) footnote from my previous post).

[3] I am not an expert in all forms of early voting.  However, it is the case that in some states at least (Texas, for example), once you’ve voted early, you can’t cancel the vote.

[4] See, I didn’t even get into the mess that follows when one tries to figure out what an ethical democratic/collective norm would be, which this necessarily must be, since it concerns collective outcomes.  Strong non-interference arguments in this context would nearly immediately imply that we should all follow Rousseau’s suggestion and each go figure out the general will on our own.

[5] You can easily port this argument over to the arguments about voter ID laws, where the trade-offs are between participation and voter fraud.

Well, In a Worst Case Scenario, Your Treatment Works…

Three political scientists have recently attracted a great deal of attention because they sent mailers to 100,000 Montana voters.  The basics of the story are available elsewhere (see the link above), so I’ll move along to my points.  The researchers’ study is being criticized on at least three grounds, and I’ll respond to two of these, setting the third to the side because it isn’t that interesting.[1]

The two criticisms of the study I’ll discuss here share a common core, as each centers on whether it is okay to intervene in elections.  They are distinguished by specificity—whether it was okay to intervene in these elections vs. whether it is okay to intervene in any election.  My initial point deals with these elections, which aren’t as “pure” as one might infer from some of the narrative out there, and my second, more general point is that you can’t make an omelet without breaking some eggs.  Or, put another way, you usually can’t take measurements of an object without affecting the object itself.

[Image: the Great Seal of the State of Montana]

“Non-Partisan” Doesn’t Mean What You Think It Means.  The Montana elections in question are nonpartisan judicial elections.  The mailers “placed” candidates on an ideological scale that was anchored by President Obama and Mitt Romney.  So, perhaps the mailers affected the electoral process by making it “partisan.”  I think this criticism is pretty shaky.  Non-partisan doesn’t mean non-ideological.  Rather, it means that parties play no official role in placing candidates on the ballot.  A principal argument for such elections is a “Progressive” concern with partisan “control” of the office in question.  I’ll note that Obama and Romney are partisans, of course, but candidates for non-partisan races can be partisans, too.  Indeed, candidates in non-partisan races can, and do, address issues that are clearly associated with partisan alignment (death penalty, abortion, drug policy, etc.).  In fact, prior to the mailers, one of the races they addressed was already attracting attention for its “partisan tone.”  So, while non-partisan politics might sound dreamy, expecting real electoral politics to play in concert with such a goal is indeed only that: a dream.

Intervention Is Necessary For Learning & Our Job Is To Learn. The most interesting criticism of the study rests on concerns that the study itself might have affected the election outcome.  The presumption in this criticism is that affecting the election outcome is bad.  I don’t accept that premise, but I don’t reject it either.  A key question in my mind is whether the intent of the research was to influence the election outcome and, if so, to what end.  I think it is fair to assume that the researchers didn’t have some ulterior motive in this case.  Period.

Along these lines, Chris Blattman makes a related point about whether it is permissible to want to affect the election outcome. I’ll take the argument a step further and say that the field is supposed to generate work that might guide the choice of public policies, the design of institutions, and ultimately individual behavior itself.  Otherwise, why the heck are we in this business?

Even setting that aside, those who argue that this type of research (known as “field experiments”) should have no impact on real-world outcomes (e.g., see this excellent post by Melissa Michelson) kind of miss the point of doing the study at all.  This is because the point of the experiment is to identify the impact of some treatment/intervention on individual behavior.  There are three related points hidden in here.  First, the idea of a well-designed study is to measure an effect that we don’t already have precise knowledge of.[2]  So, one can never be certain that an experiment will have no effect: should ethics be judged ex ante or ex post?  (I have already implied that I think ex ante is the proper standpoint.)

Second, it is arguably impossible to obtain the desired measurement without affecting the outcomes, particularly if one views the outcome as being more than simply “who won the election?”    To guarantee that the outcome is not affected implies that one has to design the experiment to fail in a measurement sense.

Third, the question of whether the treatment had an effect can be gauged only imprecisely (e.g., by comparing treated individuals with untreated ones).  Knowing whether one had an effect requires measuring/estimating the counterfactual of what would have happened in the absence of the experiment.  I’ll set this aside, but note that there’s an even deeper question lurking here if one wants to think about how one would fairly or democratically design an experiment on collective choice/action situations.
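
For concreteness, here is a minimal simulation (my own sketch, not anyone’s actual method or data) of that comparison: no individual’s counterfactual is ever observed, so even with random assignment the difference in group means is only an estimate of the average effect:

```python
import random

random.seed(42)

# Simulate 10,000 voters; the treatment raises turnout probability by 5 points.
true_effect, baseline = 0.05, 0.40
treated, control = [], []
for _ in range(10_000):
    if random.random() < 0.5:  # random assignment to treatment
        treated.append(random.random() < baseline + true_effect)
    else:
        control.append(random.random() < baseline)

# We never see what a treated voter would have done untreated; we can only
# compare the two groups.
estimate = sum(treated) / len(treated) - sum(control) / len(control)
print(f"true effect: {true_effect:.3f}, estimate: {estimate:.3f}")
```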

So, while protecting the democratic process is obviously of near-paramount importance, if you want to have gold standard quality information about how elections actually work—if you want to know things like

  1. whether non-partisan elections are better than partisan elections,
  2. what information voters pay attention to and what information they don’t, or
  3. what kind of information promotes responsiveness by incumbents,

then you need to potentially affect election outcomes.  The analogy with drug trials is spot-on.  On the one hand, a drug trial should be designed to give as much quality of life to as many patients as possible.  But the question is, relative to what baseline?  A naive approach would be to say “well, minimize the number of people who are made worse off by having been in the drug trial.”  That’s easy: cancel the trial. But of course that comes with a cost—maybe the drug is helpful.  Similarly, one can’t just shuffle the problem aside by arguing for the “least invasive” treatment, because the logic unravels again to imply that the drug trial should be scrapped.

Experimental Design is an Aggregation Problem. In the end, the ethical design of field experiments requires making trade-offs between at least two desiderata:

  1. The value of the information to be learned and
  2. The invasiveness of the intervention.

Whenever one makes trade-offs, one is engaging in the aggregation of two or more goals or criteria.  Accordingly, evaluating the ethics of experimental design falls in the realm of social choice theory (see my new forthcoming paper with Maggie Penn, as well as our book, for more on these types of questions) and thus requires thinking in theoretical terms before running the experiment.  One should have taken the time to think about both the likely immediate effects of the experiment and also what will be affected by the information that is learned from the results.
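
As a toy illustration of why the aggregation step has real bite (the designs and the scores below are invented), two reasonable aggregation rules can disagree about which design is “most ethical”:

```python
# Three hypothetical designs scored on the two desiderata
# (information: higher is better; invasiveness: lower is better).
designs = {
    "postcard":   {"information": 2, "invasiveness": 1},
    "mailer":     {"information": 6, "invasiveness": 3},
    "canvassing": {"information": 9, "invasiveness": 8},
}

# Rule 1: a weighted sum of the two criteria (equal weights here).
by_sum = max(designs,
             key=lambda d: designs[d]["information"] - designs[d]["invasiveness"])

# Rule 2: lexicographic "least invasive first," breaking ties by information.
by_invasiveness = min(designs,
                      key=lambda d: (designs[d]["invasiveness"],
                                     -designs[d]["information"]))

print(by_sum, by_invasiveness)  # mailer postcard -- the rules disagree
```

Each rule is defensible on its face, and they rank the designs differently; choosing between them is precisely the aggregation problem described above.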

This Ain’t That Different From What Many Others Do All The Time. My final point dovetails with Blattman’s argument in some ways.  Note that, aside from the matter of the Great Seal of the State of Montana, nothing that the researchers did would be inadmissible if they had just done it on their own as citizens.  Many groups do exactly this kind of thing, including non-partisan ones such as the League of Women Voters, ideological groups such as Americans for Democratic Action (ADA) and the American Conservative Union (ACU), and issue groups such as the National Rifle Association (NRA) and the Sierra Club.

Thus, the irony is that this experiment is susceptible to second-guessing precisely because it was carried out by academics working under the auspices of research universities.  The brouhaha over this experiment has the potential to lead to the next study of this form—and more will happen—being carried out outside of such institutional channels.  While one might not like this kind of research being conducted, it is ridiculous to claim that it is better that it be performed outside of the academy by individuals and organizations cloaked in even more obscurity.  Indeed, such organizations are already doing it; at the least, this kind of academic research can provide us with some guess about what those other organizations are finding.[3][4]

With that, I leave you with this.

_____________

[1] One line of criticism centers on whether the mailer was deceptive, because it bore the official seal of the State of Montana. This was probably against the law. (There are apparently several other laws that the study might have violated as well, but this point travels to those as well.) While intriguing because we so rarely get to discuss the power of seals these days, this is a relatively simple matter: if it’s against the law to do it, then the researchers should not have done so.  Even if it is not against the law, I’d agree that it is deceptive.  Whether deception is a problem in social science experiments is itself somewhat controversial, but I’ll set that to the side.

[2] For example, while the reason we went to the moon was partly about “because it’s there,” aka the George Mallory theory of policymaking, it was also arguably about settling the “is it made of green cheese?” debate.  It turns out, no. :(

[3] I will point out quickly that this type of experimental work is done all the time by corporations.  This is often called “market research” or “market testing.”  People don’t like to think they are being treated like guinea pigs, but trust me…you are.  And you always will be.

[4] This excellent post by Thomas Leeper beat me to the irony of people getting upset at the policy relevance of political science research.

So Many Smells, So Little Time: In Defense of “Stinky” Academic Writing

Steven Pinker recently offered a lengthy explanation of “Why Academics Stink At Writing.”  First, it is important to note that the title of Pinker’s post is misleading.  Indeed, as he points out early on, he is actually arguing about why academic writing is “turgid, soggy, wooden, bloated, clumsy, obscure, unpleasant to read, and impossible to understand?”  This is different from why academics stink at writing—and, indeed, the claim that “academics stink at writing” is an example of stinky writing, unless one likes sweeping, pejorative generalizations.

Pinker writes that “the most popular answer outside the academy is the cynical one: Bad writing is a deliberate choice.”  I’m inside the academy, and I want to offer a non-cynical “deliberate choice” explanation for why academic writing is dense and obscure.

Pinker gets close to my explanation later in the post.[1]  Specifically, Pinker attributes dense and obscure academic writing to “the writer’s chief, if unstated, concern … to escape being convicted of philosophical naïveté about his own enterprise.”

The dense and obscure nature of much scholarly writing, of which I am a frequent producer, is at least partly the result of the author’s need to convince the reader that the author knows what the hell he or she is talking about.

Qualifications (or “hedges,” in Pinker’s terminology) such as “almost,” “apparently,” “comparatively,” “relatively,” and so forth are not necessarily “wads of fluff that imply they are not willing to stand behind what they say.”

Rather, they ironically can serve as a way to make scholarly arguments more succinct while indicating thought by the author on the matter being described.  For example, suppose that I’m describing how members of Congress tend to vote.  I could say that “voting in Congress these days is partisan.”  Is that true?  Well, not exactly.  Is it pretty close to true?  Yes, in the sense that voting in Congress is highly correlated with partisanship: Members of either party tend to vote like their fellow partisans, and this correlation is stronger today than in much of American history.  But it’s not true that members always vote with their party’s leadership.  Thus, a more accurate statement—and one that reveals that one is thinking about the data more carefully—is as follows:

Voting in Congress these days is largely partisan.

Pinker describes a lot of words as “hedging,” and they’re not all the same.  Continuing the Congressional voting example, one might wonder why Members vote as they do.  Even if one thinks that the reader doesn’t need a qualifier like “largely,” the statement “Voting in Congress these days is partisan” is still unclear. For example, is the author claiming that Members of Congress vote as they do because of their partisanship?  That is, do Members of Congress simply follow their party’s directions when voting? This is an open question, it turns out.  Accordingly, a more accurate statement is

Voting in Congress these days is at least seemingly partisan.

Yes, that sentence is hedging.  For a reason—one conclusion a reader might draw from “Voting in Congress these days is partisan” is unwarranted.  Including the “at least seemingly” qualifier is not a wad of fluff to signal that I’m not willing to stand behind what I say—it’s a key part of what I want you to hear me saying.

I could go on, but I’ll conclude with the “math of politics” of this phenomenon.  Academic writing (and here I am thinking of writing intended to be subjected to peer-review of some form) is dense and obscure because the written presentation of the research is necessarily an incomplete rendition of the research itself.  That is, peer review is about trying to verify the qualities of the argument, which often requires inferring about the processes of the research that are by necessity incompletely conveyed in the written work.  Dense and obscure writing—jargon, qualifiers, etc.—is a bigger manifestation of the typographical convention “[sic]”.  When quoting a passage with an error, such as a misspelling or grammatical mistake, it is common practice to place “[sic]” immediately after the mistake(s).  This is done because the author needs to signal to the editors, reviewers, and readers that this mistake is not the author’s fault.  Importantly, though, it illustrates more than just that—[sic] also signals that the author noticed the mistake.

Academic writing has to be dense and obscure, i.e., tough to parse, precisely because most scholars study phenomena that are tough to parse.  To continue Pinker’s theme, then, one might say that scholarly writing “stinks” because the real world “has so many smells.” Ironically, academic writing is difficult to read because it is attempting to portray what is almost always a big and variegated reality: often, the appealing parsimony of a conversational style is insufficient to accurately convey the knowledge and findings of the author.

In conclusion, academic writing is a very complicated signaling game—and I don’t mean “game” in a derogatory sense—that is necessitated by the various constraints we all labor under: time, resources, page limits, and exhaustion in both mental and physical forms. Dense and obscure language is more costly and complicated than conversational language, but this costly complication is a requisite outcome of the screening process that scholarly work is rightly subjected to.


[1] I couldn’t quite figure out how to put this in the body of this post, but the point at which Pinker turns to this argument occurs in an ironic paragraph:

In a brilliant little book called Clear and Simple as the Truth, the literary scholars Francis-Noël Thomas and Mark Turner argue that every style of writing can be understood as a model of the communication scenario that an author simulates in lieu of the real-time give-and-take of a conversation. They distinguish, in particular, romantic, oracular, prophetic, practical, and plain styles, each defined by how the writer imagines himself to be related to the reader, and what the writer is trying to accomplish. (To avoid the awkwardness of strings of he or she, I borrow a convention from linguistics and will refer to a male generic writer and a female generic reader.) Among those styles is one they single out as an aspiration for writers of expository prose. They call it classic style, and they credit its invention to 17th-century French essayists such as Descartes and La Rochefoucauld.

To be clear, it took me a couple of reads to comprehend that paragraph.  A conversational style is Pinker’s ideal for clarity—so why include the parenthetical explanation of his gendered pronouns?

#Ferguson: The Racial Disconnect On Race

Yesterday, while actively following the events in Ferguson, I was asked the following by @GenXMedia: 

White Suburban America seems riddled with apathy, excuses and disconnect about #Ferguson. Any ideas why?

Upon further prompting, it became clear that @GenXMedia wanted a response to each of the three things that White Suburban America is riddled with: apathy, excuses, and disconnect.

It is important to note that, as many of you know, this important topic does not fall squarely in my “wheelhouse.”  I mostly think about institutions and strategic models of politics.  That said, and with the usual warning that you get what you pay for, here’s my promised response.


Apathy. If we define apathy as anything less than intense interest in the unfolding story in Ferguson, then yes, unsurprisingly, it is clear that more white respondents are apathetic toward the events in Ferguson, with 54% of black respondents saying they are following the story very closely, while only 25% of white respondents say the same thing:

[Figure: Pew Research Center chart of the share of respondents following Ferguson very closely, by race]

(Here is the full Pew survey and write-up.) It’s beyond my scope here but, to understand the intricate question of how race, civil rights, and Ferguson interact, it is important to note that only 18% of Hispanic respondents said they are following the story very closely.

Sadly, these numbers aren’t surprising to me.  Apathy is a “choice” only in the technical sense.  From a common sense standpoint, apathy is the absence of a choice to care/pay attention and “not choosing to pay attention” is a heck of a lot easier when the events seem less proximate to yourself.

I’m not saying that it’s rational to be apathetic, particularly about something as important and extreme as the events in Ferguson, but the results today are consistent with several decades of research into political attitudes in America, including the fact that the perception of “linked fate” is far more prevalent among black Americans than either whites or Latinos.[1]  Linked fate is a key concept in the study of race and politics.  A recent review of this literature describes linked fate as follows:

Linked fate is generally operationalized by an index formed by the combination of two questions. First, respondents are asked: “Do you think what happens generally to Black people in this country will have something to do with what happens in your life?” If there is an affirmative response, the respondent is then asked to evaluate the degree of connectedness: “Will it affect you a lot, some, or not very much?” [2]
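
As a minimal sketch of how one might code that two-question battery (codings vary across studies; this is an illustration, not the canonical implementation):

```python
def linked_fate_index(linked, degree=None):
    """One common 0-3 coding of the two-question battery (illustration only):
    0 = no linked fate; the follow-up maps "not very much"/"some"/"a lot"
    to 1/2/3."""
    if not linked:
        return 0
    return {"not very much": 1, "some": 2, "a lot": 3}[degree]

print(linked_fate_index(True, "a lot"))  # 3
print(linked_fate_index(False))          # 0
```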

Moving beyond (and/or in addition to) linked fate, one can also argue that the incentives (or perhaps proximities) of black and white Americans differ with respect to law enforcement.  Setting aside a more detailed discussion of this, just note the similarity between the racial breakdown of people closely following the events in Ferguson and the analogous breakdown of interest across gay rights, voting rights, and affirmative action in 2013:

[Figure: Pew Research Center chart of interest in the 2013 gay rights, voting rights, and affirmative action decisions, by race]

Excuses. It’s well established that white Americans generally perceive racism to be less prevalent and less important than black Americans do.  Discussing racial attitudes in the post-Civil Rights era, Brown et al. write

In the new conventional wisdom about race, white racism is regarded as a remnant from the past because most whites no longer express bigoted attitudes or racial hatred.[3]

Simply put, the Pew survey does nothing to contradict this conclusion.  Specifically, 47% of white respondents said that “race is getting more attention than it deserves” in the coverage of the shooting of Michael Brown, while only 18% of black respondents, and only 25% of Hispanic respondents, agreed with that statement (see here for the full breakdown):

[Figure: Pew Research Center chart of responses to whether race is getting more attention than it deserves, by race]

In the end, it’s important to note that the racial divide in attention being paid to Ferguson is in line with the racial differences in individuals’ beliefs that race is an important part of the narrative.  While it is impossible to gauge causality here—namely, are fewer white people paying attention to Ferguson because they think it’s not about race or are more white people saying Michael Brown’s shooting wasn’t about race because they’re not paying attention to Ferguson—both are consistent with avoidance: simply put, issues like homelessness, inequality, and discrimination are difficult to get many people to pay sustained attention to.  I’ve argued elsewhere that politics is about problem-solving, and people like to debate problems they think can be solved.  Race is arguably the most complicated problem to solve. While by no means admirable, avoidance of the issue by those who can (i.e., white people) is not surprising.[4]

Disconnect. I’m not exactly sure how “disconnect” is different from both apathy and excuses, but I’ll take a stab and interpret this as “why do white people not seem to connect the events in Ferguson with race?”  My response here, sadly, is that they kind of do—at least insofar as the attitudes here are consistent with other similar racially charged events.  For example, following the acquittal of George Zimmerman in July 2013, Pew conducted a poll gauging reactions and attention to the case.  The racial breakdowns of responses to each are very similar to those just found in the case of Ferguson, with 60% of whites thinking the issue of race was getting more attention than it deserves, and only 13% of blacks feeling that way:

[Figure: Pew Research Center chart of reactions to the George Zimmerman verdict, by race]

Similarly, 63% of black respondents mentioned talking about the trial with friends, versus only 42% of white respondents:

[Figure: Pew Research Center chart of discussion of the Zimmerman trial, by race]

Conclusion.  My own view on this is that Ferguson is most decidedly a racial issue.  This isn’t the same as saying that anyone involved is (or isn’t) racist.  Indeed, that issue, to me, misses the larger and more important point. In fact, while the racial realities of Michael Brown’s death—an unarmed black American killed by a white police officer—undoubtedly thrust race forward into the discussion, race should have been part of the discussion anyway.

That’s because any of the multiple dimensions of the context of Ferguson—the historical discrimination, the economic inequality, the political disparities, the unrepresentative political institutions, and the more general “special” features of local elections, to name just a few—make not only Michael Brown’s death, but also the largely and sadly ham-handed response to it, a racial issue.

So, why don’t more white people see this?  A succinct (though definitely not exculpatory) answer is inertia: attitudes, like objects, tend to stay the same until acted upon by an outside force. The reality of America is that white Americans are less likely to see their fates as being linked with those of black Americans and (perhaps because) they are less likely to face the everyday inequalities faced by far too many black Americans. In other words, and quite literally, most white Americans don’t often encounter an outside force with respect to race—definitely not like many black Americans do.  Whether they achieve this through apathy, excuses, and/or disconnect is a trickier question, but the correlation—the reality that race still divides Americans’ perceptions of politics and power—is sadly indisputable and robust, even in the 21st century.

____________

[1] See Dawson, Michael C. Behind the Mule: Race and Class in African-American Politics. Princeton University Press, 1994.
[2] From Paula D. McClain, Jessica D. Johnson Carew, Eugene Walton, Jr., and Candis S. Watts. 2009. “Group Membership, Group Identity, and Group Consciousness: Measures of Racial Identity in American Politics?” Annual Review of Political Science, p. 477.
[3] From Michael K. Brown, Martin Carnoy, Troy Duster, and David B. Oppenheimer. Whitewashing Race: The Myth of a Color-Blind Society. University of California Press, 2003, p. 36.
[4] Another, stronger, view of this is called “white privilege,” which describes the fact that issues that can be avoided are also deemed less important to others, without noticing that the ability to avoid these issues is not independent of race. (Thanks to Jessica Trounstine for adroitly directing me to this connection, as well as posting this telling graphic.)


Makes Us Stronger: The Math of Protest and Repression

Like many people, especially here in St. Louis, the ongoing events in Ferguson have consumed my attention and, frankly, really shaken me.  After much thought, I have possibly come up with a manageable take on one angle of “the math of” the situation.

It is important to distinguish protest from rebellion, and the distinction turns on intent.  Rebels intend to replace the government.  Protesters intend to change policy.

Protest, not rebellion, is what is happening in Ferguson.

In the end, this distinction is important because, in a nutshell, rebels don’t care what the government “thinks.”  In fact, rebellions are sometimes most successful when the government doesn’t notice them (until too late). Protesters, on the other hand, are directly attempting to change what the government (and/or other voters) “thinks.”[1]  In another nutshell, protest is about changing the government’s beliefs about who is upset about the policy in question, and how upset they are.

Protest is a form of costly signaling. Costly signaling describes any action that, because it is “expensive” or “unpleasant,” can convey something about oneself to others.[2]  Costly signaling is generally more informative than “cheap talk” signaling, in which one basically just says “hey, I am mad” but pays no cost to do so.

I’m not the first to make this point, of course.  But I wanted to bring it up again because thinking about the incentives to signal through protest can help us understand (some of) the events in Ferguson.  Below, I try to succinctly make a couple of points along these lines.

Protests are instrumentally rational only if they might work. Protest is perhaps the canonical example of collective action: the problem for organizers is convincing citizens that participation will have some effect.[3]  The probability that a protest will have an effect is, generally speaking, an increasing function of the number of protesters.  This highlights one incentive for anyone trying to prevent the desired change: namely, clear the streets. By keeping protesters off the street, the government eliminates the possibility of the protesters sending (one type of) costly signal to those citizens “on the sidelines.”  This is really effective if the government can simply keep the streets clear from the beginning.[4] However, once protesters are “on the streets,” clearing the streets can have unintended consequences that become clear in a costly signaling framework.  Specifically:

Putting down a protest increases protest’s signaling value. Think about it this way: suppose that the government started giving money to those who showed up at the protest.  The “protest” would probably grow in size, right?  It would also become less informative about “who is upset about the policy in question, and how upset they are.”  This is because some of the people there are presumably there only for the money.  Indeed, some people who are really upset about the policy but were (for example) missing work for the protest might leave when the government starts giving away money, because their individual presence at the “protest” would have a smaller ultimate effect on the policy.

The converse of this logic can hold, too: by tear-gassing and shooting rubber bullets at citizens, the government amplifies the content/credibility of the message the protesters are trying to send.[5]
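
Here is a toy simulation of that logic (a stylized model of my own, not an estimate of anything in Ferguson): raising the cost of protesting shrinks the crowd, but it raises the average anger that showing up reveals:

```python
import random

random.seed(0)

# Each citizen's anger at the policy, drawn uniformly from [0, 1].
anger = [random.random() for _ in range(100_000)]

# Citizens protest only if their anger exceeds the cost of showing up.
for cost in (0.1, 0.5, 0.9):
    crowd = [a for a in anger if a > cost]
    avg = sum(crowd) / len(crowd)
    print(f"cost={cost:.1f}: turnout={len(crowd):>6}, average anger={avg:.2f}")
```

A bigger crowd at a lower cost says less about each participant; a smaller crowd at a higher cost says more.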

Conclusion: Two Reasons to Not Clear The Streets.  There’s more that can be said within the costly signaling conception of protest, of course, but I’ll keep this short and simply point out that clearing the streets is not only fundamentally undemocratic and counter to fundamental American values—it can easily lead to ironic results.  Understanding the proper response to protest (even if based on cynical motives) requires thinking about why the protesters are there.  They aren’t just upset—they’re trying to show others how upset they are.[6]

Good governments don’t threaten their citizens because it’s wrong to do so.
Smart governments don’t threaten their citizens because it’s stupid to do so.

Given the events of the past 10 days, I’ll take either type.

With that, I leave you with this.

_______________

[1] This is a blog post, so I will simply note the sloppiness of ascribing “thought” to a deceptively simple collective such as “the government.” Apologies to Ken Arrow, as appropriate.

[2] I make a lot of costly signaling arguments on this blog (e.g., here, here, here). This is itself a costly signal of how useful I believe the concept to be. KAPOW!

[3] And, to complicate things, this cuts both ways: the successful organizer must convince his or her followers that the outcome can be achieved, but only with the followers’ help.

[4] Arguably not too different from the policy that is being attempted today in Ferguson (8/18).

[5] This is particularly true now that there are so many excellent livestreams of protests.

[6] I thought about discussing the incentives of the government to portray its actions as being “not about the protest” (i.e., protecting property, responding to gunshots/fireworks?) but I’ll leave that for another post.

The Bigger The Data, The Harder The (Theory of) Measurement

We now live in a world of seemingly never-ending “data” and, relatedly, one of ever-cheaper computational resources.  This has led to lots of really cool topics being (re)discovered.  Text analysis, genetics, fMRI brain scans, (social and anti-social) networks, campaign finance data… these are all areas of analysis that, practically speaking, were “doubly impossible” ten years ago: neither the data nor the computational power to analyze the data really existed in practical terms.

Big data is awesome…because it’s BIG.  I’m not going to weigh in on the debate about what the proper dimension is to judge “bigness” on (is it the size of the data set or the size of the phenomena they describe?).  Rather, I just wanted to point out that big data—even more than “small” data—require data reduction prior to analysis with standard (e.g., correlation/regression) techniques.  More generally, theories (and, accordingly, results or “findings”) are useful only to the extent that they are portable and explicable, and these each generally necessitate some sort of data reduction.  For example, a (good) theory of weather is never ignorant of geography, but a truly useful theory of weather is capable of producing findings (and hence being analyzed) in the absence of GPS data. A useful theory of weather needs to be at least mostly location-independent.  The same is true of social science: a useful theory’s predictions should be largely, if not completely, independent of the identities of the actors involved.  It’s not useful to have a theory of conflict that requires one to specify every aspect of the conflict prior to producing a prediction and/or prescription.

Data reduction is aggregation.  That is, data reduction takes big things and makes them small by (colloquially) “adding up/combining” the details into a smaller (and necessarily less-than-completely-precise) representation of the original.

Maggie Penn and I have recently written a short piece, tentatively titled “Analyzing Big Data: Social Choice & Measurement,” to hopefully be included in a symposium on “Big Data, Causal Inference, and Formal Theory” (or something like that), coordinated by Matt Golder.[1]

In a nutshell, our argument in the piece is that characterizing and judging data reduction is a subset of social choice theory.  Practically, then, we argue that the empirical and logistical difficulties with trying to characterize the properties/behaviors of various empirical approaches to dealing with “big data” suggest the value of the often-overlooked “axiomatic” approaches that form the heart of social choice theory.  We provide some examples from network analysis to illustrate our points.
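
In the spirit of those network examples (the graph and the two reduction rules below are my own, chosen only to make the point), two perfectly reasonable reductions of the same network can disagree about which node matters most:

```python
# A small undirected graph, stored as an adjacency list.
edges = [("A", "x"), ("A", "y"), ("A", "z"),
         ("B", "C"), ("B", "D"),
         ("C", "E"), ("C", "F"), ("C", "G"),
         ("D", "H"), ("D", "I"), ("D", "J")]

neighbors = {}
for u, v in edges:
    neighbors.setdefault(u, set()).add(v)
    neighbors.setdefault(v, set()).add(u)

# Reduction 1: degree (count your neighbors).
degree = {n: len(nb) for n, nb in neighbors.items()}

# Reduction 2: sum of neighbors' degrees (reward well-connected neighbors).
neighbor_degree = {n: sum(degree[m] for m in nb) for n, nb in neighbors.items()}

print(max(degree, key=degree.get))                    # C (tied with D at 4)
print(max(neighbor_degree, key=neighbor_degree.get))  # B -- its neighbors are hubs
```

Each reduction is defensible, and they disagree; characterizing and choosing among such rules is exactly the kind of question social choice theory is built for.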

Anyway, I throw this out there to provoke discussion as well as troll for feedback: we’re very interested in complaints, criticisms, and suggestions.[2]  Feel free to either comment here or email me at jpatty@wustl.edu.

With that, I leave you with this.

______________________
[1] The symposium came out of a roundtable that I had the pleasure of being part of at the Midwest Political Science Association meetings (which was surprisingly well-attended—you can see the top of my coiffure in the upper left corner of this picture).

[2] I’m also always interested in compliments.


The Math of Getting a Job in Political Science

The “academic job market season” in political science starts in the fall and continues through the early spring.[0]  If you aren’t familiar with how the academic job market works, it’s basically still old school: schools post ads looking to hire for a more or less specialized position, applicants (“candidates”) send in “packets” containing a curriculum vitae (“CV”), a statement of their teaching and research interests, some writing samples (“papers”), and typically three letters of recommendation.[1] At this point…

Obviously, this is a stressful time for applicants.

…a committee of faculty will review the applications and create a “short list” of candidates to interview.

Still stressful for applicants…and sometimes committee members.

Those candidates then typically visit the campus, meet with faculty, and give a “job talk” concerning one of their writing samples.  After that…

VERY stressful time for the short-listed candidates…

…and oftentimes members of the department, too.

the committee makes a recommendation to the department, the department chooses somebody to recommend to the Dean, and the Dean then (usually) authorizes an offer to the department’s recommended candidate.[2]  Negotiations then ensue, but I’ll leave that matter for another day.


In this post, I want to offer a brief series of pieces of advice about how to approach this stressful time.  I’ve been lucky enough to see both sides of the market a few times, and there is a lot of uncertainty/misinformation/folklore about how it works.

Before diving in, let me be clear that I understand that this is all “through my eyes.”  Everyone’s experiences and opinions can differ from my own, and my stating something to the contrary should not be taken as evidence that I disagree with conflicting advice.  In other words, you get what you pay for.

The CV. (Writing this, it dawned on me I should put some skin in the game.  This is my publicly available CV.)

Without a doubt in my mind, the CV is the most important part of the typical packet.[3]  Search committee members have to review sometimes hundreds of files, and time waits for no one.  For better or worse, committee members use various cues in determining whether to dig more deeply into a file.

For the sake of parsimony, there are three key characteristics of a “good CV.”

1. Clarity. Don’t get fancy with formatting.  The top of the first page of the CV should include:

a. Your contact information,

b. Your education history from Bachelor’s Degree through to the (perhaps expected) PhD (including title of, and committee for, your dissertation),

c. Your publications and working papers available for circulation.

It probably should not contain:

a. Work experience (this goes later in the CV, if relevant to your research or if you’ve spent significant time (>1 year) working in the real world),

b. Descriptions of your papers (these go in your research statement, in the abstracts of the papers themselves, and on your website)[4]

c. Awards/grants/media appearances/blogging[5]/etc.  (These should go later in the CV; see “Papers: Appear Prepared to Publish or Prep to Perish,” below.)

2. Keep It Short. Despite the meaning (“course of life”), this isn’t about your whole life.  Your CV on the job market is arguably the best indicator of what your CV will look like at “tenure time” in 6-8 years.  Accordingly, because—from a CV standpoint—tenure is about publishing,[6] and because the only thing faculty dread more than not hiring is hiring somebody that they will have to worry about at tenure time, the easiest thing to find in your CV should be your research.

What I’m saying here is that you don’t need to put your proficiency in using WordStar/LaTeX/R/Stata/SAS/SPSS/etc, your high school awards, your Mensa membership, etc. on your CV.

3. That said, don’t worry too much about #2. The point is that you should make the top of your vita quickly indicate what you’ve written and where you’re coming from. If you’ve still got the reader’s attention, they are probably interested in knowing more about you.  Just remember to keep it brief.

When in doubt, remember this: your CV needs to make a case for you, and quickly.

Realistically, think about what academics talk about when describing other academics:

1. What they’ve published (or sometimes what they are currently working on),

2. Where they work (and have worked), and

3. Where they got their PhD.

You want your CV to communicate with a busy reader who talks about other people in this way.  You need to communicate with him or her quickly about how he or she should convince others to read your papers/letters/etc.

MAKE IT EASY FOR OTHERS TO “SELL YOU” IN THEIR USUAL WAY.

Papers: Appear Prepared to Publish or Prep to Perish. This piece of advice is easier to give than to follow:

Have several papers.  On different (but not too different) topics.  Write papers on your own and with other graduate students and (less valuable to you at this point) other faculty.  In general:

Be active. Write lots of papers.

There are two sufficient conditions to “kill” (or at least seriously harm) a candidate:

1. The file doesn’t make a case quickly. (See “The CV,” above: keep it succinct.)

2. The file doesn’t project a clear narrative of what your “tenure-able CV” is going to look like in 6-8 years.

In short, publishing is always a crapshoot: the more ideas you put on paper and send out, the more publications you will have.  More importantly, the more interesting and vibrant a colleague you are likely to be.[7]

Put another way, the “quality or quantity” question presents a false dichotomy in the sense that—at least in my experience—it is nearly impossible to accurately judge the quality of your own ideas and schemes in any a priori way.  This is due to the fact that quality is ultimately judged by your peers upon publication. Accordingly, to accurately and precisely judge the quality of one’s idea prior to writing it down and sending it out for review requires (1) knowing what others will judge “high quality” and (2) knowing what will get accepted/published.  Take it as a maxim that almost nobody is good at judging either of these, much less both, and particularly not with respect to their own ideas.

Outside The Packet. The final piece of advice I have is beyond your packet.  It is simple:

Put yourself out there.

This is a job that requires, and indeed is made of, rejection.  It requires fortitude to write something and claim that it is “new,” “important,” and “worthy,” only to have 2-3 nameless unpaid, busy peers look upon it skeptically.  In and beyond the job market, every “key to success” I’ve seen or experienced can be described as

Letting others know what you’re interested in and what you’re doing.

Practically, how does one do this?

a. Send emails.  Unsolicited, email others to see if you can buy them coffee at conferences.  Do not be ashamed of emailing those at schools that are hiring in your field: this is your career, and sending that email is not only possibly the best way to get your packet “looked at twice,” it indicates the kind of gumption and initiative that positively predicts having a tenure-able CV in 6-8 years (see above).

b. Send your papers to other people/conferences/special issues.  Rejection is the future.  In general, people don’t like to reject something, people like to be thought of as important/worthy of seeking advice from, and scholars got into this job to read/argue/write.  Engage.  You will not always like what you hear back (e.g., “nothing”), but this is the game. Taking the risks now is costly, and signals you’ll keep taking them on the tenure track.

c. Volunteer to do the things that you want to do. Graduate students and junior faculty frequently ask “how do I get asked to review papers by journal X?”  The answer is simple: email the editor(s) of Journal X and tell them you’d like to review papers for Journal X.[8]

Summary. Look, there ain’t much you can do after you send in the packet (except email people—see above).  Relax as best you can, and finish the dissertation/dive into the next project. I don’t have a silver bullet, but hopefully I have provided some support for the contention that a research academic career in political science is generally promoted by presenting an efficient picture of what you have done and will do, and making it clear that you’re willing and able to “take the emotional risks” generally required to get others to pay attention, and respond, to your thoughts and work.  In the end, the applicant is always “the prospective new kid at the table.”  Make it easy for your future colleagues to see why you’ll be a good, productive, and vibrant neighbor and colleague.  In other words: (1) keep it simple and to the point, (2) put yourself out there….(3) have a drink, take a nap, try to forget the stress for a moment, and (4) get back to work.

Because, when you’ve won this crazy lottery, you’ll need to repeat steps (1)-(4) for about 6-8 years.

With that, I leave you with this.

_________

[0] For better or worse, my discussion here is focused on academic jobs at “research Universities.”  Again, and throughout, I readily acknowledge that my experience and the applicability of my “advice” is limited in this, and doubtlessly other, respects.

[1] There are usually other items, too, including a cover letter, transcripts and teaching evaluations.

[2] Lots of (generally minor) variation here across departments.

[3] Some people say the cover letter is the most important part for the same reasons I say the CV is the most important.  I understand why these people say this, and report it faithfully, but I aver that more faculty look at the CV first than the sum of those who even read the cover letter.  That said, cover letters are part of the packet and should be treated seriously: higher-ups of various sorts can and do review packets, and a sloppy cover letter looks bad in any event.  Even so, “the shorter, the sweeter” in my opinion: fewer words implies fewer opportunities to write “you’re job” instead of “your job.”

[4] Note at this point that I say this because the CV’s importance is that it minimizes the reader’s cost in establishing “who you are.”  While you want people to know the details of your work, you first want them to think that they are interested in you as a scholar/potential colleague.

[5] See what I did there?  Do I?

[6] I say “from a CV standpoint” for an important reason.  Tenure is about research, teaching, service, and research, plus a little research…but it’s also about teaching and service (and not being a jerk).  The important aspects of teaching and service from a tenure standpoint aren’t (and arguably can’t/shouldn’t) be described on a CV.  That’s my point: your CV is first and foremost your self-proffered portrait of your research presence.

[7] Yes, there is a theoretical limit beyond which you are publishing “too much.”  But, let’s be honest: simple realities of life and finitude of mental energy will keep most of us from ever approaching that event horizon.

[8] I write this as the co-editor of the Journal of Theoretical Politics. Accordingly, I feel I can speak for my fellow co-editor, Torun Dewan, when I encourage you to email me with such a pronouncement.

If Keyser Söze Ruled America, Would We Know?

In this post on Mischiefs of Faction, Seth Masket discusses the recent debate about whether the (super-)rich are overly influential in American politics.  I’ve already said a bit about the recent Gilens and Page piece that provides evidence that rich interests might have more pull than those of the average American.  In a nutshell, I don’t believe that the (nonetheless impressive) evidence presented by Gilens and Page demonstrates that the rich are actually driving, as opposed to responding to, politics.[1]

Seth’s post echoes my skepticism in some respects.  First, rich and “super rich” donors are less polarized than are “small” donors.  Second, and perhaps even more importantly, admittedly casual inspection of REALLY large donors suggests that they are sometimes backing losing causes.  As Seth writes,

…the very wealthy aren’t necessarily getting what they’re paying for. Note that Sheldon Adelson appears in the above graph. He’s pretty conservative, according to these figures, and he memorably spent about $20 million in 2012 to buy Newt Gingrich the Republican presidential nomination, which kind of didn’t happen […] he definitely didn’t get what he paid for. (Okay, yeah, he sent a signal that he’s a rich guy who will spend money on politics, but people knew that already.)

While most donations aren’t quite at this level, they nonetheless follow a similar path, with a lot of them not really buying anything at all. To some extent, the money gives them access to politicians, which isn’t nothing.[2]

The Adelson point raises another problem we need to confront when looking for the influence of money in American politics.  Since the 1970s, most federal campaign contribution data has been public.  Furthermore, even the ways in which one can spend money that are less transparent (e.g., independent expenditures) can be credibly revealed to the public if the donor(s) want to do so.

Thus, a rich donor with strong, public opinions could achieve influence on candidates—even or especially those he or she does not contribute to—by donating a bunch of money to long-shot, extreme/fringe candidates.  This is a costly signal of how much the donor cares about the issue(s) he or she is raising, and might lead to other candidates “etch-a-sketching” their positions closer to the goals of the donor.  Indeed, these candidates need not expect to ever receive a dime from the donor in question: they might just want to “turn off the spigot” and move on with the other dimensions of the campaign.

Furthermore, such candidates might actually prefer not to receive donations/explicit support from these donors.  After all, a candidate might not want to be associated with the donor on either personal or policy grounds (do you think anyone is courting Donald Sterling for endorsements right now?) or, even more ironically, the candidate might worry about being seen as “in the donor’s pocket.”  Finally, there are a lot of rich donors, and they don’t espouse identical views on every topic.  As Seth notes,

“politicians are wary of boldly adopting a wealthy donor’s views, and … they hear from a lot of wealthy donors across the political spectrum, who probably have conflicting ideas”

Overall, tracing political influence through known-to-be-observable actions such as donations, press releases, and endorsements is perilous.  A truly influential individual sometimes wants to minimize the public’s awareness of his or her influence, particularly when that influence is being exercised through others.  It is useful to always remember Kevin Spacey’s line from The Usual Suspects:

The greatest trick the Devil ever pulled was convincing the world he didn’t exist.[3][4]

From an empirical standpoint, I think the current debate about influence in American politics is interesting: for example, it is motivating people to think about both what data can be collected and innovative ways to manipulate and visualize it.  But I caution against the temptation to jump from it to wholesale normative judgments about the state of American politics.  Specifically, there’s another Kevin Spacey line in The Usual Suspects that is useful to remember as politicos and pundits debate who truly “controls” American politics:

To a cop, the explanation is never that complicated. It’s always simple. There’s no mystery to the street, no arch criminal behind it all. If you got a dead body and you think his brother did it, you’re gonna find out you’re right.

_____________

[1] This is what is known as an “endogeneity problem.”  While some people roll their eyes at such claims, I provided a theory supporting the claim that such a problem might exist (and could provide more than a couple of additional ones).  Hence, I humbly assert that the burden of proving that this is not a problem rests on those who claim that the evidence is indeed “causal” in nature.

[2] As a side note, I’ve also argued that donors should be expected to have more access to politicians than non-donors, and that this need not represent a failing of our (or any) democratic system.

[3] Verifying my memory of this quote, I found out that it is a restatement of a line by Baudelaire: “La plus belle des ruses du diable est de vous persuader qu’il n’existe pas” (roughly: “the devil’s finest trick is persuading you that he does not exist”).  I have no idea what this has to do with anything, but I feel marginally more erudite after copy-and-pasting French into my post.

[4] I will simply note in passing the link between this and the entirety of the first two seasons of the US version of House of Cards.


How Political Science Makes Politics Make Us Less Stupid

This post by Ezra Klein discusses this study, entitled “Motivated Numeracy and Enlightened Self-Government,” by Dan M. Kahan, Erica Cantrell Dawson, Ellen Peters, and Paul Slovic.  The gist of the post and the study is that people reason less carefully about statistical evidence when that evidence concerns a politically charged issue.

The study presented people with “data” from a (fake) experiment about the effect of a hand cream on rashes.  There were two treatment groups: one group used the cream and the other did not.  The group that used the skin cream had more subjects reported (i.e., more raw responses) but a lower success rate.[1]  Mathematically/scientifically sophisticated individuals should realize that the key statistics are the ratios of successes to failures within each treatment group, not the absolute numbers of successes.
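To make the arithmetic concrete, here is a minimal sketch in Python.  The counts below are made up for illustration (they are not the study’s actual figures); the point is only that the correct comparison is the within-group success rate, not the raw count of successes.

```python
# Hypothetical 2x2 contingency table (illustrative numbers only,
# not the figures from Kahan et al.):
#                        rash improved   rash got worse
#   used skin cream           200              100
#   did not use cream          80               20

groups = {
    "used skin cream": (200, 100),
    "did not use cream": (80, 20),
}

for label, (improved, worsened) in groups.items():
    rate = improved / (improved + worsened)
    print(f"{label}: {improved} improved, success rate {rate:.0%}")

# used skin cream: 200 improved, success rate 67%
# did not use cream: 80 improved, success rate 80%
#
# The cream group has MORE improvements in absolute terms (200 > 80)
# but a LOWER success rate (67% < 80%): focusing on the raw counts
# gets the comparison exactly backwards.
```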

This was the baseline comparison, as it considered a nonpolitical issue (whether to use the skin cream).  The researchers then conducted the same study with a change in labeling. Rather than reporting on the effectiveness of skin cream, the same results were labeled as reporting the effectiveness of gun-control laws. All four treatments of the study are pictured below.

[Figure: “Gunning for Mathematical Literacy”: the four treatment conditions of the study]

I want to make one methodological point about this study: the gun control treatments were not apples-to-apples comparisons with the skin cream treatments and, furthermore, the difference between them illustrates an important distinction between well-done science and the messy realities of real-world (political/economic) policy evaluation.

Quoting from page 10 of the study,

Subjects were instructed that a “city government was trying to decide whether to pass a law banning private citizens from carrying concealed handguns in public.” Government officials, subjects were told, were “unsure whether the law will be more likely to decrease crime by reducing the number of people carrying weapons or increase crime by making it harder for law-abiding citizens to defend themselves from violent criminals.” To address this question, researchers had divided cities into two groups: one consisting of cities that had recently enacted bans on concealed weapons and another that had no such bans. They then observed the number of cities that experienced “decreases in crime” and those that experienced “increases in crime” in the next year. Supplied that information once more in a 2×2 contingency table, subjects were instructed to indicate whether “cities that enacted a ban on carrying concealed handguns were more likely to have a decrease in crime” or instead “more likely to have an increase in crime than cities without bans.” 

The sentence I have in mind is the one describing how the data were generated: researchers “had divided cities into two groups,” one with recently enacted bans and one without.  This is the core of my main point here.  The subjects were never told that the data were experimental.  Rather, the description makes clear that the data are observational.  In other words, in the hypothetical example, cities were not randomly assigned to implement gun-control laws.

While this might seem like a small point, it is a big deal.  To be direct about it: gun-control laws are adopted where they are perceived to be possibly effective at reducing gun crime,[2] they are controversial,[3] and accordingly they are more likely to be adopted in cities where gun crime is perceived to be bad and/or getting worse.

Without randomization, one needs to control for the cities’ situations to gain some leverage on what the true counterfactual in each case would have been.  That is, what would have happened in each city that passed a gun-control law if it had not passed one, and vice-versa?
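To see how this selection can poison the naive comparison, here is a small simulation sketch, assuming a deliberately stylized model (the numbers are hypothetical, not estimates from any real crime data).  Cities with rising crime are the ones that adopt the law, the law genuinely reduces crime, and yet the naive “did crime go up or down?” comparison makes the law look harmful.

```python
import random

random.seed(42)

# Stylized model: each city has an underlying crime trend, and cities
# with rising crime are far more likely to adopt a gun-control law.
# The law is given a genuine crime-REDUCING effect.
TRUE_EFFECT = -3.0  # the law shaves 3 points off next year's crime change

increases = {True: 0, False: 0}
totals = {True: 0, False: 0}

for _ in range(10_000):
    trend = random.gauss(0, 5)                    # underlying crime trend
    adopts = trend > 0 and random.random() < 0.8  # rising-crime cities adopt
    change = trend + (TRUE_EFFECT if adopts else 0.0) + random.gauss(0, 2)
    totals[adopts] += 1
    if change > 0:
        increases[adopts] += 1

for adopts in (True, False):
    share = increases[adopts] / totals[adopts]
    label = "adopted ban" if adopts else "no ban"
    print(f"{label}: {share:.0%} of cities saw crime increase")

# Typical output: the adopting cities show crime increases far more often
# than the non-adopters, even though the ban REDUCES crime relative to
# each city's counterfactual.  Selection on trends, not the law's effect,
# drives the naive comparison.
```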

To make this point even more clearly, consider the following hypothetical.  Suppose that instead of gun-control laws and crime prevention, we compared cities’ observed use of fire trucks with how many houses ultimately burned down.  Such a treatment is displayed below.

[Figure: the hypothetical “fire trucks” version of the 2×2 table]

From this hypothetical, the logic of the study implies that a sophisticated subject is one who says “sending out fire trucks causes more houses to burn down.”  Of course, a basic understanding of fires and fire trucks strongly suggests that such a conclusion is absolutely ridiculous.

What’s the point?  After all, the study shows that partisan subjects were more likely to say that the treatment their partisanship would tend to support (gun control for Democrats, no gun control for Republicans) was the more effective one.  This is where the importance of counterfactuals comes in.  Let’s reasonably presume for simplicity that “Republicans don’t support gun control” because they believe it is insufficiently effective at crime prevention to warrant the intrusion on personal liberties, and that “Democrats support gun control” because they believe, conversely, that it is sufficiently effective.[4]  Then these individuals, given that the hypothetical data were not collected experimentally, could arguably look at the data in the following ways:

  • A Republican, when presented with hypothetical evidence of gun-control laws being effective, could argue that, because towns adopt gun-control laws during a crime wave, regression to the mean might lead the evidence to overestimate the effectiveness of gun-control laws at reducing crime.  That is, gun-control laws are ineffective, and they are implemented as responses to transient bumps in crime (see the sketch after this list).
  • A Democrat, when presented with hypothetical evidence of gun-control laws being ineffective, might reason along the lines of the fire truck example: cities that adopted gun-control laws were experiencing increasing crime, and the proper comparison is not the raw change in crime but the change relative to the unobserved counterfactual.  That is, cities that implement gun-control laws are less crime-ridden than they would have been had they not implemented the measures, but the measures themselves cannot ensure a net reduction in crime when other factors are driving crime rates up.
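The regression-to-the-mean story in the first bullet can be given teeth with the same kind of toy simulation as above (again, the model and numbers are stylized and hypothetical).  Here the law has zero effect, cities adopt it only after an unusually bad year, and crime then drifts back toward each city’s baseline on its own, flattering the law.

```python
import random

random.seed(7)

# Stylized regression-to-the-mean story: each city's crime rate is a
# stable baseline plus transient year-to-year noise.  Cities adopt a
# (totally ineffective) law after an unusually bad year.
decreases = {True: 0, False: 0}
totals = {True: 0, False: 0}

for _ in range(10_000):
    baseline = random.gauss(50, 10)
    this_year = baseline + random.gauss(0, 8)  # transient shock
    adopts = this_year > baseline + 8          # adopt only after a bad year
    next_year = baseline + random.gauss(0, 8)  # the law has ZERO effect
    totals[adopts] += 1
    if next_year < this_year:
        decreases[adopts] += 1

for adopts in (True, False):
    share = decreases[adopts] / totals[adopts]
    label = "adopted ban" if adopts else "no ban"
    print(f"{label}: {share:.0%} of cities saw crime decrease")

# Typical output: the adopting cities, selected precisely because this
# year's crime was a transient high, overwhelmingly see crime "fall"
# the next year, even though the law does nothing.  Regression to the
# mean, not policy effectiveness, produces the apparent success.
```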

Conclusion. The mathofpolitics points of this post are two.  First, it is completely reasonable that partisans have more well-developed (“tighter”) priors about the effectiveness/desirability of various policy choices.  When we think about the adoption of policies in the real world, it is also reasonable that these beliefs will drive the observed adoption of policies.  And, for almost every policy of any importance, the proper choice depends on the “facts on the ground”: different times, places, circumstances, and people typically call for different choices.  Forgetting this will lead one to naively conclude that chemotherapy causes people to die from cancer.

Second, it’s really time to stop picking on voters.  Politics does not make you “dumb.”  People have limited time, use shortcuts, take cues from elites, etc., in every walk of life.  Traffic-drawing headlines and pithy summaries like “How politics makes us stupid” are elitist and, ironically, anti-intellectual.  The Kahan, Dawson, Peters, and Slovic study is really cool in a lot of ways, and my methodological criticism is in a sense a virtue: it highlights the unique way in which science must be conducted in real-world political and economic settings.  Some policy changes cannot be implemented experimentally for normative, ethical, and/or practical reasons, but it is nonetheless important to attempt to gauge their effectiveness in various ways.  Thinking about this and, more broadly, about how such evidence is and should be interpreted by voters is arguably one of the central purposes of political science.

With that, I leave you with this.

Note: I neglected to mention this study, “Partisan Bias in Factual Beliefs about Politics” (by John G. Bullock, Alan S. Gerber, Seth J. Hill, and Gregory A. Huber), which shows that some of the “partisan bias” can be removed by offering subjects tiny monetary rewards for being correct.  Thanks to Keith Schnakenberg for reminding me of this study.

____________

[1] The study manipulated whether the cream was effective or not, but I’ll frame my discussion with respect to the manipulation in which the cream was not effective.

[2] Note that this is not to say that all “cities” perceive gun-control laws to be effective at reducing gun crime, just that only those cities in which the laws are perceived to be possibly effective will adopt them.

[3] Again, in cities where such a law is not controversial, one might infer something about the level of crime (and/or gun ownership) in that city.

[4] I am also leaving aside the possibility that Republicans like crime or that Democrats just don’t like guns.