Three political scientists have recently attracted a great deal of attention because they sent mailers to 100,000 Montana voters. The basics of the story are available elsewhere (see the link above), so I’ll move along to my points. The researchers’ study is being criticized on at least three grounds, and I’ll respond to two of these, setting the third to the side because it isn’t that interesting.
The two criticisms of the study I’ll discuss here share a common core, as each centers on whether it is okay to intervene in elections. They are distinguished by specificity—whether it was okay to intervene in these elections vs. whether it is okay to intervene in any election. My initial point deals with these elections, which aren’t as “pure” as one might infer from some of the narrative out there, and my second, more general point is that you can’t make an omelet without breaking some eggs. Or, put another way, you usually can’t take measurements of an object without affecting the object itself.
“Non-Partisan” Doesn’t Mean What You Think It Means. The Montana elections in question are nonpartisan judicial elections. The mailers “placed” candidates on an ideological scale that was anchored by President Obama and Mitt Romney. So, perhaps the mailers affected the electoral process by making it “partisan.” I think this criticism is pretty shaky. Non-partisan doesn’t mean non-ideological. Rather, it means that parties play no official role in placing candidates on the ballot. A principal argument for such elections is a “Progressive” concern with partisan “control” of the office in question. I’ll note that Obama and Romney are partisans, of course, but candidates in non-partisan races can be partisans, too. Indeed, candidates in non-partisan races can, and do, address issues that are clearly associated with partisan alignment (the death penalty, abortion, drug policy, etc.). In fact, prior to this, one of the races addressed in the mailers was already attracting attention for its “partisan tone.” So, while non-partisan politics might sound dreamy, expecting real electoral politics to play in concert with such a goal is indeed only that: a dream.
Intervention Is Necessary For Learning & Our Job Is To Learn. The most interesting criticism of the study rests on concerns that the study itself might have affected the election outcome. The presumption in this criticism is that affecting the election outcome is bad. I don’t accept that premise, but I don’t reject it either. A key question in my mind is whether the intent of the research was to influence the election outcome and, if so, to what end. I think it is fair to assume that the researchers didn’t have some ulterior motive in this case. Period.
That said, along these lines, Chris Blattman makes a related point about whether it is permissible to want to affect the election outcome. I’ll take the argument a step further and say that the field is supposed to generate work that might guide the choice of public policies, the design of institutions, and ultimately individual behavior itself. Otherwise, why the heck are we in this business?
Even setting that aside, those who argue that this type of research (known as “field experiments”) should have no impact on real-world outcomes (e.g., see this excellent post by Melissa Michelson) kind of miss the point of doing the study at all. This is because the point of the experiment is to identify the impact of some treatment/intervention on individual behavior. There are three related points hidden in here. First, the idea of a well-designed study is to measure an effect that we don’t already have precise knowledge of. So, one can never be certain that an experiment will have no effect: should ethics be judged ex ante or ex post? (I have already implied that I think ex ante is the proper standpoint.)
Second, it is arguably impossible to obtain the desired measurement without affecting the outcomes, particularly if one views the outcome as being more than simply “who won the election?” To guarantee that the outcome is not affected implies that one has to design the experiment to fail in a measurement sense.
Third, the question of whether the treatment had an effect can be gauged only imprecisely (e.g., by comparing treated individuals with untreated ones). Knowing whether one had an effect requires measuring/estimating the counterfactual of what would have happened in the absence of the experiment. I’ll set this aside, but note that there’s an even deeper question lurking here about how one would fairly or democratically design an experiment on collective choice/action situations.
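To make the measurement point concrete, here is a toy sketch of the comparison described above: a difference in means between treated and untreated groups. Everything in it is invented for illustration (the turnout rates, sample sizes, and variable names are my own assumptions, not anything from the study):

```python
# Toy illustration: gauging a treatment effect by comparing
# treated and untreated groups.  All data here are invented.
import random

random.seed(42)

# Simulate turnout (1 = voted, 0 = didn't) for two groups of voters.
# Assume, hypothetically, the mailer raises turnout from 40% to 45%.
control = [1 if random.random() < 0.40 else 0 for _ in range(5000)]
treated = [1 if random.random() < 0.45 else 0 for _ in range(5000)]

def mean(xs):
    return sum(xs) / len(xs)

# Difference-in-means estimate of the average treatment effect.
ate_hat = mean(treated) - mean(control)
print(f"Estimated effect on turnout: {ate_hat:+.3f}")

# Note what this number does NOT tell you: whether THIS election's
# outcome would have differed absent the experiment.  That is the
# counterfactual question, and it is not directly observed.
```

The comparison recovers an average effect across individuals; the counterfactual about the election outcome itself remains an estimate, which is exactly the imprecision the paragraph above describes.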
So, while protecting the democratic process is obviously of near-paramount importance, if you want gold-standard information about how elections actually work—if you want to know things like
- whether non-partisan elections are better than partisan elections,
- what information voters pay attention to and what information they don’t, or
- what kind of information promotes responsiveness by incumbents,
then you must be willing to affect election outcomes. The analogy with drug trials is spot-on. On the one hand, a drug trial should be designed to give as much quality of life to as many patients as possible. But the question is, relative to what baseline? A naive approach would be to say “well, minimize the number of people who are made worse off by having been in the drug trial.” That’s easy: cancel the trial. But of course that comes with a cost—maybe the drug is helpful. Similarly, one can’t just shuffle the problem aside by arguing for the “least invasive” treatment, because the logic unravels again to imply that the drug trial should be scrapped.
Experimental Design is an Aggregation Problem. In the end, the ethical design of field experiments requires making trade-offs between at least two desiderata:
- The value of the information to be learned and
- The invasiveness of the intervention.
Whenever one makes trade-offs, one is engaging in the aggregation of two or more goals or criteria. Accordingly, evaluating the ethics of experimental design falls in the realm of social choice theory (see my new forthcoming paper with Maggie Penn, as well as our book, for more on these types of questions) and thus requires thinking in theoretical terms before running the experiment. One should take the time to think about both the likely immediate effects of the experiment and also what will be affected by the information that is learned from the results.
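As a stylized illustration of the aggregation problem (this is my own toy formulation, not anything from the paper or book cited above), one can score hypothetical designs by weighing the two desiderata against each other, and notice that which design looks “best” hinges entirely on the weights chosen:

```python
# Toy aggregation of the two design criteria.  The designs, scores,
# and weights are all invented purely for illustration.
designs = {
    "small pilot":      {"information": 0.3, "invasiveness": 0.1},
    "statewide mailer": {"information": 0.9, "invasiveness": 0.7},
    "lab survey":       {"information": 0.2, "invasiveness": 0.0},
}

def score(d, weight_on_information):
    """Weighted aggregate: value of information, penalized by invasiveness."""
    w = weight_on_information
    return w * d["information"] - (1 - w) * d["invasiveness"]

for w in (0.3, 0.7):
    best = max(designs, key=lambda name: score(designs[name], w))
    print(f"weight on information = {w}: best design = {best}")
```

With a low weight on information the least invasive design wins; with a high weight the most informative one does. The point is not the particular numbers but that the ranking is an aggregation over criteria, which is precisely why social choice theory is the relevant lens.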
This Ain’t That Different From What Many Others Do All The Time. My final point dovetails with Blattman’s argument in some ways. Note that, aside from the matter of the Great Seal of the State of Montana, nothing that the researchers did would be inadmissible if they had just done it on their own as citizens. Many groups do exactly this kind of thing, including non-partisan ones such as the League of Women Voters, ideological groups such as Americans for Democratic Action (ADA) and the American Conservative Union (ACU), and issue groups such as the National Rifle Association (NRA) and the Sierra Club.
Thus, the irony is that this experiment is susceptible to second-guessing precisely because it was carried out by academics working under the auspices of research universities. The brouhaha over this experiment could push the next study of this form (and more will happen) outside of such institutional channels. While one might not like this kind of research being conducted at all, it is ridiculous to claim that it is better for it to be performed outside the academy by individuals and organizations cloaked in even more obscurity. Indeed, such organizations are already doing it; at least academic research of this kind can give us some idea of what those other organizations are finding.
With that, I leave you with this.
One line of criticism centers on whether the mailer was deceptive, because it bore the official seal of the State of Montana. This was probably against the law. (There are apparently several other laws that the study might have violated as well, but this point applies to them, too.) While intriguing because we so rarely get to discuss the power of seals these days, this is a relatively simple matter: if it’s against the law to do it, then the researchers should not have done it. Even if it is not against the law, I’d agree that it is deceptive. Whether deception is a problem in social science experiments is itself somewhat controversial, but I’ll set that to the side.
 For example, while the reason we went to the moon was partly about “because it’s there,” aka the George Mallory theory of policymaking, it was also arguably about settling the “is it made of green cheese?” debate. It turns out, no. 🙁
 I will point out quickly that this type of experimental work is done all the time by corporations. This is often called “market research” or “market testing.” People don’t like to think they are being treated like guinea pigs, but trust me…you are. And you always will be.
 This excellent post by Thomas Leeper beat me to the irony of people getting upset at the policy relevance of political science research.