So Many Smells, So Little Time: In Defense of “Stinky” Academic Writing

Steven Pinker recently offered a lengthy explanation of “Why Academics Stink At Writing.”  First, it is important to note that the title of Pinker’s post is misleading.  Indeed, as he points out early on, he is actually arguing about why academic writing is “turgid, soggy, wooden, bloated, clumsy, obscure, unpleasant to read, and impossible to understand?”  This is different from why academics stink at writing—and, indeed, the claim that “academics stink at writing” is an example of stinky writing, unless one likes sweeping, pejorative generalizations.

Pinker writes that “the most popular answer outside the academy is the cynical one: Bad writing is a deliberate choice.”  I’m inside the academy, and I want to offer a non-cynical “deliberate choice” explanation for why academic writing is dense and obscure.

Pinker gets close to my explanation later in the post.[1]  Specifically, Pinker attributes dense and obscure academic writing to “the writer’s chief, if unstated, concern … to escape being convicted of philosophical naïveté about his own enterprise.”

The dense and obscure nature of much scholarly writing, of which I am a frequent producer, is at least partly the result of the author’s need to convince the reader that the author knows what the hell he or she is talking about.

Qualifications (or “hedges,” in Pinker’s terminology) such as “almost,” “apparently,” “comparatively,” “relatively,” and so forth are not necessarily “wads of fluff that imply they are not willing to stand behind what they say.”

Rather, they ironically can serve as a way to make scholarly arguments more succinct while signaling that the author has thought carefully about the matter being described.  For example, suppose that I’m describing how members of Congress tend to vote.  I could say that “voting in Congress these days is partisan.”  Is that true?  Well, not exactly.  Is it pretty close to true?  Yes, in the sense that voting in Congress is highly correlated with partisanship: Members of either party tend to vote like their fellow partisans, and this correlation is stronger today than in much of American history.  But it’s not true that members always vote with their party’s leadership.  Thus, a more accurate statement—and one that reveals that one is thinking about the data more carefully—is as follows:

Voting in Congress these days is largely partisan.

Pinker describes a lot of words as “hedging,” and they’re not all the same.  Continuing the Congressional voting example, one might wonder why Members vote as they do.  Even if one thinks that the reader doesn’t need a qualifier like “largely,” the statement “Voting in Congress these days is partisan” is still unclear. For example, is the author claiming that Members of Congress vote as they do because of their partisanship?  That is, do Members of Congress simply follow their party’s directions when voting? This is an open question, it turns out.  Accordingly, a more accurate statement is

Voting in Congress these days is at least seemingly partisan.

Yes, that sentence is hedging.  For a reason—one conclusion a reader might draw from “Voting in Congress these days is partisan” is unwarranted.  Including the “at least seemingly” qualifier is not a wad of fluff to signal that I’m not willing to stand behind what I say—it’s a key part of what I want you to hear me saying.

I could go on, but I’ll conclude with the “math of politics” of this phenomenon.  Academic writing (and here I am thinking of writing intended to be subjected to peer review of some form) is dense and obscure because the written presentation of the research is necessarily an incomplete rendition of the research itself.  That is, peer review is about trying to verify the qualities of the argument, which often requires inferring about the processes of the research that are by necessity incompletely conveyed in the written work.  Dense and obscure writing—jargon, qualifiers, etc.—is a larger manifestation of the typographical convention “[sic].”  When quoting a passage with an error, such as a misspelling or grammatical mistake, it is common practice to place “[sic]” immediately after the mistake.  This is done because the author needs to signal to the editors, reviewers, and readers that this mistake is not the author’s fault.  Importantly, though, it illustrates more than just that—“[sic]” also signals that the author noticed the mistake.

Academic writing has to be dense and obscure, i.e., tough to parse, precisely because most scholars study phenomena that are tough to parse.  To continue Pinker’s theme, then, one might say that scholarly writing “stinks” because the real world “has so many smells.” Ironically, academic writing is difficult to read because it is attempting to portray what is almost always a big and variegated reality: often, the appealing parsimony of a conversational style is insufficient to accurately convey the knowledge and findings of the author.

In conclusion, academic writing is a very complicated signaling game—and I don’t mean “game” in a derogatory sense—that is necessitated by the various constraints we all labor under: time, resources, page limits, and exhaustion in both mental and physical forms. Dense and obscure language is more costly and complicated than conversational language, but this costly complication is a requisite outcome of the screening process that scholarly work is rightly subjected to.


[1] I couldn’t quite figure out how to put this in the body of this post, but the point at which Pinker turns to this argument occurs in an ironic paragraph:

In a brilliant little book called Clear and Simple as the Truth, the literary scholars Francis-Noël Thomas and Mark Turner argue that every style of writing can be understood as a model of the communication scenario that an author simulates in lieu of the real-time give-and-take of a conversation. They distinguish, in particular, romantic, oracular, prophetic, practical, and plain styles, each defined by how the writer imagines himself to be related to the reader, and what the writer is trying to accomplish. (To avoid the awkwardness of strings of he or she, I borrow a convention from linguistics and will refer to a male generic writer and a female generic reader.) Among those styles is one they single out as an aspiration for writers of expository prose. They call it classic style, and they credit its invention to 17th-century French essayists such as Descartes and La Rochefoucauld.

To be clear, it took me a couple of reads to comprehend that paragraph.  A conversational style is Pinker’s ideal for clarity—so why include the parenthetical explanation of his gendered pronouns?

#Ferguson: The Racial Disconnect On Race

Yesterday, while actively following the events in Ferguson, I was asked the following by @GenXMedia: 

White Suburban America seems riddled with apathy, excuses and disconnect about #Ferguson. Any ideas why?

Upon further prompting, it became clear that @GenXMedia wanted a response to each of the three things that White Suburban America is riddled with: apathy, excuses, and disconnect.

It is important to note that, as many of you know, this important topic does not fall squarely in my “wheelhouse.”  I mostly think about institutions and strategic models of politics.  That said, and with the usual warning that you get what you pay for, here’s my promised response.

Apathy. If we define apathy as anything less than intense interest in the unfolding story in Ferguson, then yes, unsurprisingly, it is clear that white Americans are more apathetic toward the events in Ferguson: 54% of black respondents say they are following the story very closely, while only 25% of white respondents say the same thing:

[Pew chart: share following the Ferguson story very closely, by race]

(Here is the full Pew survey and write-up.) It’s beyond my scope here, but to understand the intricate question of how race, civil rights, and Ferguson interact, it is important to note that only 18% of Hispanic respondents said they are following the story very closely.

Sadly, these numbers aren’t surprising to me.  Apathy is a “choice” only in the technical sense.  From a common sense standpoint, apathy is the absence of a choice to care/pay attention and “not choosing to pay attention” is a heck of a lot easier when the events seem less proximate to yourself.

I’m not saying that it’s rational to be apathetic, particularly about something as important and extreme as the events in Ferguson, but the results today are consistent with several decades of research into political attitudes in America, including the fact that the perception of “linked fate” is far more prevalent among black Americans than either whites or Latinos.[1]  Linked fate is a key concept in the study of race and politics.  A recent review of this literature describes linked fate as follows:

Linked fate is generally operationalized by an index formed by the combination of two questions. First, respondents are asked: “Do you think what happens generally to Black people in this country will have something to do with what happens in your life?” If there is an affirmative response, the respondent is then asked to evaluate the degree of connectedness: “Will it affect you a lot, some, or not very much?” [2]
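As a purely illustrative sketch (the 0–3 scoring and the exact response labels below are my own assumptions, not the published coding of the index), the two questions quoted above might be combined like this:

```python
def linked_fate_index(affected, degree=None):
    """Combine the two linked-fate survey questions into a 0-3 index.

    affected: answer to the first question ("Do you think what happens
              generally to Black people in this country will have
              something to do with what happens in your life?").
    degree:   the follow-up ("Will it affect you a lot, some, or not
              very much?"), asked only after an affirmative response.

    Scoring (an assumption for illustration only):
    0 = no linked fate, 1 = "not very much", 2 = "some", 3 = "a lot".
    """
    if not affected:
        return 0
    scores = {"not very much": 1, "some": 2, "a lot": 3}
    return scores[degree.lower()]


linked_fate_index(False)          # 0: no perceived linked fate
linked_fate_index(True, "a lot")  # 3: strongest perceived linked fate
```

The point of the two-step design is visible in the code: the follow-up question is only meaningful conditional on an affirmative first answer, so the index collapses both into a single ordered scale.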

Moving beyond (and/or in addition to) linked fate, one can also argue that the incentives (or perhaps proximities) of black and white Americans differ with respect to law enforcement.  Setting aside a more detailed discussion of this, just note the similarity between the racial breakdown of people closely following the events in Ferguson and the analogous breakdown of interest across gay rights, voting rights, and affirmative action in 2013:

[Pew chart: interest in gay rights, voting rights, and affirmative action, by race (2013)]

Excuses. It’s well established that white Americans generally perceive racism to be less prevalent and less important than black Americans do.  Discussing racial attitudes in the post-Civil Rights era, Brown et al. write

In the new conventional wisdom about race, white racism is regarded as a remnant from the past because most whites no longer express bigoted attitudes or racial hatred.[3]

Simply put, the Pew survey does nothing to contradict this conclusion.  Specifically, 47% of white respondents said that “race is getting more attention than it deserves” in the coverage of the shooting of Michael Brown, while only 18% of black respondents, and only 25% of Hispanic respondents, agreed with that statement (see here for the full breakdown):

[Pew chart: views on whether race is getting more attention than it deserves in the Ferguson coverage, by race]

In the end, it’s important to note that the racial divide in attention being paid to Ferguson is in line with the racial differences in individuals’ beliefs that race is an important part of the narrative.  While it is impossible to gauge causality here—namely, are fewer white people paying attention to Ferguson because they think it’s not about race or are more white people saying Michael Brown’s shooting wasn’t about race because they’re not paying attention to Ferguson—both are consistent with avoidance: simply put, issues like homelessness, inequality, and discrimination are difficult to get many people to pay sustained attention to.  I’ve argued elsewhere that politics is about problem-solving, and people like to debate problems they think can be solved.  Race is arguably the most complicated problem to solve. While by no means admirable, avoidance of the issue by those who can (i.e., white people) is not surprising.[4]

Disconnect. I’m not exactly sure how “disconnect” is different from both apathy and excuses, but I’ll take a stab and interpret this as “why do white people not seem to connect the events in Ferguson with race?”  My response here, sadly, is that they kind of do—at least insofar as the attitudes here are consistent with other similar racially charged events.  For example, following the acquittal of George Zimmerman in July 2013, Pew conducted a poll gauging reactions and attention to the case.  The racial breakdowns of responses to each are very similar to those just found in the case of Ferguson, with 60% of whites thinking the issue of race was getting more attention than it deserves, and only 13% of blacks feeling that way:

[Pew chart: whether race got more attention than it deserved in the Zimmerman case, by race]

Similarly, 63% of black respondents mentioned talking about the trial with friends, versus only 42% of white respondents:

[Pew chart: talked about the Zimmerman trial with friends, by race]

Conclusion.  My own view on this is that Ferguson is most decidedly a racial issue.  This isn’t the same as saying that anyone involved is (or isn’t) racist.  Indeed, that issue, to me, misses the larger and more important point. In fact, while the racial realities of Michael Brown’s death—an unarmed black American killed by a white police officer—undoubtedly thrust race forward into the discussion, race should have been part of the discussion anyway.

That’s because any of the multiple dimensions of the context of Ferguson—the historical discrimination, the economic inequality, the political disparities, the unrepresentative political institutions, and the more general “special” features of local elections, to name just a few—make not only Michael Brown’s death, but also the largely and sadly ham-handed response to it, a racial issue.

So, why don’t more white people see this?  A succinct (though definitely not exculpatory) answer is inertia: attitudes, like objects, tend to stay the same until acted upon by an outside force. The reality of America is that white Americans are less likely to see their fates as being linked with those of black Americans and (perhaps because) they are less likely to face the everyday inequalities faced by far too many black Americans. In other words, and quite literally, most white Americans don’t often encounter an outside force with respect to race—definitely not like many black Americans do.  Whether they achieve this through apathy, excuses, and/or disconnect is a trickier question, but the correlation—the reality that race still divides Americans’ perceptions of politics and power—is sadly indisputable and robust, even in the 21st century.

____________

[1] See Dawson, Michael C. Behind the mule: Race and class in African-American politics. Princeton University Press, 1994.
[2] From Paula D. McClain, Jessica D. Johnson Carew, Eugene Walton, Jr., and Candis S. Watts. “Group Membership, Group Identity, and Group Consciousness: Measures of Racial Identity in American Politics?” Annual Review of Political Science (2009), p. 477.
[3] From Michael K. Brown, Martin Carnoy, Troy Duster, and David B. Oppenheimer. Whitewashing race: The myth of a color-blind society. University of California Press, 2003, p.36.
[4] Another, stronger, view of this is called “white privilege,” which describes the fact that issues that can be avoided are also deemed less important to others, without noticing that the ability to avoid these issues is not independent of race. (Thanks to Jessica Trounstine for adroitly directing me to this connection, as well as posting this telling graphic.)


Makes Us Stronger: The Math of Protest and Repression

Like many people, especially here in St. Louis, I have been consumed by the ongoing events in Ferguson and, frankly, really shaken by them.  After much thought, I have possibly come up with a manageable take on one angle of “the math of” the situation.

It is important to distinguish protest from rebellion, and the distinction turns on intent.  Rebels intend to replace the government.  Protesters intend to change policy.

Protest, not rebellion, is what is happening in Ferguson.

In the end, this distinction is important because, in a nutshell, rebels don’t care what the government “thinks.”  In fact, rebellions are sometimes most successful when the government doesn’t notice them (until too late). Protesters, on the other hand, are directly attempting to change what the government (and/or other voters) “thinks.”[1]  In another nutshell, protest is about changing the government’s beliefs about who is upset about the policy in question, and how upset they are.

Protest is a form of costly signaling. Costly signaling describes any action that, because it is “expensive” or “unpleasant,” can convey something about oneself to others.[2]  Costly signaling is generally more informative than “cheap talk” signaling, in which one basically just says “hey, I am mad” but pays no cost to do so.

I’m not the first to make this point, of course.  But I wanted to bring it up again because thinking about the incentives to signal through protest can help us understand (some of) the events in Ferguson.  Below, I try to succinctly make a couple of points along these lines.

Protests are instrumentally rational only if they might work. Protest is perhaps the canonical example of collective action, and the problem for organizers is convincing citizens that participation will have some effect.[3]  The probability that a protest will have an effect is, generally speaking, an increasing function of the number of protesters.  This highlights one incentive for anyone trying to prevent the desired change: namely, clear the streets. By keeping protesters off the street, the government eliminates the possibility of the protesters sending (one type of) costly signal to those citizens “on the sidelines.”  This is really effective if the government can simply keep the streets clear from the beginning.[4] However, once protesters are “on the streets,” clearing the streets can have unintended consequences that become clear in a costly signaling framework.  Specifically:

Putting down a protest increases protest’s signaling value. Think about it this way: suppose that the government started giving money to those who showed up at the protest.  The “protest” would probably grow in size, right?  It would also become less informative about “who is upset about the policy in question, and how upset they are.”  This is because some of the people there are presumably there only for the money.  Indeed, some people who are really upset about the policy but were (for example) missing work for the protest might leave when the government starts giving away money, because their individual presence at the “protest” would have a smaller ultimate effect on the policy.

The converse of this logic can hold, too: by tear-gassing and shooting rubber bullets at citizens, the government amplifies the content/credibility of the message the protesters are trying to send.[5]
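To make this logic concrete, here is a toy model of my own (a stylized sketch, not anything from the protest literature or the post above): citizens differ in how upset they are, and each shows up only if her upset level, scaled by the perceived benefit of protesting, exceeds the personal cost of being on the street. Raising the cost (tear gas, rubber bullets) shrinks turnout but raises the average upset level that turnout reveals; subsidizing attendance does the reverse.

```python
def protest_outcome(cost, benefit=1.0, n_types=1000):
    """Toy costly-signaling sketch.

    Citizens have upset levels theta spread uniformly on [0, 1].
    A citizen protests iff theta * benefit exceeds the personal
    cost of showing up.

    Returns (turnout share, average upset level among protesters):
    how many show up, and what their showing up reveals.
    """
    thetas = [i / (n_types - 1) for i in range(n_types)]
    protesters = [t for t in thetas if t * benefit > cost]
    if not protesters:
        return 0.0, None
    turnout = len(protesters) / n_types
    revealed = sum(protesters) / len(protesters)
    return turnout, revealed


# Raising the cost of protesting cuts turnout, but each remaining
# protester is a stronger signal of genuine anger...
low = protest_outcome(cost=0.1)
high = protest_outcome(cost=0.6)

# ...while paying people to show up (a negative cost) packs the
# street but makes attendance uninformative about upset levels.
paid = protest_outcome(cost=-0.5)
```

Under these (assumed) uniform preferences, the average revealed intensity rises with the cost of protesting, which is exactly the sense in which putting down a protest increases the protest’s signaling value.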

Conclusion: Two Reasons to Not Clear The Streets.  There’s more that can be said within the costly signaling conception of protest, of course, but I’ll keep this short and simply point out that clearing the streets is not only fundamentally undemocratic and counter to core American values—it can easily lead to ironic results.  Understanding the proper response to protest (even if based on cynical motives) requires thinking about why the protesters are there.  They aren’t just upset—they’re trying to show others how upset they are.[6]

Good governments don’t threaten their citizens because it’s wrong to do so.
Smart governments don’t threaten their citizens because it’s stupid to do so.

Given the events of the past 10 days, I’ll take either type.

With that, I leave you with this.

_______________

[1] This is a blog post, so I will simply note the sloppiness of ascribing “thought” to a deceptively simple collective such as “the government.” Apologies to Ken Arrow, as appropriate.

[2] I make a lot of costly signaling arguments on this blog (e.g., here, here, here). This is itself a costly signal of how useful I believe the concept to be. KAPOW!

[3] And, to complicate things, this cuts both ways: the successful organizer must convince his or her followers that the outcome can be achieved, but only with the followers’ help.

[4] Arguably not too different from the policy being attempted today in Ferguson (8/18).

[5] This is particularly true now that there are so many excellent livestreams of protests.

[6] I thought about discussing the incentives of the government to portray its actions as being “not about the protest” (i.e., protecting property, responding to gunshots/fireworks?) but I’ll leave that for another post.

The Bigger The Data, The Harder The (Theory of) Measurement

We now live in a world of seemingly never-ending “data” and, relatedly, one of ever-cheaper computational resources.  This has led to lots of really cool topics being (re)discovered.  Text analysis, genetics, fMRI brain scans, (social and anti-social) networks, campaign finance data… these are all areas of analysis that, practically speaking, were “doubly impossible” ten years ago: neither the data nor the computational power to analyze the data really existed in practical terms.

Big data is awesome…because it’s BIG.  I’m not going to weigh in on the debate about the proper dimension on which to judge “bigness” (is it the size of the data set or the size of the phenomena it describes?).  Rather, I just wanted to point out that big data—even more than “small” data—require data reduction prior to analysis with standard (e.g., correlation/regression) techniques.  More generally, theories (and, accordingly, results or “findings”) are useful only to the extent that they are portable and explicable, and these each generally necessitate some sort of data reduction.  For example, a (good) theory of weather is never ignorant of geography, but a truly useful theory of weather is capable of producing findings (and hence being analyzed) in the absence of GPS data. A useful theory of weather needs to be at least mostly location-independent.  The same is true of social science: a useful theory’s predictions should be largely, if not completely, independent of the identities of the actors involved.  It’s not useful to have a theory of conflict that requires one to specify every aspect of the conflict prior to producing a prediction and/or prescription.

Data reduction is aggregation.  That is, data reduction takes big things and makes them small by (colloquially) “adding up/combining” the details into a smaller (and necessarily less-than-completely-precise) representation of the original.
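As a small, concrete illustration (with entirely fabricated member names and votes, echoing the congressional voting example from the hedging post above), here is what “adding up” individual roll-call votes into a single per-party loyalty rate looks like:

```python
from collections import defaultdict


def party_loyalty(votes):
    """Reduce a table of individual roll-call votes to one number per
    party: the share of votes cast with the party majority.

    votes: iterable of (member, party, voted_with_party) tuples.
    The aggregation throws away who voted and on what; that loss of
    detail is exactly the less-than-completely-precise trade-off.
    """
    with_party = defaultdict(int)
    total = defaultdict(int)
    for _member, party, loyal in votes:
        total[party] += 1
        with_party[party] += int(loyal)
    return {p: with_party[p] / total[p] for p in total}


# Four fabricated votes collapse to two numbers: big data made small.
votes = [("A", "D", True), ("B", "D", False),
         ("C", "R", True), ("D", "R", True)]
rates = party_loyalty(votes)  # {"D": 0.5, "R": 1.0}
```

The two output numbers are portable in the sense discussed above: one can compare loyalty rates across chambers or eras without carrying the identities of the individual members along.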

Maggie Penn and I have recently written a short piece, tentatively titled “Analyzing Big Data: Social Choice & Measurement,” to hopefully be included in a symposium on “Big Data, Causal Inference, and Formal Theory” (or something like that), coordinated by Matt Golder.[1]

In a nutshell, our argument in the piece is that characterizing and judging data reduction is a subset of social choice theory.  Practically, then, we argue that the empirical and logistical difficulties with trying to characterize the properties/behaviors of various empirical approaches to dealing with “big data” suggest the value of the often-overlooked “axiomatic” approaches that form the heart of social choice theory.  We provide some examples from network analysis to illustrate our points.

Anyway, I throw this out there to provoke discussion as well as troll for feedback: we’re very interested in complaints, criticisms, and suggestions.[2]  Feel free to either comment here or email me at jpatty@wustl.edu.

With that, I leave you with this.

______________________
[1] The symposium came out of a roundtable that I had the pleasure of being part of at the Midwest Political Science Association meetings (which was surprisingly well-attended—you can see the top of my coiffure in the upper left corner of this picture).

[2] I’m also always interested in compliments.

The Math of Getting a Job in Political Science

The “academic job market season” in political science starts in the fall and continues through the early spring.[0]  If you aren’t familiar with how the academic job market works, it’s basically still old school: schools post ads looking to hire for a more or less specialized position, applicants (“candidates”) send in “packets” containing a curriculum vitae (“CV”), a statement of their teaching and research interests, some writing samples (“papers”), and typically three letters of recommendation.[1] At this point…

Obviously, this is a stressful time for applicants.

…a committee of faculty will review the applications, create a “short list” of candidates to interview.

Still stressful for applicants…and sometimes committee members.

Those candidates then typically visit the campus, meet with faculty, and give a “job talk” concerning one of their writing samples.  After that…

VERY stressful time for the short-listed candidates…

…and oftentimes members of the department, too.

the committee makes a recommendation to the department, the department chooses somebody to recommend to the Dean, and the Dean then (usually) authorizes an offer to the department’s recommended candidate.[2]  Negotiations then ensue, but I’ll leave that matter for another day.


In this post, I want to offer a brief series of pieces of advice about how to approach this stressful time.  I’ve been lucky enough to see both sides of the market a few times, and there is a lot of uncertainty/misinformation/folklore about how it works.

Before diving in, let me be clear that I understand that this is all “through my eyes.”  Everyone’s experiences and opinions can differ from my own, and my stating something to the contrary should not be taken as evidence that I disagree with conflicting advice.  In other words, you get what you pay for.

The CV. (Writing this, it dawned on me I should put some skin in the game.  This is my publicly available CV.)

Without a doubt in my mind, the CV is the most important part of the typical packet.[3]  Search committee members have to review sometimes hundreds of files, and time waits for no one.  For better or worse, committee members use various cues in determining whether to dig more deeply into a file.

For the sake of parsimony, there are three key characteristics of a “good CV.”

1. Clarity. Don’t get fancy with formatting.  The top of the first page of the CV should include:

a. Your contact information,

b. Your education history from Bachelor’s Degree through to the (perhaps expected) PhD (including title of, and committee for, your dissertation),

c. Your publications and working papers available for circulation.

It probably should not contain:

a. Work experience (this goes later in the CV, if relevant to your research or if you’ve spent significant time (>1 year) working in the real world),

b. Descriptions of your papers (these go in your research statement, in the abstracts of the papers themselves, and on your website)[4]

c. Awards/grants/media appearances/blogging[5]/etc.  (These should go later in the CV; see “Papers: Appear Prepared to Publish or Prep to Perish,” below.)

2. Keep It Short. Despite the meaning (“course of life”), this isn’t about your whole life.  Your CV on the job market is arguably the best indicator of what your CV will look like at “tenure time” in 6-8 years.  Accordingly, because—from a CV standpoint—tenure is about publishing,[6] and the only thing faculty dread more than not hiring is hiring somebody that they will have to worry about at tenure time, the easiest thing to focus on in your CV should be your research.

What I’m saying here is that you don’t need to put your proficiency in using WordStar/LaTeX/R/Stata/SAS/SPSS/etc, your high school awards, your Mensa membership, etc. on your CV.

3. That said, don’t worry too much about #2. The point is that you should make the top of your vita quickly indicate what you’ve written and where you’re coming from. If you’ve still got the reader’s attention, they are probably interested in knowing more about you.  Just remember to keep it brief.

When in doubt, remember this: your CV needs to make a case for you, and quickly.

Realistically, think about what academics talk about when describing other academics:

1. What they’ve published (or sometimes what they are currently working on),

2. Where they work (and have worked), and

3. Where they got their PhD.

You want your CV to communicate with a busy reader who talks about other people in this way.  You need to communicate with him or her quickly about how he or she should convince others to read your papers/letters/etc.

MAKE IT EASY FOR OTHERS TO “SELL YOU” IN THEIR USUAL WAY.

Papers: Appear Prepared to Publish or Prep to Perish. This piece of advice is easier to give than to follow:

Have several papers.  On different (but not too different) topics.  Write papers on your own and with other graduate students and (less valuable to you at this point) other faculty.  In general:

Be active. Write lots of papers.

There are two sufficient conditions to “kill” (or at least seriously harm) a candidate:

1. The file doesn’t make a case quickly. (See “The CV,” above: keep it succinct.)

2. The file doesn’t present a clear narrative of what your “tenure-able CV” is going to look like in 6-8 years.

In short, publishing is always a crapshoot: the more ideas you put on paper and send out, the more publications you will have.  More importantly, the more interesting and vibrant a colleague you are likely to be.[7]

Put another way, the “quality or quantity” question presents a false dichotomy in the sense that—at least in my experience—it is nearly impossible to accurately judge the quality of your own ideas and schemes in any a priori way.  This is due to the fact that quality is ultimately judged by your peers upon publication. Accordingly, to accurately and precisely judge the quality of one’s idea prior to writing it down and sending it out for review requires (1) knowing what others will judge “high quality” and (2) knowing what will get accepted/published.  Take it as a maxim that almost nobody is good at judging either of these, much less both, and even more so much less with respect to their own ideas.

Outside The Packet. The final piece of advice I have is beyond your packet.  It is simple:

Put yourself out there.

This is a job that requires, and indeed is made of, rejection.  It requires fortitude to write something and claim that it is “new,” “important,” and “worthy,” only to have 2-3 nameless, unpaid, busy peers look upon it skeptically.  In and beyond the job market, every “key to success” I’ve seen or experienced can be described as

Letting others know what you’re interested in and what you’re doing.

Practically, how does one do this?

a. Send emails.  Unsolicited, even: email others to see if you can buy them coffee at conferences.  Do not be ashamed of emailing those at schools that are hiring in your field: this is your career, and sending that email is not only possibly the best way to get your packet “looked at twice,” it indicates the kind of gumption and initiative that positively predicts having a tenure-able CV in 6-8 years (see above).

b. Send your papers to other people/conferences/special issues.  Rejection is the future.  In general, people don’t like to reject something, people like to be thought of as important/worthy of seeking advice from, and scholars got into this job to read/argue/write.  Engage.  You will not always like what you hear back (e.g., “nothing”), but this is the game. Taking the risks now is costly, and signals you’ll keep taking them on the tenure track.

c. Volunteer to do the things that you want to do. Graduate students and junior faculty frequently ask “how do I get asked to review papers by journal X?”  The answer is simple: email the editor(s) of Journal X and tell them you’d like to review papers for Journal X.[8]

Summary. Look, there ain’t much you can do after you send in the packet (except email people—see above).  Relax as best you can, and finish the dissertation/dive into the next project. I don’t have a silver bullet, but hopefully I have provided some support for the contention that a research academic career in political science is generally promoted by presenting an efficient picture of what you have done and will do, and making it clear that you’re willing and able to “take the emotional risks” generally required to get others to pay attention, and respond, to your thoughts and work.  In the end, the applicant is always “the prospective new kid at the table.”  Make it easy for your future colleagues to see why you’ll be a good, productive, and vibrant neighbor and colleague.  In other words: (1) keep it simple and to the point, (2) put yourself out there….(3) have a drink, take a nap, try to forget the stress for a moment, and (4) get back to work.

Because, when you’ve won this crazy lottery, you’ll need to repeat steps (1)-(4) for about 6-8 years.

With that, I leave you with this.

_________

[0] For better or worse, my discussion here is focused on academic jobs at “research Universities.”  Again, and throughout, I readily acknowledge that my experience and the applicability of my “advice” is limited in this, and doubtlessly other, respects.

[1] There are usually other items, too, including a cover letter, transcripts and teaching evaluations.

[2] Lots of (generally minor) variation here across departments.

[3] Some people say the cover letter is the most important for the same reasons I say the CV is the most important.  I understand why these people say this, and report it faithfully, but I aver that more faculty look at the CV first than the sum of those who even read the cover letter.  That said, cover letters are part of the packet and should be treated seriously: higher-ups of various sorts can and do review packets, and a sloppy cover letter looks bad in any event.  Still, “the shorter, the sweeter” in my opinion: fewer words imply fewer opportunities to write “you’re job” instead of “your job.”

[4] Note at this point that I say this because the CV’s importance lies in minimizing the reader’s cost of establishing “who you are.”  While you want people to know the details of your work, you first want them to think that they are interested in you as a scholar/potential colleague.

[5] See what I did there?  Do I?

[6] I say “From a CV standpoint” for an important point.  Tenure is about research, teaching, service, and research, plus a little research…but it’s also about teaching and service (and not being a jerk).  The important aspects of teaching and service from a tenure standpoint aren’t (and arguably can’t or shouldn’t be) described on a CV.  That’s my point: your CV is first and foremost your self-proffered portrait of your research presence.

[7] Yes, there is a theoretical limit beyond which you are publishing “too much.”  But, let’s be honest: simple realities of life and finitude of mental energy will keep most of us from ever approaching that event horizon.

[8] I write this as the co-editor of the Journal of Theoretical Politics. Accordingly, I feel I can speak for my fellow co-editor, Torun Dewan, when I encourage you to email me with such a pronouncement.

If Keyser Söze Ruled America, Would We Know?

In this post on Mischiefs of Faction, Seth Masket discusses the recent debate about whether the (super-)rich are overly influential in American politics.  I’ve already said a bit about the recent Gilens and Page piece that provides evidence that rich interests might have more pull than those of the average American.  In a nutshell, I don’t believe that the (nonetheless impressive) evidence presented by Gilens and Page demonstrates that the rich are actually driving, as opposed to responding to, politics.[1]

Seth’s post echoes my skepticism in some respects.  First, the rich and “super rich” donors are less polarized than are “small” donors.  Second, and perhaps even more importantly, admittedly casual inspection of REALLY large donors suggests that they are backing losing causes.  As Seth writes,

…the very wealthy aren’t necessarily getting what they’re paying for. Note that Sheldon Adelson appears in the above graph. He’s pretty conservative, according to these figures, and he memorably spent about $20 million in 2012 to buy Newt Gingrich the Republican presidential nomination, which kind of didn’t happen [...] he definitely didn’t get what he paid for. (Okay, yeah, he sent a signal that he’s a rich guy who will spend money on politics, but people knew that already.)

While most donations aren’t quite at this level, they nonetheless follow a similar path, with a lot of them not really buying anything at all. To some extent, the money gives them access to politicians, which isn’t nothing.[2]

The Adelson point raises another problem we need to confront when looking for the influence of money in American politics.  Since the 1970s, most federal campaign contribution data has been public.  Furthermore, even the ways in which one can spend money that are less transparent (e.g., independent expenditures) can be credibly revealed to the public if the donor(s) want to do so.

Thus, a rich donor with strong, public opinions could achieve influence on candidates—even or especially those he or she does not contribute to—by donating a bunch of money to long-shot, extreme/fringe candidates.  This is a costly signal of how much the donor cares about the issue(s) he or she is raising, and might lead to other candidates “etch-a-sketching” their positions closer to the goals of the donor.  Indeed, these candidates need not expect to ever receive a dime from the donor in question: they might just want to “turn off the spigot” and move on with the other dimensions of the campaign.

Furthermore, such candidates might actually prefer not to receive donations/explicit support from these donors.  After all, a candidate might not want to be associated with the donor on either personal or policy grounds (do you think anyone is courting Donald Sterling for endorsements right now?) or, even more ironically, the candidate might worry about being seen as “in the donor’s pocket.” Finally, there are a lot of rich donors, and they don’t espouse identical views on every topic.  As Seth notes,

“politicians are wary of boldly adopting a wealthy donor’s views, and … they hear from a lot of wealthy donors across the political spectrum, who probably have conflicting ideas”

Overall, tracing political influence through known-to-be-observable actions such as donations, press releases, and endorsements is perilous.  A truly influential individual sometimes wants to minimize the public’s awareness of his or her influence, particularly when that influence is being exercised through others.  It is useful to always remember Kevin Spacey’s line from The Usual Suspects:

The greatest trick the Devil ever pulled was convincing the world he didn’t exist.[3][4]

From an empirical standpoint, I think the current debate about influence in American politics is interesting: for example, it is motivating people to think about both what data can be collected and innovative ways to manipulate and visualize it.  But I caution against the temptation to jump from it to wholesale normative judgments about the state of American politics.  Specifically, there’s another Kevin Spacey line in The Usual Suspects that is useful to remember as politicos and pundits debate who truly “controls” American politics:

To a cop, the explanation is never that complicated. It’s always simple. There’s no mystery to the street, no arch criminal behind it all. If you got a dead body and you think his brother did it, you’re gonna find out you’re right.

 

 

_____________

[1] This is what is known as an “endogeneity problem.”  While some people roll their eyes at such claims, I provided a theory (and could provide more than a couple of additional ones) that supports the claim that such a problem might exist.  Hence, I humbly assert that the burden of proving that this is not a problem rests on those who claim that the evidence is indeed “causal” in nature.

[2] As a side note, I’ve also argued that donors should be expected to have more access to politicians than non-donors, and that this need not represent a failing of our (or any) democratic system.

[3] Verifying my memory of this quote, I found out that it is a restatement of a line by Baudelaire: “La plus belle des ruses du diable est de vous persuader qu’il n’existe pas.”  I have no idea what this has to do with anything, but I feel marginally more erudite after copy-and-pasting French into my post.

[4] I will simply note in passing the link between this and the entirety of the first two seasons of the US version of House of Cards.

 

How Political Science Makes Politics Make Us Less Stupid

This post by Ezra Klein discusses this study, entitled “Motivated Numeracy and Enlightened Self-Government,” by Dan M. Kahan, Erica Cantrell Dawson, Ellen Peters, and Paul Slovic.  The gist of the post and the study is that people reason less soundly about statistical evidence when it concerns a politically charged issue.

The study presented people with “data” from a (fake) experiment about the effect of a hand cream on rashes.  There were two treatment groups: one group used the cream and the other did not.  The group that used the skin cream had more subjects reported (i.e., a higher response rate), but a lower success rate.[1] Mathematically/scientifically sophisticated individuals should realize that the key statistics are the ratios of successes to failures within each treatment, not the absolute number of successes.
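To see why the within-group ratios, and not the raw counts, carry the information, here is a minimal sketch.  The counts below are illustrative numbers of my own, chosen only to mirror the structure described above (the study’s actual figures may differ): the cream group has more improvements in absolute terms, yet a lower improvement rate.

```python
# Illustrative 2x2 "skin cream" table; the counts are hypothetical,
# chosen only to mirror the structure described in the text.
cream    = {"improved": 223, "worsened": 75}
no_cream = {"improved": 107, "worsened": 21}

def improvement_rate(group):
    # The informative statistic: improvements as a share of the group,
    # not the absolute number of improvements.
    return group["improved"] / (group["improved"] + group["worsened"])

# The cream group has more improvements in absolute terms (223 > 107)...
print(round(improvement_rate(cream), 2))     # 0.75
# ...but a LOWER improvement rate, so the cream looks good
# only if you ignore the within-group ratios.
print(round(improvement_rate(no_cream), 2))  # 0.84
```

A subject who compares 223 to 107 concludes the cream works; a subject who compares 0.75 to 0.84 concludes the opposite.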

This was the baseline comparison, as it considered a nonpolitical issue (whether to use the skin cream).  The researchers then conducted the same study with a change in labeling. Rather than reporting on the effectiveness of skin cream, the same results were labeled as reporting the effectiveness of gun-control laws. All four treatments of the study are pictured below.

Gunning for Mathematical Literacy

I want to make one methodological point about this study: the gun control treatments were not apples-to-apples comparisons with the skin cream treatment and, furthermore, the difference between them is an important distinction between well-done science and the messy realities of real-world (political/economic) policy evaluation/comparison.

Quoting from page 10 of the study,

Subjects were instructed that a “city government was trying to decide whether to pass a law banning private citizens from carrying concealed handguns in public.” Government officials, subjects were told, were “unsure whether the law will be more likely to decrease crime by reducing the number of people carrying weapons or increase crime by making it harder for law-abiding citizens to defend themselves from violent criminals.” To address this question, researchers had divided cities into two groups: one consisting of cities that had recently enacted bans on concealed weapons and another that had no such bans. They then observed the number of cities that experienced “decreases in crime” and those that experienced “increases in crime” in the next year. Supplied that information once more in a 2×2 contingency table, subjects were instructed to indicate whether “cities that enacted a ban on carrying concealed handguns were more likely to have a decrease in crime” or instead “more likely to have an increase in crime than cities without bans.” 

The description of how researchers “divided cities into two groups” is the core of my main point here.  It was not even suggested to the subjects that the data were experimental.  Rather, the description makes clear that the data are observational.  In other words, it wasn’t the case in the hypothetical example that cities were randomly selected to implement gun-control laws.

While this might seem like a small point, it is a big deal.  This is because, to be direct about it, gun-control laws are adopted because they are perceived to be possibly effective in reducing gun crime,[2] they are controversial,[3] and accordingly will be more likely to be adopted in cities where gun crime is perceived to be bad and/or getting worse.

Without randomization, one needs to control for the cities’ situations to gain some leverage on what the true counterfactual in each case would have been.  That is, what would have happened in each city that passed a gun-control law if they had not passed a gun-control law, and vice-versa?

To make this point even more clearly, consider the following hypothetical.  Suppose that, instead of gun-control laws and crime prevention, we compared cities’ observed use of fire trucks and then evaluated how many houses ultimately burned down.  Such a treatment is displayed below.

FireTrucks (hypothetical 2×2 table: fire trucks dispatched vs. houses burned down)

From this hypothetical, the logic of the study implies that a sophisticated subject is one who says “sending out fire trucks causes more houses to burn down.”  Of course, a basic understanding of fires and fire trucks strongly suggests that such a conclusion is absolutely ridiculous.

What’s the point?  After all, the study shows that partisan subjects were more likely to say that the treatment their partisanship would tend to support (gun-control for Democrats, no gun-control for Republicans) was the more effective.   This is where the importance of counterfactuals comes in.  Let’s reasonably presume for simplicity that “Republicans don’t support gun-control” because they believe it is insufficiently effective at crime prevention to warrant the intrusion on personal liberties and that “Democrats support gun-control” because they believe conversely that it is sufficiently effective.[4] Then, these individuals, given that the hypothetical data was not collected experimentally, could arguably look at the hypothetical data in the following ways:

  • A Republican, when presented with hypothetical evidence of gun-control laws being effective, could argue that, because towns adopt gun control laws during a crime wave, regression to the mean might lead the evidence to overestimate the effectiveness of gun control laws on crime reduction.  That is, gun-control laws are ineffective and they are implemented as responses to transient bumps in crime.
  • A Democrat, when presented with hypothetical evidence of gun-control laws being ineffective, might reason along the lines of the fire truck example: cities that adopted gun control laws were/are experiencing increasing crime, and the proper comparison is not the increase in crime, but the increase in crime relative to the unobserved counterfactual.  That is, cities that implement gun-control laws are less crime-ridden than they would have been if they had not implemented the measures, but the measures themselves cannot ensure a net reduction of crime during times in which other factors are driving crime rates.
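The regression-to-the-mean logic in the first bullet can be illustrated with a small simulation.  Everything here is hypothetical, including the made-up adoption rule that cities pass a law only after a crime spike: even when the law has exactly zero true effect, adopting cities will disproportionately see crime “decrease” the next year.

```python
import random

random.seed(1)

adopt_dec = adopt_tot = other_dec = other_tot = 0

for _ in range(50_000):
    base = random.gauss(100, 5)              # a city's long-run crime level
    this_year = base + random.gauss(0, 10)   # plus a transient shock
    adopts = this_year > 110                 # hypothetical rule: adopt during a spike
    next_year = base + random.gauss(0, 10)   # the law has ZERO true effect
    decreased = next_year < this_year
    if adopts:
        adopt_tot += 1
        adopt_dec += decreased
    else:
        other_tot += 1
        other_dec += decreased

# Adopting cities see crime "decrease" far more often than non-adopters,
# purely because they adopted at a transient peak, not because the law
# did anything.  A naive 2x2 comparison overstates the law's effectiveness.
print(adopt_dec / adopt_tot)
print(other_dec / other_tot)
```

A naive reader of the resulting table would credit the (entirely inert) law with reducing crime, which is exactly the inferential trap the observational framing invites.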

Conclusion. The mathofpolitics points of this post are two.  First, it is completely reasonable that partisans have more well-developed (“tighter”) priors about the effectiveness/desirability of various political policy choices.  When we think about adoption of policies in the real world, it is also reasonable that these beliefs will drive the observed adoption of policies.  Moreover, for almost every policy of any importance, the proper choice depends on the “facts on the ground.”  Different times, places, circumstances, and people typically call for different choices.  To forget this will lead one to naively conclude that chemotherapy causes people to die from cancer.

Second, it’s really time to stop picking on voters. Politics does not make you “dumb.” People have limited time, use shortcuts, take cues from elites, etc., in every walk of life.  Traffic-drawing headlines and pithy summaries like “How politics makes us stupid” are elitist and ironically anti-intellectual.  The Kahan, Dawson, Peters, and Slovic study is really cool in a lot of ways.  My methodological criticism is in a sense a virtue: it highlights the unique way in which science must be conducted in real-world political and economic settings.  Some policy changes cannot be implemented experimentally for normative, ethical, and/or practical reasons, but it is nonetheless important to attempt to gauge their effectiveness in various ways.  Thinking about this and, more broadly, how such evidence is and should be interpreted by voters is arguably one of the central purposes of political science.

With that, I leave you with this.

Note: I neglected to mention this study—“Partisan Bias in Factual Beliefs about Politics” (by John G. Bullock, Alan S. Gerber, Seth J. Hill, and Gregory A. Huber)—which shows that some of the “partisan bias” can be removed by offering subjects tiny monetary rewards for being correct. Thanks to Keith Schnakenberg for reminding me of this study.

____________

[1] The study manipulated whether the cream was effective or not, but I’ll frame my discussion with respect to the manipulation in which the cream was not effective.

[2] Note that this is not saying that all “cities” perceive that gun-control laws are effective at reducing gun crime.  Just that only those cities in which they are perceived to possibly be effective will adopt them.

[3] Again, in cities where such a law is not controversial, one might infer something about the level of crime (and/or gun ownership) in that city.

[4] I am also leaving aside the possibility that Republicans like crime or that Democrats just don’t like guns.

Shining A Little More Light On Transparency

Thinking more about transparency (which I just wrote about), it occurred to me that I neglected two pieces (of many) that are relevant for the point about transparency of decision-making in bodies like the Federal Open Market Committee (FOMC) in which expertise plays an important role in justifying the body’s authority.

David Stasavage and Ellen Meade made use of a great (and entirely on point) data set in their analysis of the effect of transparency on FOMC decision-making in their Economic Journal article, “Publicity of Debate and the Incentive to Dissent: Evidence from the US Federal Reserve.” They find strong evidence that, once members knew their statements were being recorded, both the content of their opinions and their individual votes on monetary decisions changed.  

The general implications of this point from a theoretical perspective are nicely laid out in Stasavage’s Journal of Politics article, “Polarization and Publicity: Rethinking the Benefits of Deliberative Democracy.” Transparency can affect individual incentives, particularly among career-motivated decision-makers.  If one presumes that the decision-makers in a deliberative body are motivated to “look good” by making good decisions, and one is mostly or wholly concerned with the quality of their performance, then, in a specific sense, transparency of individual decision-makers’ opinions and votes can “only hurt” actual performance, because the decision-makers worry not only about the performance of their collective decisions (e.g., the actual inflation rate), but also about how their individual opinions/inputs are viewed.

Why Have Transparency At All, Then?

There are two broad categories of theoretical arguments in favor of transparency.  The first of these is screening and the second is record-keeping.

Screening. Recall that the problems with transparency sketched out above and in my previous post follow from the presumption that some or all of the decision-makers are interested in being rewarded and/or retained by voters/Congress/the president or whomever else might employ them in the future.  This “career-concerns model” of course implies that somebody else is going to be considering whether to retain, hire, or promote these decision-makers again in the future.  I’ll leave the details to the side for now and simply note that, if the “next job” for which they will be considered is sufficiently important relative to the current job, the ability to possibly infer something about the relative expertise or abilities of the decision-makers might be sufficiently valuable to warrant introducing some “noise” into the current decision-making.[1]

Record-Keeping. Nobody lives forever.  Many decision-making bodies that have authority because it is believed that expert decision-making can and should be used to set policy exist for many years, with decision-makers rotating in and out.  In such situations, because one is leveraging expertise as a justification, one might think that past experience can inform future decisions.  Steve Callander has recently published several excellent articles (here, here, and here) that offer a good starting point (unexplored as far as I know) for us to consider the types of situations in which transparency can be helpful by allowing future decision-makers to not only observe past performance, but also learn how policy decisions actually affect outcomes by observing the details of the decisions that produced those outcomes.

Note that this argument, as opposed to the screening argument above, leaves room for one to think meaningfully about the proper “lag” or delay of transparency.  As the evolution of FOMC policy illustrates, many transparency policies involve a delay between decision and publication.[2] Interesting aspects of the policy process, such as how much information is conveyed by more recent versus older decisions, would presumably play a role in the final derivation of how much transparency is optimal.

Conclusions. If there’s any grand conclusion from this post, it’s that I think there are a lot of important topics left in the study of transparency, and as social science theorists we should start thinking about getting closer to the “policy technology side” of the decision(s) being made.  Abstract static models provide a lot of key and portable insights.  But they can take us only so far.

_________

[1] Of course, if transparency in the current decision process leads every decision-maker to “pool” and do the same thing, regardless of their type, then one can’t infer anything about the decision-makers from their decision, thereby obviating this argument for transparency.  This will be the case when the decision-makers are sufficiently motivated to “get hired in the next job” relative to their innate preference to “make the right decision” in the current matter at hand.  In the FOMC, this would be an FOMC member who cares a lot more about becoming (say) Fed Chairman someday than he or she does about getting monetary policy “right” today.

[2] This type of argument, combined with career concerns, would also allow us to think in more detail about to whom the decisions ought to be made transparent and from whom this information should be withheld.

 

Why Separate When You Can…Lustrate!?!

Today’s post by Maria Popova and Vincent Post, “What is lustration and is it a good idea for Ukraine to adopt it?” made me think about the difference between what I will call policy and discretionary purges.

It is not easy for a nation to fix itself after a period of authoritarian rule.  Many individuals actually compose the government, and it is not clear that they share the ideals of the new regime.  Even if they do, the worries about career concerns and adverse selection that I raised a few minutes ago here suggest that changing behaviors might be hard even if the vast majority of bureaucrats/judges/legislators agree with democratic norms, the rule of law, the relative inelegance of bashing your opponents’ heads in, etc.

So, one practical approach to fixing an institution in the sense of massively and quickly redirecting its aggregate behavior (as produced by the panoply of individual decision-makers’ choices) is what we might call wiping the slate clean.  Clear the decks, Ctrl-Alt-Delete the whole shebang.

Another way is to find the people who are the problem(s) and eliminate them.  The prospect of removal might, in equilibrium, convert some who were previously scofflaws into temperate and sage clerks, after all.

I want to make a quick point.  Removal of officials is practically hard (because those who fear removal will hide evidence and otherwise obstruct the Remover’s attempts to ferret them out).  But, more intriguingly, removal of officials is politically hard…for the Remover. In cases like the Ukraine, this isn’t because removal of any official is likely to be unpopular (it’s probably the reverse…just ask Vergniaud). Rather, the problem is one of adverse selection in terms of those who are judging the motivations and trying to predict the future actions of the Remover.[1]

To think about this clearly and quickly, consider the baseline case where the Remover “cleans house,” removing everyone, and then consider the deviation from this in which the Remover “forgives” one official, who I will call “Official X.”[2]

What should we infer?  Does the Remover really have information that exculpates Official X?  Or perhaps Official X paid a bribe?  Or perhaps Official X is blackmailing the Remover? Or perhaps… You can see where this is going.  The Remover is at risk of being suspected of being or doing something untoward if he or she has and uses any discretion.  Accordingly, the Remover would prefer to not have discretion.

The same logic applies, obviously, to a plan of “well, let the Remover prosecute those who `should’ be removed.”  Unless the Remover’s hands are tied with respect to whom to prosecute, people will always have reason to wonder “well, Official Y got prosecuted….but not Official X….”

Is Lustration a good idea?  I don’t know.  And I will mention that Popova and Post are making a different point, which is really about the extent and severity of lustration.  My point here is just that “statutory/mandated purges” are very different from “executive/discretionary purges” and, somewhat counterintuitively, it may very well be in the interests of “the Remover” to have his or her discretion taken away.[3]

__________

[1] Note that, as always, this is equivalent to a problem of credible commitment on the part of the Remover to “not use his/her biases” when deciding whom to remove.

[2] The logic holds generally (i.e., when the Remover forgives/pardons more than one official), but this focuses our attention in a nice way.

[3] I’m trying to keep these short, but I’ll note that this incentive is stronger for a Remover who believes that the external audiences who are trying to judge the Remover’s information/character/etc. are really uncertain about the Remover’s information/character/etc.  This is because high levels of initial uncertainty imply that the discretionary actions of the Remover will have a larger impact on the beliefs held by the members of the audience, and the adverse implications of discretion on these beliefs is the justification for the Remover wanting to limit his or her own discretion.

How Transparency Could Harm You, Me, and the FOMC

Sarah Binder, as usual, provides excellent insights into a difficult political problem in this post discussing the potential political and economic pitfalls of imposing greater transparency on the Federal Open Market Committee (FOMC), which essentially directs the Federal Reserve’s active participation in the economy, thereby having the most direct control over short-term interest rates and, accordingly, day-to-day “monetary policy” in the United States.

The FOMC is a really big deal.  As Binder notes, the importance of the committee accordingly makes both economic and political observers keen to understand and forecast what it will do in the future.  By deciding over the past decade or so to publish more and more detailed data about the views of the FOMC members,[1] the Fed has increased the transparency of the information it receives.

This seems like a good idea, right?

Well, social science theories in both economics and political science acknowledge the importance of whether the FOMC’s behavior is predictable or not.  On the economics side, predictability of monetary policy (at least in terms of its outputs, such as inflation) is generally perceived to be a good thing, because it allows investors to focus more attention on the “fundamentals” of an asset’s value, as opposed to paying a lot of attention to purely nominal phenomena and/or inefficiently delaying/accelerating investment and consumption decisions.  In other words, while a low, fixed inflation rate is good, variation in the inflation rate is inevitable, and if this variation can be reasonably accurately forecast, this is a “second-best” outcome.

On the political science side of things, a traditional argument for transparency (in addition to the one above) is that it fosters legitimacy and/or public confidence in the Fed, and thereby makes the Fed a more credible “political actor.”  A more technical description of this is that transparency alleviates an adverse selection problem between the Fed and the public.  The Fed knows something that the public/Congress/Presidents want to know, and—in some situations—everyone would be better off if the Fed could somehow just reveal this information to the public/Congress/Presidents.

Solving this kind of problem is very tricky in practice, because a real solution requires that the Fed not be responsible for releasing the information.  And there are some interesting things in the FOMC’s structure (it’s composed of multiple members with various overlapping terms) and in the evolution of its transparency policies.

Being the contrarian that I am, I wanted to note two arguments against too much transparency.  I don’t think these are strong enough to justify total opacity, of course, but I do believe they’re strong enough to serve as cautionary tales regarding total transparency.

Each of these arguments revolves around an additional potential instantiation of adverse selection.  The first regards the motives of the individual members of the FOMC.  When decision-makers are career-oriented (they want to be reappointed/promoted/rewarded for their ability/performance, etc.), too much transparency about the decision-maker’s actual decision (i.e., votes and personal positions on monetary policy in the FOMC meetings) can induce conformism (or “pooling”) by the agents such that their policy decisions become suboptimally unresponsive.  For example, everybody might start acting as an inflation hawk would so as to increase the perception of their hawkishness (a worry indirectly indicated in Yellen’s comments as discussed by Binder).[2]

The second argument involves the incentives of those that make individual decisions that the Fed observes.  In particular, the Fed (and every regulatory agency) collects lots of data about the behaviors of firms and individuals.  In some cases, if (say) major firms (as the Fed is responsible for regulating) have access to the information that the Fed will ultimately use to make policy, the incentives of these firms to make decisions that are individually suboptimal in order to try and manipulate the Fed’s subsequent decision-making will be exacerbated.  That is, transparency of the Fed’s information can increase the incentives of major banks (and, arguably, even other regulators) to choose their own actions in ways that try to obscure their own private information.  When this happens, you have a double-whammy: (1) the individual firms’ decisions are not optimal and (2) the Fed does not glean as much information about the real state of the economy from the decisions of these firms.

Sean Gailmard and I make this point (coincidentally with an empirical application to Financial Industry Regulatory Authority (FINRA)) in our recent working paper, “Giving Advice vs. Making Decisions: Transparency, Information, and Delegation.”

Conclusion. I definitely don’t know what the “right” policy for the Fed is without further thought.  But the supposition that “increased transparency is unambiguously good” is at odds with at least two theoretical arguments. Accordingly, it might not be nefarious motives that lead policymakers to call for discussion of “how much transparency is too much?”

____________________

[1] See this description of the recent evolution of Fed transparency and, for a little historical context, see this report describing the 2007 change.

[2] Note that this argument implies that observing the actions of the decision-maker(s) can be bad, but it does not necessarily imply that observing what happens from those decisions (e.g., the actual inflation rate) can be bad.  Good citations on this point are Prat (2005) (ungated working paper here) and Levy (2007) (ungated working paper here), and my colleague Justin Fox has produced multiple excellent theoretical studies centering on this question (here, here (with Ken Shotts), and here (with Richard Van Weelden)).