In Comes Volatility, Nonplussing Both Fairness & Inequality

You know where you are?
You’re down in the jungle baby, you’re gonna die…
In the jungle…welcome to the jungle….
Watch it bring you to your knees, knees…
– Guns N’ Roses, “Welcome to the Jungle”

It’s a jungle out there, and even though you think you’ve made it today, you just wait…poverty is more than likely in your future…BEFORE YOU TURN 65!  Or at least that’s what some would have you believe (for example, here, here, and here).

In a study recently published in PLoS ONE, Mark R. Rank and Thomas A. Hirschl examine how individuals tended to traverse the income hierarchy in the United States between 1968 and 2011. Rank and Hirschl specifically and notably focus on relative income levels, considering in particular the likelihood of an individual falling into relative poverty (defined as being in the bottom 20% of incomes in a given year) or extreme relative poverty (the bottom 10% of incomes in a given year) at any point between the ages of 25 and 60.  To give an idea of what these levels entail in terms of actual incomes, the 20th percentile of incomes in 2011 was $25,368 and the 10th percentile in 2011 was $14,447. (p.4)

A key finding of the study is as follows:

Between the ages of 25 to 60, “61.8 percent of the American population will have experienced a year of poverty” (p.4), and “42.1 percent of the population will have encountered a year in which their household income fell into extreme poverty.” (p.5)

I wanted to make two points about this admirably simple and fascinating study.  The first is that it is unclear what to make of this study with respect to the dynamic determinants of income in the United States.  Specifically, I will argue that the statistics are consistent with a simple (and silly) model of dynamic incomes.  I then consider, with that model as a backdrop, what the findings really say about income inequality in the United States.

A Simple, Silly Dynamic Model of Income.  Suppose that society has 100 people (there’s no need for more people, given our focus on percentiles) and, at the beginning of time, we give everybody a unique ID number between 1 and 100, which we then use as their Base Income, or BI. Then, at the beginning of each year and for each person i, we draw an (independent) random number uniformly distributed between 0 and 1 and multiply it by the Volatility Factor, which is a fixed positive number.  This is the Income Fluctuation, or IF, for that person in that year; that person’s income in that year is then

$\text{Income}_i^t = \text{BI}_i + \text{IF}_i^t$.

In this model, each person’s income path is simply a sequence of independent fluctuations (of size at most the Volatility Factor) “above” their Base Income.  If we run this for 35 years, we can then score, for each person i, where their income in each year ranked relative to the other 99 individuals’ incomes in that year.
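To make the model concrete, here is a minimal sketch of the simulation in Python (the actual figures were produced in Mathematica, per the footnotes; the function and variable names here are mine):

```python
import random

def simulate(volatility_factor, n_people=100, n_years=35, seed=0):
    """Return the fraction of people who visit the bottom 20%, the
    bottom 10%, and the top 1% at least once over the simulated years."""
    rng = random.Random(seed)
    base_income = range(1, n_people + 1)   # unique IDs 1..100 serve as Base Income
    visited_poor = [False] * n_people
    visited_extreme = [False] * n_people
    visited_rich = [False] * n_people
    for _ in range(n_years):
        # Income = Base Income + a uniform draw scaled by the Volatility Factor.
        incomes = [bi + rng.random() * volatility_factor for bi in base_income]
        order = sorted(range(n_people), key=lambda i: incomes[i])  # rank 0 = poorest
        for rank, person in enumerate(order):
            if rank < n_people // 5:        # bottom 20%: relative poverty
                visited_poor[person] = True
            if rank < n_people // 10:       # bottom 10%: extreme relative poverty
                visited_extreme[person] = True
            if rank == n_people - 1:        # richest person: the "top 1%"
                visited_rich[person] = True
    return (sum(visited_poor) / n_people,
            sum(visited_extreme) / n_people,
            sum(visited_rich) / n_people)

poor, extreme, rich = simulate(volatility_factor=90)
```

At a Volatility Factor of 1, ranks never change, so the three fractions are exactly 0.2, 0.1, and 0.01; as the factor grows, all three climb.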

I simulated this model with Volatility Factors ranging from 1 to 200.[1]  I then plotted out percentages analogous to those reported by Rank and Hirschl for each Volatility Factor, as well as the percentage of people who spent at least one year out of the 35 years in the top 1% (i.e., as the richest person out of the 100).  The results are shown in Figure 1, below.[2]  In the figure, the red solid line graphs the simulated percentage of individuals who experienced at least one year of poverty (out of 35 years total), the blue solid line does the same for extreme poverty, and the green solid line does this for visiting the top 1%.  The dotted lines indicate the empirical estimates from Rank and Hirschl—the poverty line is at 61.8%, the extreme poverty line at 42.1%, and the “rich” line at 11%.[3]

Figure 1. Simulation Results

Intuition indicates that each of these percentages should be increasing in the Volatility Factor (referred to equivalently as the Volatility Ratio in the figure).  This is because volatility is independent across time and people in this model: the more volatility, the less one’s Base Income matters in determining one’s relative standing.

What is interesting about Figure 1 is that the simulated Poor and Extremely Poor occurrence percentages intersect Rank and Hirschl’s estimated percentages at almost exactly the same place—a volatility factor around 90 leads to simulated “visits to poverty and extreme poverty” that mimic those found by Rank and Hirschl.  Also interesting is that this volatility factor leads to slightly higher frequency of visiting the top 1% than Rank and Hirschl found in their study.

Summing that up in a concise but slightly sloppy way: comparing my simple and silly model with real-world data suggests that (relative) income volatility is higher among poorer people than it is among richer people.  … Why does it suggest this, you ask?

Well, in my simple and silly model, and even at a volatility factor as high as 90, the bottom 10% of individuals in terms of Base Income can never enter the top 1%.  At volatility factors greater than 80, however, the top 1% of individuals in Base Income can enter the bottom 20% at some point in their life (though it is really, really rare).  Individuals who are not entering relative poverty at all are disproportionately those with higher Base Incomes (and conversely for those who are not entering the top 1% at all).  Thus, to get the “churn” high enough to pull those individuals “down” into relative poverty, one has to drive the overall volatility of incomes to a level at which “too many” of the individuals with lower Base Incomes are appearing in the rich at some point in their life.  Thus, a simplistic take from the simulations is that (relative) volatility of incomes is around 85-90 for average and poor households, and a little lower for the really rich households. (I will simply note at this point that the federal tax structure differentially privileges income streams typically drawn from pre-existing wealth. See here for a quick read on this.)

Stepping back, I think the most interesting aspect of the silly model/simulation exercise—indeed, the reason I wrote this code—is that it demonstrates the difficulty of inferring anything about income inequality or other truly interesting issues from the (very good) data that Rank and Hirschl are using.  The reason for this is that the data is simply an outcome.  I discuss below some of the even more interesting aspects of their analysis, which goes beyond the click-bait “you’ll probably be poor sometime in your life” tagline, but it is worth pointing out that this level of their analysis is arguably interesting only because it has to do with incomes, and that might be what makes it so dangerous.  It is unclear (and Rank and Hirschl are admirably noncommittal when it comes to this) what one should, or can, infer from this level of analysis about the nature of the economy, opportunity, inequalities, and so forth.  Simply put, it would seem lots of models would be consistent with these estimates—I came up with a very silly and highly abstract one in about 20 minutes.

Is Randomness Fair? While the model I explored above is not a very compelling one from a verisimilitude perspective, it is a useful benchmark for considering what Rank and Hirschl’s findings say about income inequality in the US.  Setting aside the question of whether (or, rather, for what purposes) “relative poverty” is a useful benchmark, the fact that many people will at some point be relatively poor during their lifetime at first seems disturbing.  But, for someone interested in fairness, it shouldn’t necessarily be.  This is because relative poverty is ineradicable: at any point in time, exactly 20% of people will be “poor” under Rank and Hirschl’s benchmark.[4]  In other words, somebody has to be the poorest person, two people have to compose the set of the poorest two people, and so forth.

Given that somebody has to be relatively poor at any given point in time, it immediately follows that it might be fair for everybody to have to be relatively poor at some point in their life: in simple terms, maybe everybody ought to share the burden of doing poorly for a year. Note that, in my silly model, the distribution of incomes is not completely fair.  Even though shocks to incomes—the Income Fluctuations—are independently and randomly (i.e., fairly) distributed across individuals, the baseline incomes establish a preexisting hierarchy that may or may not be fair.[5] For simplicity, I will simply refer to my model as being “random and pretty fair.”

Of course, under a strong and neutral sense of fairness, this sharing would be truly random and unrelated to (at least immutable, value neutral) characteristics of individuals, such as gender and race.  Note that, in my “random and pretty fair” model, the heterogeneity of Base Incomes implies that the sharing would be truly random or fair only in the limit as the Volatility Factor diverges to $\infty$.

Rank and Hirschl’s analysis probes whether the “sharing” observed in the real world is actually fair in this strong sense and, unsurprisingly, finds that it is not independent:

Those who are younger, nonwhite, female, not married, with 12 years or less of education, and who have a work disability, are significantly more likely to encounter a year of poverty or extreme poverty. (pp.7-8)

This, in my mind, is the more telling takeaway from Rank and Hirschl’s piece—many of the standard determinants of absolute poverty remain significant predictors of relative poverty.  The reason I think this is the more telling takeaway follows on the analysis of my silly model: a high frequency of experiencing relative poverty is not inconsistent with a “pretty fair” model of incomes, but the frequency of experiencing poverty being predicted by factors such as gender and race does raise at least the question of fairness.

With that, and for my best friend, co-conspirator, and partner in crime, I leave you with this.

______________

[1]Note that, when the Volatility Factor is less than or equal to 1, individuals’ ranks are fixed across time: the top earner is always the same, as are the bottom 20%, the bottom 10%, and so forth.  It’s a very boring world.

[2]Also, as always when I do this sort of thing, I am very happy to share the Mathematica code for the simulations if you want to play with them—simply email me. Maybe we can write a real paper together.

[3] The top 1% percentage is taken from this PLoS ONE article by Rank and Hirschl.

[4] I leave aside the knife-edge case of multiple households having the exact same income.

[5] Whether such preexisting distinctions are fair or not is a much deeper issue than I wish to address in this post.  That said, my simple argument here would imply that such distinctions, because they persist, are at least “dynamically unfair.”

The Statistical Realities of Measuring Segregation: It’s Hard Being Both Diverse & Homogeneous

This great post by Nate Silver on fivethirtyeight.com prodded me to think again about how we measure residential segregation.  As I am moving from St. Louis to Chicago,[1] this topic is of great personal interest to me.  Silver’s post names Chicago as the most segregated major city in the United States, according to what one might call a “relative” measure.

Silver rightly argues that diversity and segregation are two related, but distinct, things.  To the point, meaningful segregation requires diversity: if a city has no racial diversity, it is impossible for that city to be (internally) segregated.  However, diversity of a city as a whole does not imply that the smaller parts of the city are each also diverse.  One way to distinguish between city-wide diversity and neighborhood-by-neighborhood diversity is by using diversity indices at the different levels of aggregation.  Silver does this in the following table.

Citywide and Neighborhood Diversity Indices, Fivethirtyeight.com

Citywide Diversity. For any city C, city C‘s Citywide Diversity Index (CDI) is measured according to the following formula:

$CDI(C) = 1 - \sum_{g} \left(\frac{pop^C_g}{Pop^C}\right)^2$,

where $pop^C_g$ is the number of people in group g in city C and $Pop^C$ is the total population of city C.  Higher levels of CDI reflect more even populations across the different groups.[2]

Neighborhood Diversity. For any city C, let N(c) denote the set of neighborhoods in city C, let $pop^n_g$ denote the number of people in group g in neighborhood n, and let $Pop^n$ denote the total population in neighborhood n.  Then city C’s Neighborhood Diversity Index (NDI) is measured as follows:

$NDI(C) = 1 - \sum_{g} \left(\frac{pop^C_g}{Pop^C}\right)\sum_{n}\frac{\left(pop^n_g\right)^2}{pop^C_g Pop^n}$.

In a nutshell, the NDI measures how similar the neighborhoods are to each other in terms of their own diversities.  Somewhat ironically, the ideally diverse city is one in which, viewed collectively, the neighborhoods are themselves homogeneous with respect to their composition: they all “look like the city as a whole.”
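Both indices are easy to compute directly from the definitions. The sketch below (Python; the data layout—a city as a list of neighborhoods, each a dict mapping group to population—is my own) uses the algebraically equivalent form of the NDI as a population-weighted average of within-neighborhood concentrations:

```python
def cdi(city):
    """Citywide Diversity Index: 1 minus the sum of squared citywide group shares."""
    groups = {g for hood in city for g in hood}
    total = sum(sum(hood.values()) for hood in city)
    return 1 - sum((sum(hood.get(g, 0) for hood in city) / total) ** 2 for g in groups)

def ndi(city):
    """Neighborhood Diversity Index: 1 minus the population-weighted average
    of each neighborhood's sum of squared group shares."""
    total = sum(sum(hood.values()) for hood in city)
    return 1 - sum(
        (sum(hood.values()) / total)
        * sum((n / sum(hood.values())) ** 2 for n in hood.values())
        for hood in city
    )

# A city whose neighborhoods each mirror the city as a whole: NDI equals CDI.
mirrored = [{'x': 10, 'y': 10}, {'x': 20, 'y': 20}]
# A fully segregated city with the same citywide mix: NDI collapses to zero.
segregated = [{'x': 30}, {'y': 30}]
```

For mirrored, both indices equal 0.5; for segregated, the CDI is still 0.5 but the NDI is 0, which is the sense in which meaningful segregation requires diversity.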

(This turns out to be one of the central challenges to comparing two or more cities with different CDIs on the basis of the NDIs.  More on that below.)

A Relative Measure of Segregation. In order to account for both measures of diversity, Silver constructs the “Integration/Segregation Index,” or ISI.  The ISI measures how much more (e.g., Irvine) or less (e.g., Chicago) integrated the city is at the neighborhood level relative to how much integrated it “should” be, given its citywide diversity. This makes more sense with the following figure from Silver’s post.

Neighborhood Diversity Indices vs. Citywide Diversity Indices, Fivethirtyeight.com

Silver’s analysis basically uses the 100 largest cities in the US to establish an “expected” neighborhood diversity index based on the citywide diversity index.[3] Then, Silver’s ISI is (I think) the size of the city’s residual in this analysis—the difference between the city’s neighborhood diversity index and the city’s “predicted” or “expected” neighborhood diversity index, given the city’s citywide diversity index.  Thus, Chicago is the most segregated under this measure because it “falls the farthest below the red line” in the figure above.

This is all well and good, though one could easily argue that the proper normalization of this measure would account for the city’s citywide diversity index, because the neighborhood diversity index is bounded between 0 and the citywide diversity index.  Thus, Baton Rouge or Baltimore might be performing even worse than Chicago, given their lower baseline, or Lincoln might be performing even better than Irvine, for the same reason.[4]

In any event, my attention was drawn to this statement in Silver’s post:

But here’s the awful thing about that red line. It grades cities on a curve. It does so because there aren’t a lot of American cities that meet the ideal of being both diverse and integrated. There are more Baltimores than Sacramentos.

I assume that Silver is using the term “curve” in the colloquial fashion, as opposed to referring to a nonlinear regression model: Silver is stating that, because the ISI is measured relative to the expected value of the NDI calculated from real (and segregated) cities, cities with high CDI scores will tend to underperform relative to cities with lower CDI scores.

As alluded to above, this result could be at least partly artifactual because cities with higher CDIs have more absolute “room” to underperform.  More interesting, however, is to consider what Silver is holding forth as “absolute performance.”  The 45 degree line in the figure above represents the “ideal” NDI-to-CDI relationship: any city falling on this line (as Lincoln and Laredo essentially do) is as diverse at the neighborhood level as it can be, given its CDI.  Note that any city with a CDI equal to zero (i.e., a city composed entirely of only one group) will hit this target with certainty.

That got me to thinking: cities with higher CDIs might have a “harder time” performing at this theoretical maximum.  The statistical logic behind this can be sketched out using an analogy with flipping a possibly biased coin and asking how likely a given set of, say, 6 successive flips is to be representative of the coin’s bias.  If the coin always comes up heads, then of course every set of 6 successive flips will contain 6 heads; but if the coin is fair, then a set of six successive flips will contain exactly 3 heads and 3 tails only

$\binom{6}{3} \left(\frac{1}{2}\right)^6 =\frac{5}{16}$,

or 31.25% of the time.  Cities with higher CDI scores are like “fairer” coins from a statistical standpoint: they have a harder target to hit in terms of what one might call “local representativeness.”
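The arithmetic is easy to check, and the contrast with a biased coin makes the point concrete (the 5/6 bias below is my own illustrative choice):

```python
from math import comb

# A fair coin: the chance that 6 flips are exactly representative (3 heads).
p_fair = comb(6, 3) * (1 / 2) ** 6              # 20/64 = 5/16 = 0.3125

# A biased coin (heads 5/6 of the time): the chance that 6 flips contain
# exactly 5 heads, the count closest to its expected 5 heads per 6 flips.
p_biased = comb(6, 5) * (5 / 6) ** 5 * (1 / 6)  # (5/6)^5, roughly 0.402

# The fairer coin has the harder "local representativeness" target to hit.
assert p_biased > p_fair
```

In the same way, a city with a higher CDI (a “fairer coin”) is less likely to have every neighborhood land exactly on the citywide proportions, even under fully random sorting.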

To test my intuition, I coded up a simple simulation. The simulation draws 100 cities, each containing a set of neighborhoods, each of which has a randomly determined number of people in each of five categories, or “groups.”  I then calculated the CDI and NDI for each of these fake cities, plotted the NDI versus CDI as in Silver’s figure above, and also calculated a predicted NDI based on a generalized linear model including both $CDI$ and $CDI^2$.  The result of one run is pictured below.

Simulated NDI vs. CDI

What is important about the figure—which qualitatively mirrors Silver’s figure—is that it is based on an assumption of unbiased behavior—it is generated as if people located themselves completely randomly.[5] Put another way, the simulations assume that individuals cannot perceive race.
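For what it’s worth, the simulation just described can be sketched roughly as follows (Python rather than the original Mathematica; the group proportions, neighborhood counts, and Poisson draws are my illustrative assumptions, not the original implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
n_cities, n_groups = 100, 5

cdi_list, ndi_list = [], []
for _ in range(n_cities):
    n_hoods = int(rng.integers(10, 50))
    # Expected group shares vary by city around five illustrative proportions.
    shares = rng.dirichlet(np.array([0.62, 0.13, 0.17, 0.01, 0.05]) * 20)
    # Each neighborhood's group counts are independent Poisson draws:
    # "color-blind" location decisions.  Small neighborhoods (mean 50 people)
    # make the sampling effect visible.
    pops = rng.poisson(lam=shares * 50, size=(n_hoods, n_groups)).astype(float)
    city_pop = pops.sum()
    hood_pops = pops.sum(axis=1)
    cdi_list.append(1 - np.sum((pops.sum(axis=0) / city_pop) ** 2))
    hood_hhi = np.sum((pops / hood_pops[:, None]) ** 2, axis=1)
    ndi_list.append(1 - np.sum((hood_pops / city_pop) * hood_hhi))

cdi_arr, ndi_arr = np.array(cdi_list), np.array(ndi_list)
# Predicted NDI as a quadratic in CDI, analogous to the generalized linear fit.
coeffs = np.polyfit(cdi_arr, ndi_arr, 2)
```

Even with fully random placement, the simulated NDI falls short of the CDI, and the shortfall grows with the CDI: more diverse cities face the harder target.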

So what?  Well, this implies two points, in my mind.

1. The “curve” described by Silver is not necessarily emerging because bigger and more diverse cities are somehow “more accepting” of local segregation than are less diverse cities.  Rather, from a purely statistical standpoint, diverse cities are being scored according to a tougher test than are less diverse cities.
2. Silver’s ISI index is better than it might appear at first, because I think the “red line” is actually, from a statistical standpoint, a better baseline/normative expectation than the 45 degree line.

The final point I want to make, which is not addressed by my own analysis, is that Silver’s measure takes as given (or, perhaps, leaves essentially unjudged) a city’s CDI.  Thus, to look better on the ISI, a city should limit its citywide diversity, which is of course ironic.

With that, I leave you with this.

_________________

[1] And prior to moving to St Louis, I lived in Boston, Pittsburgh, Los Angeles, Chapel Hill, NC, London, Durham, NC, and Greensboro, NC.

[2] The details are a bit murky (and that’s perfectly okay, given that it’s a blog post), but are alluded to here.

[3] The maximum level of CDI—the “most diverse score” possible—is $1-\frac{1}{\text{Number of Groups}}$.  Thus, this measure is problematic to use when comparing cities that have measured “groups” in different ways.

[4] For example, one could use the following quick and dirty normalization:

$\frac{ISI(C)}{CDI(C)}$.

[5] An implementation detail, which did not appear to be too important in my trials, is that the five groups have expected sizes following the proportions of White, Black, Hispanic, Native American, and Asian American census groups in the United States, respectively.  This leads to the spread of CDI estimates looking very similar to those in Silver’s analysis, with the predictable exception of some extreme outliers like Sacramento and Laredo.

Cotton Pickin’?

[This is the first ever guest post on Math Of Politics. If you’re interested in posting on Math Of Politics, drop me an email.]

To understand the Cotton letter, we need to think about the operation of treaties. Treaties are like contracts, designed to solidify current behaviors or constrain future behaviors for some period of time. Treaties fail when the prescribed behaviors are no longer individually rational for at least one of the parties to maintain. Failures can occur almost instantly – consider some of the recent treaties regarding Ukraine. Alternatively, treaties can teeter on the verge of failure (or success) for quite some time.[1] Of course, individual rationality is best understood in the eyes of the beholders, and holders of political office come and go (as Cotton reminded everyone). Is political succession problematic for treaties? Elections are known, but who will win the next presidential election is unknown. In the words of a former defense secretary, these are known unknowns. But there are also unknown unknowns. What the hell is happening in this world? Summing up, some exogenous shocks are more exogenous than others. Treaties are not automatically vulnerable (or invulnerable) to exogenous shocks, but it is easier to brace for the known unknowns than the unknown unknowns.

So, why did Cotton remind the Iranian leaders that the U.S. has regular elections and that treaties can fail?

Ostensibly, Cotton was worried about the sort of treaty being negotiated. To what type of treaty will leaders submit? A desperate leader might agree to a treaty with very little lasting power. Consider 24 or 48-hour ceasefires. Does the success or failure of Obama’s last two years depend on securing any sort of agreement with Iran? I think it’s reasonable to suggest that Obama secures few immediate brownie points for his negotiations with Iran, so the possibility of long-term gains likely play the dominant role. But if Obama is not focused on short-term political gains, he needs a treaty with some lasting power.[2]

Of course, a treaty with lasting power must be somewhat invulnerable to regular electoral shocks – our known unknowns. To think that Obama would agree to a treaty that will be undone as soon as he leaves office is to suggest that Obama has only short-term gains in mind. Long-term gains would be impossible because political succession would ensure the renegotiation and demise of Obama’s treaty. Unless presidents cannot imagine the world after they leave office, some subgame calculations seem warranted. My guess is that the president and the Iranian leaders were both comfortable assessing the implications of elections and other known unknowns.

So, why did Cotton write the letter? I can see two possibilities.

Possibility 1. Cotton et al. truly do presume that the Iranian leaders are unaware of the role of executive agreements in U.S. international affairs. This is possible, but unlikely. Newcomers to chess might like to think “he won’t see this,” but chess is a game of complete information. Everything is upright, literally above board. Similarly, executive agreements do not hide in the nether reaches of constitutional authority.

Possibility 2. Cotton et al. do believe that Iranian leaders are aware of the role of executive agreements. Cotton simply wants to emphasize that any agreement would be tied to a particular Democratic president and not a succeeding president. This could make sense if Cotton were to believe that the negotiators are not employing subgame perfection. Suppose Cotton does manage to convince the Iranian leadership that the agreement is designed for failure. If Cotton were convincing, the Iranians could heavily discount long-term costs and implications.[3] Reluctant or hardline Iranians might be more willing to accede to a treaty that is projected by the political cognoscenti to collapse in two years. Agreeing to a ten-year treaty is trickier. Thanks, Senator Cotton.

In the end, perhaps Cotton et al. did not use any subgame thinking of their own. That is, the authors of this infamous letter are complaining about a path of the game tree, but they are unaware that the offending path is well removed from the equilibrium path.[4] It is reasonable to consider and debate possible equilibrium paths. Different equilibria typically treat parties very differently. Not all equilibria are worthy of our support. But to debate something off the equilibrium path seems to be a waste of everyone’s time – unless it is meant for crass political consumption. That the Iranian leadership failed to bite is telling. That 47 senators are still running a “Nobama” campaign is also telling. That a newly minted senator can secure 46 Republican signatures for a letter of questionable value bodes ill for the Grand Ol’ Party. There are few gains from debating the merits of non-equilibrium paths.

With that, I leave you with this.

_______________________

[1] By some measures, negotiations with Iran have been ongoing for over a decade.

[2] The Iranians, who have their own concerns about political succession, are also likely focusing on long-term gains.

[3] As an analogy, consider a bank note. Suppose a large 30-year bank loan is problematic for a borrower. If mysteriously the bank were to dissolve with 50% likelihood after two years, the borrower’s long-term situation is less problematic. If mysteriously the bank itself were to dissolve entirely, the borrower’s situation is not problematic at all.

[4] This is not the time to consider trembling hand perfection or other equilibrium refinements that might allow one to imagine being off the equilibrium path.

How Two People’s Rights Can Do Both People Wrong: Vaccines & (Anti-)Social Choice Theory

Vaccination, both in terms of its social good and the role of government in securing that social good while respecting individual liberty, has been a hot topic lately.  In fact, it’s gone viral. (HAHAHAHA.  Sorry.)  In this short post, I link the debate about vaccination, liberty, and social welfare, with the work of Amartya Sen, a preeminent social choice theorist who won the 1998 Nobel Memorial Prize in Economic Sciences.

The Vaccination Paradox. Suppose that—due to there only being one dose of the measles vaccine available—two families, A and B, each with a single child, a and b, are confronted with choosing which child (if any) to vaccinate against measles.  The choices are “a: vaccinate child a,” “b: vaccinate child b,” “n: vaccinate neither child.”

Family A would prefer that child b get vaccinated because child a has a compromised immune system, but would prefer that child a get vaccinated rather than that neither child get vaccinated.  In other words, Family A‘s preference over the three outcomes is:

b > a > n.

Due to personal beliefs, Family B is opposed to vaccination for anyone, but due to child a‘s situation, prefers that child b get vaccinated rather than child a.  Thus, Family B’s preference over the three outcomes is:

n > b > a.

Now, suppose that a government agency is tasked with choosing whether to vaccinate a child and, if so, which one.  Furthermore, suppose that the government agency is required to respect the families’ wishes with respect to their own children.  That is, if either family prefers having nobody vaccinated to having their own child vaccinated, then their child is not vaccinated (i.e., the government agency is required to grant an “opt-out” exemption to each family).

What would the result be?  The opt-out exemption requirement implies that Family A is decisive with respect to a versus n, so that n will not occur: child a will be vaccinated if child b is not.  Similarly, Family B is decisive with respect to b versus n, so that b will not occur: child b will not be vaccinated. Accordingly, because the government agency cannot choose n, and it cannot choose b, it must choose a.  Because the government agency is required to respect individual rights to opt out, child a will receive the vaccine.

Okay.  But, wait… the government agency has (implicitly) ranked the three possible vaccination choices as

a >> n >> b,

so that in spite of both families agreeing that they prefer that child b be vaccinated rather than child a:

b > a,

the government agency—because it is respecting individual rights—must vaccinate child a instead of child b.
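The argument can be checked mechanically; here is a small sketch in Python (the encoding of preferences and rights is mine):

```python
# Outcomes: 'a' = vaccinate child a, 'b' = vaccinate child b, 'n' = neither.
pref_A = ['b', 'a', 'n']   # Family A: b > a > n
pref_B = ['n', 'b', 'a']   # Family B: n > b > a

def prefers(pref, x, y):
    """True if outcome x is ranked above outcome y in the preference list."""
    return pref.index(x) < pref.index(y)

feasible = {'a', 'b', 'n'}

# Rights: each family is decisive over vaccinating its own child versus nobody.
if prefers(pref_A, 'a', 'n'):   # A prefers a to n, so n cannot be chosen
    feasible.discard('n')
if prefers(pref_B, 'n', 'b'):   # B prefers n to b, so b cannot be chosen
    feasible.discard('b')

chosen = feasible.pop()          # the only surviving outcome: 'a'

# Pareto failure: both families rank b above the chosen outcome.
pareto_dominated = prefers(pref_A, 'b', chosen) and prefers(pref_B, 'b', chosen)
```

Respecting the two opt-out rights leaves only a, even though both families rank b above a: the chosen outcome is Pareto dominated.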

This is an example of the Liberal Paradox (or Sen’s Paradox), which states that no policymaking system can simultaneously

1. be committed to individual rights and
2. guarantee Pareto efficiency.

This paradox is at the heart of a surprising number of political/social conundrums. One basic reason it emerges is that individual rights are in a sense absolute and not conditioned on the preferences of others.  That is, if Families A and B could somehow write a binding contract and Family B knew/believed Family A‘s preferences, then Family B would agree to sign away their right to decline the vaccination for child b.

I’ll leave this here, but my limited take-away point is this: individual rights are important, but even in situations in which their definition seems straightforward, there’s no free lunch: individual rights can come into conflict with social welfare.  That’s not saying that individual rights should be sacrificed, of course.  But it is saying that preserving individual rights does not always maximize social welfare.

Default In Our Stars: Kant-ankerous Varoufakis

The Greek Tragedy is a “thing.” And lately it has reemerged.  The question at the heart of this post is how one should bargain when caught between a rock and a hard place.[0]  This point was raised and discussed very well by Henry Farrell in this piece, which was responding to this op-ed in the NY Times by Yanis Varoufakis, the finance minister of Greece and, in earlier times at least, a game theorist. Varoufakis claims in his op-ed to essentially disown game theory in pursuit of bigger, and of course more noble, goals.

I am not actually interested in what Varoufakis’s true goals are here.  Instead, I am going to attack the face validity of the claim that he is not “busily devising bluffs, stratagems and outside options” — because I am going to argue that he was (at least arguably[1]) strategically using that very op-ed as a stratagem: it makes it seem less likely that he is bluffing precisely because the op-ed alters his outside options.

Varoufakis claims in his op-ed that “[t]he trouble with game theory, as I used to tell my students, is that it takes for granted the players’ motives.”

wait…give me a second…
…oh my goodness…
…I just vomited a little in my mouth.

Look, sure… the first six weeks of an introductory game theory course do this, just as physics first starts out neglecting air resistance.  But, really, does game theory “take for granted” the players’ motives?

Answer: OH HELL NO, GAME THEORY DOES NOT TAKE PLAYERS’ MOTIVES FOR GRANTED.

The Ironic Emergence of Concerns About Reputation.  The classic case of game theory not taking for granted players’ motives is something known as “the chain store paradox,” which poses the question of when and whether someone would be willing to incur losses to themselves so as to establish a reputation for toughness.  To be succinct: such a reputation is exactly what Varoufakis is (putatively) attempting to establish in the op-ed.  The fact that game theory is entirely and completely consonant with such behavior was established no later than 1982, when both David Kreps & Robert Wilson and Paul Milgrom & John Roberts independently showed that game theory can and does predict that individuals will have an incentive to fake having “tough” or “principled” beliefs so that otherwise “irrational” decisions make sense to, or are believed by, an opponent.  As the articles by Kreps, Milgrom, Roberts, & Wilson[2] show, it is often the case that a “pure bottom line” player will have an incentive — in repeated negotiations/interactions — to act as if the player has a purpose other than “the bottom line”, regardless of whether this other thing (in Varoufakis’s case, it’s something called “doing the right thing”) is something that is deemed “irrational.”  The reason for this, in intuitive terms, is reputation.  In a sense, Kreps, et al. saved game theory 33 years before Varoufakis attempted to throw it under the bus by showing that, against some naive expectations, it is consistent with common sense.  The bully on the playground need not actually like hitting people; he might just be someone who really likes not being hit and accordingly “pays it forward” by beating a few people up so as to make others think that he likes fighting, thereby making others less likely to challenge him in the future.

Thomas Schelling is a genius, and properly credited by Farrell for offering an erudite understanding of the dynamic that Farrell discusses.  However, Farrell focuses exclusively on “appearing irrational” in his discussion.  Moving beyond simple “Varoufakis versus the EU” narratives, Schelling and others (including Bob Putnam and Andy Kydd) have commented on the importance of hiring mediators who are themselves biased/irrational.  The basic idea is the same as that behind hiring a hit man whom you don’t know and who can’t be recalled: hiring a crazy “agent” is a commitment device that makes your negotiating partner change his or her valuation of holding out against your demands.[3]

Following this logic, presumably Varoufakis was installed as finance minister precisely because he is a very good game theorist.  And, to boot, he was installed by a government that itself is worried not only about interactions with the EU, but also with the citizens of Greece in upcoming elections.  The question, then, is whom do you hire to go bargain your way out of the absolute poop-storm of debt and austerity that surrounds you?

On one hand, you could install a technocrat who wants to make markets easy and handle things in a mechanistic and (economically/technically) efficient way.  But, to be short about it, economic/technical efficiency is irrelevant to most voters.  Such a technocrat would have a hard time sealing any deal struck with the lenders, because he or she could not sell it to the voters who form the ruling coalition.  Accordingly, such a technocrat would have little leverage at the bargaining table with the lenders in the first place.[4]

On the other hand, you could hire a true-believing, firebrand populist who will quickly and unabashedly pursue a “forgive, haircut, or default” strategy with the EU.  That person would cause other problems: short-term populist gains, but long-term fiscal problems that would probably undermine the ruling coalition.  And (unless that person is strategic, see below) the firebrand will also wield little bargaining power, because his or her goals are too extreme and he or she would prefer to walk away.

Finally, you could install someone who is widely believed to be an excellent bargainer.  You know, like an internationally recognized game theorist.  Then suppose that this individual announces that he or she does not believe in being strategic, that he or she is just committed to getting the “right” outcome for the country.  (From the op-ed: Varoufakis promises to “reveal the red lines beyond which logic and duty prevent us from going,” alleges that the “circumstances” dictate that he “must do what is right not as a strategy but simply because it is … right,” and even invokes Immanuel Kant!)  Now suppose that the voters to some degree “believe” the game theorist insofar as they become more willing to support a somewhat technocratic deal, falling somewhere short of absolute forgiveness.

The arguments of Kreps, et al. imply that a smart game theorist should say the things that Varoufakis said in his op-ed.  If voters respond as supposed above (i.e., believe the statements even a little bit), this increases the credibility with which he can negotiate with the lenders.  Note that the voters’ beliefs that the game theorist actually has stopped believing in being strategic should be stronger if the game theorist takes a very public stand (say, you know, in an op-ed in a globally read newspaper) to that effect,[4] and especially if, as I have pointed out, he aims his arrow at what at least once was his bread and butter.[5]

Conclusion: Varoufakis Doth Protest Too Much.  I actually applaud Varoufakis for the strategy I see him playing (not that he should care, of course).  Nonetheless, I think that he went further than he needed to by parroting a frequently tossed-about and grossly inaccurate criticism of game theory.  Of course this is ironic.  I can only hope that at some point in the future, Varoufakis might fess up to the gambit.  Regardless of whether he does or doesn’t “believe in” game theory, I am under no impression that he does not believe in being strategic.  Especially not after reading his op-ed.

_______

[0] This post is about game theory, and good game theorists would advise one to think about how not to wind up being between a rock and a hard place in the first, ahem, place.

[1] I am getting tired of the academic tradition of admitting that perhaps I might not be right.  Of course, I might not be right.  But, that said, this is one of those “every 18 months or so” arguments where I can say “well, if I’m not right, then I am right, because that’s the crazy kind of bull-hooey that emerges in strategic situations.”  And, yes, “bull-hooey” is jarring technical jargon, which is why I put it in a footnote.

[2] These four giants of game theory are, because of their multiple contributions to this incredibly seminal 1982 issue of the Journal of Economic Theory, sometimes referred to as the Gang of Four, a reference that will hopefully still please at least a few people in the set of “game theory and awesome rock fans.”  But seriously, each of these four has contributed huge ideas, separately and in combination, to game theory for 3+ decades, and for Varoufakis to pretend otherwise is absolutely offensive.  I only say that because he has coauthored a textbook on game theory.  He should know better (for example, see Section 3.3.4 of the linked textbook).  I also say this because I have the privilege of writing a blog that is at best occasionally clicked on by (some of) my family members.  But, again, I LEARNED THIS STUFF AS AN UNDERGRADUATE.

[3] A great piece about how this works between chambers of Congress was written by Sean Gailmard and Tom Hammond.

[4] This is arguably an example of what some social scientists call “audience costs.”

[5] This is akin to the notion of “burning one’s boats,” in which one eliminates or reduces the attractiveness of backing down at some future point so as to make one’s demands more credible in the present.

On The Possibility of An Ethical Election Experiment

The recent events in Montana have sparked a broad debate about the ethics of field experiments (I’ve written once and twice about it, and other recent posts include this letter from Dan Carpenter, this Upshot post by Derek Willis, and this Monkey Cage post by Dan Drezner).  I wanted to continue a point that I hinted at in my first post:

[T]he irony is that this experiment is susceptible to second-guessing precisely because it was carried out by academics working under the auspices of research universities.  The brouhaha over this experiment has the potential to lead to the next study of this form—and more will happen—being carried out outside of such institutional channels.  While one might not like this kind of research being conducted, it is ridiculous to claim that it is better for it to be performed outside of the academy by individuals and organizations cloaked in even more obscurity.  Indeed, such organizations are already doing it; at least this kind of academic research can provide us with some guess about what those other organizations are finding.

Personal communications with colleagues and readers indicated that Paul Gronke was not alone in interpreting my message in that passage as something like “well, others intervene in elections in unethical ways, so scholars don’t need to worry about ethics.”  That was not my intent.  Rather, I was trying to make the point that interventions by academic researchers are more likely to be transparent and, accordingly, capable of being judged on ethical grounds, than interventions by others.  Of course, that is a contention with which one might disagree, but I’ll take it as plausible for the purposes of the rest of this post.[1]

Reflecting further on the ethics of field experiments led me to a classical social choice result known as the liberal paradox, first described by Amartya Sen.  The paradox is that respecting individual rights can lead to socially inferior outcomes.  The secret of the paradox is that sometimes our preferences over our actions depend on what others do (also known as “nosy preferences”).

The link between the paradox and the ethics of experimenting on elections arises in the following simple way.  Let’s choose between four possible worlds, depending on whether scholars and/or political parties do field experiments on elections, and let’s take my assertion about the value of open academic research as given, so that “society’s preference” is as follows:[2]

1. Nobody does any field experiments on elections, (the “best” option)
2. Scholars do field experiments on elections, political parties do not,
3. Both scholars and political parties do field experiments on elections, and
4. Partisan researchers do field experiments on elections, scholars do not (the “worst” option).

Then, let’s suppose that we have two principles we’d like to respect:

• Noninterference in Elections: Field Experiments on Elections are Unethical if They Might Affect the Election Outcome.
• Free Speech: Political Parties Are Allowed to Do Experiments If They Choose to.

It is impossible to respect these (reasonable) principles and maximize social welfare.  Here’s the logic:

1. If a field experiment might affect an election, then some political party will want to do it, but the experiment would be considered unethical.
2. Thus, if a field experiment is unethical and we respect Free Speech, then some political party will do the field experiment.
3. But if scholars behave in accordance with Noninterference, then they will not perform a field experiment that might affect the election outcome.
4. This leads to the outcome “Partisan researchers do field experiments on elections, scholars do not,” which is clearly inefficient.  Indeed, it is the worst possible outcome from society’s standpoint.
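Because the argument is just bookkeeping over four outcomes, it can be checked mechanically.  Here is a minimal sketch; the boolean encoding and the premise that some party always wants to run the experiment are simplifications of the logic above:

```python
# Sketch: enumerate who experiments, apply the post's "social preference" ranking,
# and check which outcome the two principles jointly force.
from itertools import product

# (scholars_experiment, parties_experiment) -> social rank (1 = best, 4 = worst)
rank = {(False, False): 1, (True, False): 2, (True, True): 3, (False, True): 4}

def respects_principles(scholars, parties):
    noninterference = not scholars   # scholars abstain from outcome-affecting experiments
    free_speech = parties            # a party that wants to experiment is allowed to run it
    return noninterference and free_speech

feasible = [o for o in product([True, False], repeat=2) if respects_principles(*o)]
print(feasible)            # [(False, True)]
print(rank[feasible[0]])   # 4
```

Respecting both principles leaves exactly one feasible world, and it is the bottom-ranked one.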

It is not my intent to judge the ethics of any particular field experiment study here, and I do believe that there are plenty of unethical designs for field experiments.  However, I am rejecting the notion that a field experiment on an election is ethical only if it does not affect the outcome of the election.  This is because it is precisely in these cases that others will do these experiments in non-transparent ways.  This is not the same as saying “other groups do unethical things, so scholars should too.”  Rather, this is saying “groups are intervening in elections in both ethical and unethical ways, so it is important for scholars to transparently learn from and about election interventions in ethical ways.”  To say that potentially affecting an election outcome is presumptively unethical implies that a scholar who values ethical behavior will never learn about how election interventions that are occurring work, what effects they might be having on us individually and collectively, and how society might better leverage the interventions’ desirable effects and mitigate their undesirable effects.

____________

[1] Relatedly and more generally, my post has (perhaps understandably) been read as defending all field experiments on elections.  My intent, however, was two-fold: (1) guaranteeing that a field experiment will have no effect on the outcome requires the experiment to be useless, and thus this is too strong a requirement for a reasonable notion of ethicality, and (2) coming up with a reasonable notion of ethicality requires taking (social choice) theory seriously during the design of the field experiment.

[2] One can substitute any private corporation/interest/government agency/conspiracy one wants for “political parties.”

Nothing gets political scientists as excited as elections.  In this previous post, I discussed the Montana field experiment controversy. In that post, I pointed out that the ethics of field experiments in elections—e.g., in which some people are given additional information and others are not—are complicated.  In the majority of the post, I was attempting to respond to claims by some that ethical field experiments must have no effect on the “outcome.”[1]

Moving back from us egg-heads and our science, it dawned on me that the notion of an intervention (or treatment) is quite broad.  In particular, any change in electoral institutions—such as early voting, voter ID requirements, or partisan/non-partisan elections, to name a few—is, setting intentions aside, equivalent to a field experiment.[2]  By considering this analogy in just a bit more detail, I hope to make clear the point of my original post, which was that

In the end, the ethical design of field experiments requires making trade-offs between at least two desiderata:

1. The value of the information to be learned and
2. The invasiveness of the intervention.

Whenever one makes trade-offs, one is engaging in the aggregation of two or more goals or criteria […] and thus requires thinking in theoretical terms before running the experiment.  One should have taken the time to think about both the likely immediate effects of the experiment and also what will be affected by the information that is learned from the results.

Along these lines, consider the question of whether one should institute early voting.  There is a trade-off to consider.  On the “pro” side, early voting can enhance/broaden participation.  On the “con” side, early voting can allow people to cast less-than-perfectly informed votes, because they vote before the election campaign is over.[3]

So, is early voting ethical?  Well, the (strong and/or “straw man-ized”) arguments about the ethics of field experiments would imply that this experiment/intervention is ethical only if it doesn’t affect the outcome of the election.   It is nonsense to claim that we are collectively certain that early voting has no effect on election outcomes.[4]

So, then, the question would be whether the good (increased participation) “outweighs” the bad (uninformed voting).  If there are any voters who would have voted on election day, but vote early and then regret that they can’t vote on election day, this trade-off is contestable—it depends on (1) how important participation is to you and (2) how costly mistaken/uninformed voting is to you.  I’ll submit that these two weights are not universally shared.
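Since those two weights are doing all of the work, the contestability of the trade-off is easy to see in a few lines of arithmetic.  The function and every number below are purely illustrative assumptions, not estimates of early voting’s actual effects:

```python
# The trade-off in the text, as arithmetic; every number is an illustrative assumption.
def net_value(extra_participation, extra_uninformed_votes, w_participation, w_error):
    """Weigh the 'pro' (participation) against the 'con' (uninformed voting)."""
    return w_participation * extra_participation - w_error * extra_uninformed_votes

# Same hypothetical effects of early voting, evaluated with two different weightings:
print(net_value(100, 40, w_participation=1.0, w_error=1.0) > 0)  # True: favors early voting
print(net_value(100, 40, w_participation=1.0, w_error=4.0) > 0)  # False: opposes it
```

The same hypothetical facts support opposite conclusions under different weights, which is exactly why the trade-off is contestable.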

To be clear, I favor early voting.  But that’s because I think participation is per se valuable, and most individuals’ votes are not pivotal in most elections.  That is, I think that the second dimension—uninformed voting—doesn’t affect election outcomes very often and making participation less costly is a good thing for more general social outcomes beyond elections.

But you see, that evaluation—the conclusion that early voting is ethical—is based not only on my own values, but also on an explicit, non-trivial calculation.  In thinking about the Montana experiment and similar field experiments, my point is this: if you want to be ethical, you need to do some theorizing when designing your experiment. Because an experimental manipulation of an election is—in practice—equivalent to a “reform” of election administration.[5]

With that, I leave you with this.

_____

[1] The notion of what exactly counts as an outcome is unclear, but for this post it is okay to just consider the question of “who won the election?”

[2] I say “setting intentions aside” because critics of my position have focused on the researchers’ intentions (see Paul Gronke’s post, for example, which quotes a casual (and accurate) footnote from my previous post).

[3] I am not an expert in all forms of early voting.  However, it is the case that in some states at least (Texas, for example), once you’ve voted early, you can’t cancel the vote.

[4] See, I didn’t even get into the mess that follows when one tries to figure out what an ethical democratic/collective norm would be, which this necessarily must be, since it concerns collective outcomes.  Strong non-interference arguments in this context would nearly immediately imply that we should all follow Rousseau’s suggestion and each go figure out the common will on our own.

[5] You can easily port this argument over to the arguments about voter ID laws, where the trade-offs are between participation and voter fraud.

Well, In a Worst Case Scenario, Your Treatment Works…

Three political scientists have recently attracted a great deal of attention because they sent mailers to 100,000 Montana voters.  The basics of the story are available elsewhere (see the link above), so I’ll move along to my points.  The researchers’ study is being criticized on at least three grounds, and I’ll respond to two of these, setting the third to the side because it isn’t that interesting.[1]

The two criticisms of the study I’ll discuss here share a common core, as each centers on whether it is okay to intervene in elections.  They are distinguished by specificity—whether it was okay to intervene in these elections vs. whether it is okay to intervene in any election.  My initial point deals with these elections, which aren’t as “pure” as one might infer from some of the narrative out there, and my second, more general point is that you can’t make an omelet without breaking some eggs.  Or, put another way, you usually can’t take measurements of an object without affecting the object itself.

“Non-Partisan” Doesn’t Mean What You Think It Means.  The Montana elections in question are nonpartisan judicial elections.  The mailers “placed” candidates on an ideological scale that was anchored by President Obama and Mitt Romney.  So, perhaps the mailers affected the electoral process by making it “partisan.”  I think this criticism is pretty shaky.  Non-partisan doesn’t mean non-ideological.  Rather, it means that parties play no official role in placing candidates on the ballot.  A principal argument for such elections is a “Progressive” concern with partisan “control” of the office in question.  I’ll note that Obama and Romney are partisans, of course, but candidates for non-partisan races can be partisans, too.  Indeed, candidates in non-partisan races can, and do, address issues that are clearly associated with partisan alignment (death penalty, abortion, drug policy, etc.)  In fact, prior to this, one of the races addressed in the mailers was already attracting attention for its “partisan tone.” So, while non-partisan politics might sound dreamy, expecting real electoral politics to play in concert with such a goal is indeed only that: a dream.

Intervention Is Necessary For Learning & Our Job Is To Learn. The most interesting criticism of the study rests on concerns that the study itself might have affected the election outcome.  The presumption in this criticism is that affecting the election outcome is bad.  I don’t accept that premise, but I don’t reject it either.  A key question in my mind is whether the intent of the research was to influence the election outcome and, if so, to what end.  I think it is fair to assume that the researchers didn’t have some ulterior motive in this case.  Period.

That said, along these lines, Chris Blattman makes a related point about whether it is permissible to want to affect the election outcome.  I’ll take the argument a step further and say that the field is supposed to generate work that might guide the choice of public policies, the design of institutions, and ultimately individual behavior itself.  Otherwise, why the heck are we in this business?

Even setting that aside, those who argue that this type of research (known as “field experiments”) should have no impact on real-world outcomes (e.g., see this excellent post by Melissa Michelson) kind of miss the point of doing the study at all.  This is because the point of the experiment is to identify the impact of some treatment/intervention on individual behavior.  There are three related points hidden in here.  First, the idea of a well-designed study is to measure an effect that we don’t already have precise knowledge of.[2]  So, one can never be certain that an experiment will have no effect: should ethics be judged ex ante or ex post?  (I have already implied that I think ex ante is the proper standpoint.)

Second, it is arguably impossible to obtain the desired measurement without affecting the outcomes, particularly if one views the outcome as being more than simply “who won the election?”    To guarantee that the outcome is not affected implies that one has to design the experiment to fail in a measurement sense.

Third, the question of whether the treatment had an effect can be gauged only imprecisely (e.g., by comparing treated individuals with untreated ones).  Knowing whether one had an effect requires measuring/estimating the counterfactual of what would have happened in the absence of the experiment.  I’ll set this aside, but note that there’s an even deeper question lurking here if one wanted to think about how one would fairly or democratically design an experiment on collective choice/action situations.

So, while protecting the democratic process is obviously of near-paramount importance, if you want to have gold standard quality information about how elections actually work—if you want to know things like

1. whether non-partisan elections are better than partisan elections,
2. what information voters pay attention to and what information they don’t, or
3. what kind of information promotes responsiveness by incumbents,

then one needs to potentially affect election outcomes.  The analogy with drug trials is spot-on.  On the one hand, a drug trial should be designed to give as much quality of life to as many patients as possible.  But the question is, relative to what baseline?  A naive approach would be to say “well, minimize the number of people who are made worse off by having been in the drug trial.”  That’s easy: cancel the trial. But of course that comes with a cost—maybe the drug is helpful.  Similarly, one can’t just shuffle the problem aside by arguing for the “least invasive” treatment, because the logic unravels again to imply that the drug trial should be scrapped.

Experimental Design is an Aggregation Problem. In the end, the ethical design of field experiments requires making trade-offs between at least two desiderata:

1. The value of the information to be learned and
2. The invasiveness of the intervention.

Whenever one makes trade-offs, one is engaging in the aggregation of two or more goals or criteria.  Accordingly, evaluating the ethics of experimental design falls in the realm of social choice theory (see my new forthcoming paper with Maggie Penn, as well as our book, for more on these types of questions) and thus requires thinking in theoretical terms before running the experiment.  One should have taken the time to think about both the likely immediate effects of the experiment and also what will be affected by the information that is learned from the results.

This Ain’t That Different From What Many Others Do All The Time. My final point dovetails with Blattman’s argument in some ways.  Note that, aside from the matter of the Great Seal of the State of Montana, nothing that the researchers did would be inadmissible if they had just done it on their own as citizens.  Many groups do exactly this kind of thing, including non-partisan ones such as the League of Women Voters, ideological groups such as Americans for Democratic Action (ADA) and the American Conservative Union (ACU), and issue groups such as the National Rifle Association (NRA) and the Sierra Club.

Thus, the irony is that this experiment is susceptible to second-guessing precisely because it was carried out by academics working under the auspices of research universities.  The brouhaha over this experiment has the potential to lead to the next study of this form—and more will happen—being carried out outside of such institutional channels.  While one might not like this kind of research being conducted, it is ridiculous to claim that it is better for it to be performed outside of the academy by individuals and organizations cloaked in even more obscurity.  Indeed, such organizations are already doing it; at least this kind of academic research can provide us with some guess about what those other organizations are finding.[3][4]

With that, I leave you with this.

_____________

[1] One line of criticism centers on whether the mailer was deceptive, because it bore the official seal of the State of Montana.  This was probably against the law.  (There are apparently several other laws that the study might have violated as well, but this point travels to those as well.)  While intriguing because we so rarely get to discuss the power of seals these days, this is a relatively simple matter: if it’s against the law to do it, then the researchers should not have done so.  Even if it is not against the law, I’d agree that it is deceptive.  Whether deception is a problem in social science experiments is itself somewhat controversial, but I’ll set that to the side.

[2] For example, while the reason we went to the moon was partly about “because it’s there,” aka the George Mallory theory of policymaking, it was also arguably about settling the “is it made of green cheese?” debate.  It turns out, no.

[3] I will point out quickly that this type of experimental work is done all the time by corporations.  This is often called “market research” or “market testing.”  People don’t like to think they are being treated like guinea pigs, but trust me…you are.  And you always will be.

[4] This excellent post by Thomas Leeper beat me to the irony of people getting upset at the policy relevance of political science research.

So Many Smells, So Little Time: In Defense of “Stinky” Academic Writing

Steven Pinker recently offered a lengthy explanation of “Why Academics Stink At Writing.”  First, it is important to note that the title of Pinker’s post is misleading.  Indeed, as he points out early on, he is actually arguing about why academic writing is “turgid, soggy, wooden, bloated, clumsy, obscure, unpleasant to read, and impossible to understand?”  This is different from why academics stink at writing—and, indeed, the claim that “academics stink at writing” is itself an example of stinky writing, unless one likes sweeping, pejorative generalizations.

Pinker writes that “the most popular answer outside the academy is the cynical one: Bad writing is a deliberate choice.”  I’m inside the academy, and I want to offer a non-cynical “deliberate choice” explanation for why academic writing is dense and obscure.

Pinker gets close to my explanation later in the post.[1]  Specifically, Pinker attributes dense and obscure academic writing to “the writer’s chief, if unstated, concern … to escape being convicted of philosophical naïveté about his own enterprise.”

The dense and obscure nature of much scholarly writing, of which I am a frequent producer, is at least partly the result of the author’s need to convince the reader that the author knows what the hell he or she is talking about.

Qualifications (or “hedges,” in Pinker’s terminology) such as “almost,” “apparently,” “comparatively,” “relatively,” and so forth are not necessarily “wads of fluff that imply they are not willing to stand behind what they say.”

Rather, they ironically can serve as a way to make scholarly arguments more succinct while indicating thought by the author on the matter being described.  For example, suppose that I’m describing how members of Congress tend to vote.  I could say that “voting in Congress these days is partisan.”  Is that true?  Well, not exactly.  Is it pretty close to true?  Yes, in the sense that voting in Congress is highly correlated with partisanship: Members of either party tend to vote like their fellow partisans, and this correlation is stronger today than in much of American history.  But it’s not true that members always vote with their party’s leadership.  Thus, a more accurate statement—and one that reveals that one is thinking about the data more carefully—is as follows:

Voting in Congress these days is largely partisan.

Pinker describes a lot of words as “hedging,” and they’re not all the same.  Continuing the Congressional voting example, one might wonder why Members vote as they do.  Even if one thinks that the reader doesn’t need a qualifier like “largely,” the statement “Voting in Congress these days is partisan” is still unclear. For example, is the author claiming that Members of Congress vote as they do because of their partisanship?  That is, do Members of Congress simply follow their party’s directions when voting? This is an open question, it turns out.  Accordingly, a more accurate statement is

Voting in Congress these days is at least seemingly partisan.

Yes, that sentence is hedging.  For a reason—one conclusion a reader might draw from “Voting in Congress these days is partisan” is unwarranted.  Including the “at least seemingly” qualifier is not a wad of fluff to signal that I’m not willing to stand behind what I say—it’s a key part of what I want you to hear me saying.

I could go on, but I’ll conclude with the “math of politics” of this phenomenon.  Academic writing (and here I am thinking of writing intended to be subjected to peer review of some form) is dense and obscure because the written presentation of the research is necessarily an incomplete rendition of the research itself.  That is, peer review is about trying to verify the qualities of the argument, which often requires inferring about the processes of the research that are by necessity incompletely conveyed in the written work.  Dense and obscure writing—jargon, qualifiers, etc.—is a bigger manifestation of the typographical convention “[sic].”  When quoting a passage with an error, such as a misspelling or grammatical mistake, it is common practice to place “[sic]” immediately after the mistake.  This is done because the author needs to signal to the editors, reviewers, and readers that the mistake is not the author’s fault.  Importantly, though, it illustrates more than just that—“[sic]” also signals that the author noticed the mistake.

Academic writing has to be dense and obscure, i.e., tough to parse, precisely because most scholars study phenomena that are tough to parse.  To continue Pinker’s theme, then, one might say that scholarly writing “stinks” because the real world “has so many smells.” Ironically, academic writing is difficult to read because it is attempting to portray what is almost always a big and variegated reality: often, the appealing parsimony of a conversational style is insufficient to accurately convey the knowledge and findings of the author.

In conclusion, academic writing is a very complicated signaling game—and I don’t mean “game” in a derogatory sense—that is necessitated by the various constraints we all labor under: time, resources, page limits, and exhaustion in both mental and physical forms. Dense and obscure language is more costly and complicated than conversational language, but this costly complication is a requisite outcome of the screening process that scholarly work is rightly subjected to.

[1] I couldn’t quite figure out how to put this in the body of this post, but the point at which Pinker turns to this argument occurs in an ironic paragraph:

In a brilliant little book called Clear and Simple as the Truth, the literary scholars Francis-Noël Thomas and Mark Turner argue that every style of writing can be understood as a model of the communication scenario that an author simulates in lieu of the real-time give-and-take of a conversation. They distinguish, in particular, romantic, oracular, prophetic, practical, and plain styles, each defined by how the writer imagines himself to be related to the reader, and what the writer is trying to accomplish. (To avoid the awkwardness of strings of he or she, I borrow a convention from linguistics and will refer to a male generic writer and a female generic reader.) Among those styles is one they single out as an aspiration for writers of expository prose. They call it classic style, and they credit its invention to 17th-century French essayists such as Descartes and La Rochefoucauld.

To be clear, it took me a couple of reads to comprehend that paragraph.  A conversational style is Pinker’s ideal for clarity—so why include the parenthetical explanation of his gendered pronouns?

#Ferguson: The Racial Disconnect On Race

Yesterday, while actively following the events in Ferguson, I was asked the following by @GenXMedia:

White Suburban America seems riddled with apathy, excuses and disconnect about #Ferguson. Any ideas why?

Upon further prompting, it became clear that @GenXMedia wanted a response to each of the three things that White Suburban America is riddled with: apathy, excuses, and disconnect.

It is important to note that, as many of you know, this important topic does not fall squarely in my “wheelhouse.”  I mostly think about institutions and strategic models of politics.  That said, and with the usual warning that you get what you pay for, here’s my promised response.

Apathy. If we define apathy as anything less than intense interest in the unfolding story in Ferguson, then yes, unsurprisingly, white Americans are clearly more apathetic toward the events in Ferguson: 54% of black respondents say they are following the story very closely, while only 25% of white respondents say the same thing:

(Here is the full Pew survey and write-up.) It’s beyond my scope here but, to understand the intricate question of how race, civil rights, and Ferguson interact, it is important to note that only 18% of Hispanic respondents said they are following the story very closely.

Sadly, these numbers aren’t surprising to me.  Apathy is a “choice” only in the technical sense.  From a commonsense standpoint, apathy is the absence of a choice to care or pay attention, and “not choosing to pay attention” is a heck of a lot easier when the events seem less proximate to yourself.

I’m not saying that it’s rational to be apathetic, particularly about something as important and extreme as the events in Ferguson, but the results today are consistent with several decades of research into political attitudes in America, including the fact that the perception of “linked fate” is far more prevalent among black Americans than among either white or Latino Americans.[1]  Linked fate is a key concept in the study of race and politics.  A recent review of this literature describes linked fate as follows:

Linked fate is generally operationalized by an index formed by the combination of two questions. First, respondents are asked: “Do you think what happens generally to Black people in this country will have something to do with what happens in your life?” If there is an affirmative response, the respondent is then asked to evaluate the degree of connectedness: “Will it affect you a lot, some, or not very much?” [2]
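To make the operationalization concrete, here is a minimal sketch of how the two-question index might be coded from survey responses. The function name, variable names, and 0–3 scoring are my own assumptions for illustration; they are not taken from the studies cited.

```python
def linked_fate_index(feels_linked, degree=None):
    """Combine the two linked-fate survey items into a simple 0-3 index.

    feels_linked: answer to "Do you think what happens generally to
        Black people in this country will have something to do with
        what happens in your life?" (True/False)
    degree: asked only if feels_linked is True -- one of
        "a lot", "some", or "not very much".
    """
    if not feels_linked:
        return 0  # no perceived linkage
    # Map the follow-up degree of connectedness onto the index.
    scores = {"not very much": 1, "some": 2, "a lot": 3}
    return scores[degree]
```

For example, a respondent who answers “yes” and then “a lot” would score 3, while a respondent who answers “no” is never asked the follow-up and scores 0.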

Moving beyond (and/or in addition to) linked fate, one can also argue that the incentives (or perhaps proximities) of black and white Americans differ with respect to law enforcement.  Setting aside a more detailed discussion of this, just note the similarity between the racial breakdown of people closely following the events in Ferguson with the analogous breakdown of interest across gay rights, voting rights, and affirmative action in 2013:

Excuses. It’s well established that white Americans generally perceive racism to be less prevalent and less important than black Americans do.  Discussing racial attitudes in the post-Civil Rights era, Brown et al. write:

In the new conventional wisdom about race, white racism is regarded as a remnant from the past because most whites no longer express bigoted attitudes or racial hatred.[3]

Simply put, the Pew survey does nothing to contradict this conclusion.  Specifically, 47% of white respondents said that “race is getting more attention than it deserves” in the coverage of the shooting of Michael Brown, while only 18% of black respondents, and only 25% of Hispanic respondents, agreed with that statement (see here for the full breakdown):

In the end, it’s important to note that the racial divide in attention being paid to Ferguson is in line with the racial differences in individuals’ beliefs that race is an important part of the narrative.  While it is impossible to gauge causality here—namely, are fewer white people paying attention to Ferguson because they think it’s not about race or are more white people saying Michael Brown’s shooting wasn’t about race because they’re not paying attention to Ferguson—both are consistent with avoidance: simply put, issues like homelessness, inequality, and discrimination are difficult to get many people to pay sustained attention to.  I’ve argued elsewhere that politics is about problem-solving, and people like to debate problems they think can be solved.  Race is arguably the most complicated problem to solve. While by no means admirable, avoidance of the issue by those who can (i.e., white people) is not surprising.[4]

Disconnect. I’m not exactly sure how “disconnect” is different from both apathy and excuses, but I’ll take a stab and interpret this as “why do white people not seem to connect the events in Ferguson with race?”  My response here, sadly, is that they kind of do—at least insofar as the attitudes here are consistent with other similar racially charged events.  For example, following the acquittal of George Zimmerman in July 2013, Pew conducted a poll gauging reactions and attention to the case.  The racial breakdowns of responses to each are very similar to those just found in the case of Ferguson, with 60% of whites thinking the issue of race was getting more attention than it deserves, and only 13% of blacks feeling that way:

Similarly, 63% of black respondents mentioned talking about the trial with friends, versus only 42% of white respondents:

Conclusion.  My own view on this is that Ferguson is most decidedly a racial issue.  This isn’t the same as saying that anyone involved is (or isn’t) racist.  Indeed, that issue, to me, misses the larger and more important point. In fact, while the racial realities of Michael Brown’s death—an unarmed black American killed by a white police officer—undoubtedly thrust race forward into the discussion, race should have been part of the discussion anyway.

That’s because the multiple dimensions of the context of Ferguson—the historical discrimination, the economic inequality, the political disparities, the unrepresentative political institutions, and the more general “special” features of local elections, to name just a few—make not only Michael Brown’s death, but also the largely and sadly ham-handed response to it, a racial issue.

So, why don’t more white people see this?  A succinct (though definitely not exculpatory) answer is inertia: attitudes, like objects, tend to stay the same until acted upon by an outside force. The reality of America is that white Americans are less likely to see their fates as being linked with those of black Americans and (perhaps because) they are less likely to face the everyday inequalities faced by far too many black Americans. In other words, and quite literally, most white Americans don’t often encounter an outside force with respect to race—definitely not in the way many black Americans do.  Whether they achieve this through apathy, excuses, and/or disconnect is a trickier question, but the correlation—the reality that race still divides Americans’ perceptions of politics and power—is sadly indisputable and robust, even in the 21st century.

____________

[1] See Dawson, Michael C. Behind the mule: Race and class in African-American politics. Princeton University Press, 1994.
[2] From Paula D. McClain, Jessica D. Johnson Carew, Eugene Walton, Jr., and Candis S. Watts. 2009. “Group Membership, Group Identity, and Group Consciousness: Measures of Racial Identity in American Politics?” Annual Review of Political Science (2009), p. 477.
[3] From Michael K. Brown, Martin Carnoy, Troy Duster, and David B. Oppenheimer. Whitewashing race: The myth of a color-blind society. University of California Press, 2003, p.36.
[4] Another, stronger, view of this is called “white privilege”: those who can avoid an issue also tend to deem it less important, without noticing that the ability to avoid the issue is not independent of race. (Thanks to Jessica Trounstine for adroitly directing me to this connection, as well as posting this telling graphic.)