If You Whip Me, The Voters Will Whup Me

Quoting Politico …

“[House Minority Leader Nancy] Pelosi said Wednesday at an event in San Francisco she does not plan to whip a Syria resolution when it comes to the House floor…”

Leaving aside the moral and strategic questions about the advisability of striking Syria (far beyond my competence), the dynamic unfolding here is intriguing from a Math of Politics standpoint.  Why would Pelosi not whip Democrats to support a president of their own party on an item of the highest profile that he has requested?

There are plenty of ready-made (and randomly-ordered) solutions: (a) Pelosi doesn’t have “the juice” to deliver and wants to cover for the potential of failure, (b) a win for the authorization measure would be at best a wash in terms of political gain for Democrats, given the divided control of Congress, (c) Pelosi is more dove than hawk, or even (d) Obama might prefer to “blame shift” nonaction onto Congress (slash potentially accentuate his own foreseen presidential unilateral action in Syria).

These are all quite viable, but—with the exception of (d)—fairly first-order.  That is, they don’t look at the bigger picture.  Very quickly, let me introduce a fifth option, (e).

(e) Democrats who vote in favor of authorizing military action in Syria would prefer—for reelection purposes—to be seen as doing so sans party pressure as far as possible.

Here’s the quick model: a moderate voter in 2014 is considering whether to vote D or R.  They have a D incumbent and are essentially choosing whether to take the “known” commodity or vote for a relatively unknown replacement from the other party.  Regardless of the voter’s position vis-a-vis military intervention in Syria, this voter (by the presumption that he or she is moderate) would prefer to reelect an incumbent whose preferences are aligned with his or her own and who would be capable of acting on them in times of (electoral) uncertainty.

It seems, at this point, that Americans are not clearly for or against military intervention in Syria. Timing and question wording each make survey responses move “too much” for anyone to be sure how a vote in favor of any resolution authorizing military action will ultimately be interpreted by “the decisive voter” in 2014 in most districts.  (Think Iraq and Afghanistan, and then think Egypt, and then think Libya, and then think Rwanda.  And then, seriously, take a moment to both hug those you love and pray for everyone in Syria and elsewhere.)

So, back to the First World Problems of Congressmen (this piece from The Onion is, as usual, apropos and adroit), let’s consider the inference that a voter would make about his or her representative upon seeing a vote for military action after observing/believing that there was party pressure (“whipping”) to conjure/cajole such votes:

“My incumbent might or might not support military action.  Alternatively, he or she might simply be predisposed to follow the party line, because his or her vote might be the result of party pressure.”

On the other hand, if there is no party pressure, [1] the voter would infer

“My incumbent was subject to no party pressure.  Accordingly, I treat his or her vote as a relatively uncontaminated signal of his or her position on [whatever the voter thinks the Syria vote represents].”

The second scenario is obviously more transparent and, accordingly, (perhaps naively) normatively appealing.  But it is strategically riskier for the incumbents.  Why would Pelosi do it, rather than providing cover for the incumbents, as is the normal presumption about the optimal approach for electorally secure leaders with respect to tough votes?
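The informational gap between the two scenarios can be made concrete with a toy Bayesian calculation (a sketch of my own; the prior and the compliance probability are invented parameters, not anything measured):

```python
def posterior_support(prior, compliance):
    """Pr[incumbent sincerely supports action | votes yes], assuming
    sincere supporters always vote yes, while non-supporters cave to
    party pressure (vote yes anyway) with probability `compliance`."""
    prob_yes = prior + (1 - prior) * compliance
    return prior / prob_yes

# With whipping, a yes vote is a muddier signal of sincere support...
whipped = posterior_support(0.5, 0.8)
# ...while with no whip, a yes vote fully reveals the position.
unwhipped = posterior_support(0.5, 0.0)
assert whipped < unwhipped
```

The voter’s worry about “party pressure” in the first quote is exactly this dilution of the signal: the higher the presumed compliance with the whip, the less a yes vote reveals.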

Well, I think a key point here is that many Democrats in the House are in “safe districts,” where their greatest “net electoral threat” based on their vote(s) on Syria comes from the left (i.e., in the primary).  Accordingly, voting in favor of military action in Syria is actually easier for incumbents if there is no party pressure: self-described “liberals” distrust Obama (and, presumably, Pelosi if she trotted out a whip to support Obama) on Syria. In a somewhat surprising sense, Pelosi applying no party pressure to Democrats may make it easier/more likely for Obama to secure votes from House Democrats than if she went public with (or didn’t deny) the claim that the party apparatus was whipping votes to support the President.

I thought this would be short: in my mind, the model is quite stripped down…sparse even. But context matters…and with that, I leave you with this.

___________________________

[1] I will not go into the signaling/auditing game that follows from thinking about Pelosi’s incentives with respect to whether to truthfully announce her intention to whip.  That’s also very interesting, but much more complicated.  And, hey, this is just a blog.  At some point, we all have to be sincere or at least presume that everybody else is. #Godel

My Research Is Kind Of Obscene…But I Knew It Only When I Blogged It.

My last post dealt with my personal conundrum about how best to deal with the problem of “I know these data are interesting, but I don’t (yet) have a theory to understand/explain/‘test with’ them.”  I got some very nice responses from colleagues and virtual friends.  Thank you.  (I have no idea why I get no comments on the blog, but from years of lurking/surfing I am actually “O.K.” with this second-best outcome.  In short, I am under no delusion here: if you read this, you probably know how to talk with me “offline,” and I truly appreciate when you do, even (or perhaps especially) when you disagree with what I post.)

All that said, I thought it useful to delve a little more into the problem I face(d) here.  (We’ll come back at the end to why I added a (d) to that.)

Simply put, the data I have represent how policy is made at the federal level in the United States.  By “represent how policy is made at the federal level,” I mean “are federal policy, per se.” My questions are multiple and somewhat in-the-weeds, but for the purpose of the post, I’ll focus on the question: “why do some issues get dealt with at a given point in time and others do not?”

The most basic theoretical problem I have with this enterprise is one of measurement.  (It’s the most basic one I have because it is the most basic theoretical problem in empirical analysis, full stop.)

To make this concrete: consider the notions of “issue” and “get dealt with.”  Suppose, for simplicity, that we take a law duly enacted under Article I, Section 7 of the US Constitution.  What are the issues that law deals with?  Now, note that there are many practical ways to answer this question, but all of them—to my knowledge—are based on one of three approaches:

  1. Human coding: (very) smart and fair individuals (say) read the bill and accompanying contextual data (debates, press coverage, etc.) and assign the law to a topic.
  2. Ascription based on source: for example, if the bill was dealt with by the Senate Foreign Relations Committee, then it must have at least partially dealt with foreign relations, or
  3. Automated (or semi-automated) text processing approaches: essentially, very fast computers cluster bills/laws with similar words and/or semantic structures.

The two main problems (for my purposes) with approaches in class (1) are that human coding is (a) slow/expensive (implying that most preexisting codings are subject to selection effects due to the natural desire to maximize speed/minimize cost—e.g., many researchers focus largely or solely on bills that were enacted or at least got to the floor of one chamber) and (b) inevitably designed to test preexisting theories or match preexisting ancillary data sources.

The main problem with approaches in class (2) is that I am interested (for example) in how institutions (i.e., sources) are aligned vis-a-vis what the human coders would call the “issues” of the true (i.e., latent) policy space.  Thus, to use the institutions that generate policy instruments as the basis for coding the issues dealt with by those policy instruments is very close to tautological for my purposes.

So, I was/am playing with the NKOTB of approaches, those in class (3).  The progress I have made there is classically ironic in the sense that the more I learned/discovered, the less I knew.  Put another way, I increasingly realized that the validity of any conclusions I could reach would necessarily be predicated on the assumptions undergirding those approaches.  These assumptions, to put it mildly, are orthogonal to traditional methodologically individualistic social science.  (For example, what is the social science justification for viewing documents as “bags of words,” or for using “term frequency-inverse document frequency”—look it up—as a measure of the relative importance of a law in identifying the latent issues of the 112th Congress?  [crickets])
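To be fair to the “bags of words” crowd, the assumption is at least easy to state.  A minimal stdlib sketch (the function and the toy “bill” texts are mine, purely for illustration):

```python
import math
from collections import Counter

def tfidf(docs):
    """Bag-of-words scoring: weight each term in each document by its
    term frequency times log(N / document frequency)."""
    n = len(docs)
    bags = [Counter(doc.lower().split()) for doc in docs]
    df = Counter()  # in how many documents each term appears
    for bag in bags:
        df.update(bag.keys())
    return [{t: tf * math.log(n / df[t]) for t, tf in bag.items()}
            for bag in bags]

bills = [
    "tariff duties on imported goods",
    "appropriations for military goods",
    "tariff appropriations for imported military goods",
]
scores = tfidf(bills)
# A term appearing in every bill ("goods") carries zero weight;
# terms unique to one bill ("duties") dominate.
assert scores[0]["goods"] == 0.0
assert scores[0]["duties"] > scores[0]["tariff"]
```

Note what vanished in the process: word order, sections, cross-references, and, arguably, everything a methodologically individualistic account of legislating would care about.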

This is not an attack on any of these methods—I am so very interested in these questions, I’m happy to grasp at straws if need be, but I’d rather find a lifeboat.

So, again, I return to the question: how do I measure (i.e., discriminate between) what voters/congressmen/judges/presidents would call topics/issues using the very instruments from which I will then derive face-melting models demonstrating the incentives of voters/congressmen/judges/presidents to conflate/combine/obfuscate those topics when drafting/amending/interpreting those same instruments?  Wait for it…you knew it was coming…it’s a top-down version of the Gibbard-Satterthwaite theorem.

Thus, before concluding, I will pose “the big question”: is it impossible for us to actually gauge the match between politics in practice and the latent structure of policy?  In other words, when we talk about “strange bedfellows” among political actors, we mean that they are typically in opposition but, in the case in question, are allied.  How can we detect the analogue with political issues: how can we discern that a bill contains both apples and oranges, if one took/had the time to read it? [1]  …Still thinking about that.

To conclude, let me return quickly to why I implied that the problem is no longer pertinent (“face(d)”).  Well, in short order, my previous blog post cleared my head and forced me to think about the problem from a third-person version of my own perspective.  As a result, I have had a (truly) very fun 36 hours or so of active modeling: change a word or two in a Google search here and there, and…SHAZAM!…I have plenty of new ideas about what could be the right models for the problem.  And, as I said in my last post, modeling is truly what I do.  So, stay tuned…I really think there’s some cool stuff that’s about to drop.

With that, I leave you with this.

_________________________

Footnotes:

[1] There is a political science term for this, due to William Riker: heresthetics. In somewhat ironic self-promotion terms, Scott Moser, Maggie Penn, and I have published on the topic in the Journal of Theoretical Politics.

Which Comes First, Theory or Data?

It’s kind of a trick question, exactly the type of gambit that drives both research and blog posts. (The answer, it seems, is “both should magically emerge simultaneously.”)

Anyway…I’ve been in a bit of a funk lately, and not the twerking kind.  Both the seasonal goings-on and my mind doing laps on a vexing problem have left me a bit, ummm, unmotivated to post.  Without further delay, let’s make petroleum jelly out of petroleum…here’s my intellectual/professional quandary/blogging impediment in the form of a blog post.

1. I have a lot of data (BIG data) that I just know is important.  Basically, it is (a big part of) the substance of federal policymaking.

2. I don’t know any theories that really speak to it.

3. Well, I have some, but they are intractable as presently formulated, and I don’t know how best to simplify them to get results.  I have strong hunches about how I could do so, but I’d like to choose the simplifications that are most appropriate for the questions I want to answer.  (In a nutshell, the questions I want to answer are “why are some issues raised and acted upon while others are set aside?” and “how are people and resources deployed across multiple issues at a given point in time?”)

So…what to do?  (If you think about it for a moment, my conundrum is very meta.)

From a “math of math of politics” angle, the real rub is exemplified by the astute question raised by one of my colleagues when I described something I was doing/wanted to do with this big data.

“But what’s the theory?”

I have been told by other colleagues that theory is not necessary—though highly desirable—for empirical social science.  I fundamentally and, if I say so myself, quite correctly disagree with this assertion on theoretical grounds.  (See what I did there?)  But the practical rectitude of my assertion that there is no empirical analysis without some kind of theory—in terms of interpreting, publishing, and communicating empirical analysis—is also illustrated by my (empirically focused) colleague’s question.

More “math of math of politics” is raised by the fact that my absolute advantage in terms of scholarly production is in theory (really, modeling), rather than “pure” empirical analysis.  So, maybe I should take a hint and take a leap, “doing the models” that I can do, and letting others sort them out.  After all, I’m tenured, and therefore have the freedom to take the time to do this—the bulk of “the time” in this case (in my expectation, at least) will be navigating what I expect will be a bumpy road to publication and communication of the models.  I foresee plenty of (in-the-weeds) speedbumps in pursuing a “pure theory” approach to the questions I am interested in.

The irony, of course, is that one might think that tenure is at least partly there to motivate me to “take risks” in the sense of really trying to do things right.  In this case, the right path in terms of getting people to listen to the ultimate analysis might involve developing models that, at least for now, cannot be motivated by empirical verisimilitude.

So, what to do, I ask you.  Most of you are social scientists, and I am honestly befuddled by the proper way to aggregate/trade-off the two competing intellectual incentives: should I patiently, doggedly, and perhaps inefficiently chase the (as now unknown) “right analysis,” or do the analysis that will be more readily heard and may accordingly grease the wheels for ultimate production and communication of the right analysis?

With that, I leave you with this.

There is no Networking without “two” and “work” or, Incentives & Smelt at APSA!

As Labor Day weekend approaches, scores of scholars are steeling themselves for the “networking experience” that is the annual meeting of the American Political Science Association.  Of course, the main value of networking is establishing relationships.  For example, meeting new people can lead to coauthorships, useful information about grants/jobs/conferences, invitations to give talks, and so forth.

Like it or not, networking is important: to be truly successful in social science (and any academic or creative field), your ideas have to reach and influence others, and the constraints of time and attention lead to a variant on “the squeaky wheel gets the grease” in this, and all, professions.  Networking both exposes others to your ideas and, in the best case, helps you generate (sometimes, but not always, in overt cooperation with others) new ones.

All that said, I wanted to make three quick points about what this aspect of the role of networking implies, from a strategic (but not cynical) standpoint, about how one should network.

1. To the degree that one wants to create a relationship through networking, it is better, ceteris paribus, that the relationship have a longer expected duration.  Nobody washes a rented car (see: Breaking Bad), and in terms of dyadic relationships, the length of the relationship is bounded above by the shorter of the two scholars’, ahem, “time horizons.”

2. To the degree that one wants to generate, produce, and publish influential ideas, it is better, ceteris paribus, to create relationships with those who have stronger extrinsic incentives (e.g., getting a job, getting tenure, being promoted, etc.) to “get stuff out the door” than with those who have weaker ones.

3. To the degree that one wants to avoid conflicts of interest in terms of shirking, credit-claiming, and so forth, it is better (as in the repeated prisoners’ dilemma) that both parties have long time horizons so as to increase the (both intrinsic and extrinsic) salience of potential future punishment/comeuppance for transgressions.

All three of these factors suggest that, if you’re a young scholar considering who to spend time with in Chicago in two weeks, don’t forget to meet other young scholars.  Share your ideas, buy a round of smelt, and remember why you’re doing this.  Similarly, it is also important to remember the famous line from Seinfeld:

When you’re in your thirties it’s very hard to make a new friend. Whatever the group is that you’ve got now, that’s who you’re going with. You’re not interviewing, you’re not looking at any new people, you’re not interested in seeing any applications. They don’t know the places. They don’t know the food. They don’t know the activities. If I meet a guy in a club, in the gym or someplace, I’m sure you’re a very nice person, you seem to have a lot of potential, but we’re just not hiring right now.

With that, I leave you with this.

DON’T PANIC. Theory and Empirics Are Both Alive & Well…at least in political science.

Paul Krugman recently wrote a post about how/why formal theory has fallen behind empirical work in prestige/prominence in economics.  I agree with Krugman that the decline (if one thinks it has occurred) is not due to behavioral social science (Kahneman & Tversky’s voluminous body of work being the most notable of this field).  Krugman argues that this can’t be the cause because people had long known that the axioms of decision-making that undergird much of formal theory in the social sciences do not hold in practice:

“…anyone sensible had long known that the axioms of rational choice didn’t hold in the real world, and those who didn’t care weren’t going to be persuaded by one man’s work.”

Well, I agree with this statement (for example, Adam Smith was famously well aware of this; see The Theory of Moral Sentiments).  But I disagree that this is why behavioral economics did not “cause” the decline of theory.  Mostly, this is because behavioral economics (and behavioral economists) have been looking for a theory to unify their disparate findings.  For example, Kahneman & Tversky are arguably most famous for prospect theory. That is, Kahneman & Tversky were not merely throwing hand grenades—they were at least partially occupied with the classical task of inductive theorizing.

I don’t have any dog in the fight about the relative position of theory and empirics in economics.  And by that, I mean, I am not even sure that dogs are involved in the skirmish, or even that there is a skirmish worth keeping tabs on.  And, in many ways, I’m an economist.  Well, I am an economist to those who distrust economists and “just maybe an economist” to economists.  (See what I did there?)

In political science, which I proudly call my home, theory is definitely not “dead” (Krugman’s title is “What Killed Theory?”).  Rather, I like to think that, most days of the week, theory and empirics reside quite amicably side-by-side in our big tent of a discipline.  Sure, theorists make jokes about empiricists and empiricists make (typically funnier) jokes about theorists, but this is simply incentive compatibility: every empiricist chose not to be a theorist, and every theorist chose not to be an empiricist.  (Of course, many political scientists are a little bit of both, but rarely at the same time, if only because the jokes become oh so much more poignant.)  As a theorist, I (honestly) love empirical work—particularly descriptive and qualitative work that gives me fodder for new models, but also “causal” findings and quantitative conclusions that I can “get all contrarian on.”[1]

What has happened in political science during the last 20 years is a decline (in terms of number of articles published) of what one might call “pure,” or “technical,” theory.  In a nutshell, I—and others—think of social science theory as being usefully broken into two categories: pure and applied.  Pure theory tends to focus on the technical aspects of the model and accordingly asks more “general” questions.  The “purest” theory is inherently “untestable” outside of the theory itself: Arrow’s theorem, the Gibbard-Satterthwaite theorem, Nash’s theorem, May’s theorem, etc. all reach very general conclusions about a theoretical construct (Arrow’s theorem describes all aggregation rules (for 3 or more alternatives), Gibbard-Satterthwaite describes all choice functions (for 3 or more alternatives), Nash’s theorem describes all finite games, and May’s theorem describes all social choice functions between two alternatives).  This type of theory is hard in a specific sense: useful/explicable results are notoriously hard to obtain.  A fundamental reality of theorizing is that the expected number of results one can obtain from a model is proportional to the number of assumptions one makes.  Without belaboring the point, this difficulty is part of the reason such theory has become less prevalent in political science.  (However, as a “shameless” plug, I will note that Elizabeth Maggie Penn, Sean Gailmard, and I recently published just such a theory, entitled “Manipulation and Single-Peakedness: A General Result,” in the American Journal of Political Science (ungated version here).)

Applied theory, on the other hand, involves making more assumptions and, as a price, exerting the effort to motivate the model as descriptive or illustrative of something that either does or “could” happen.  I’ve talked about my view of the proper role of theory before.  I’ll keep it brief here and say that this type of theorizing is very much alive in political science.  Because “applied” sounds pejorative, I like to refer to this practice as “modeling,” which sounds sexier (and the Brits spell it “modelling,” possibly because their models tend to involve “maths” rather than “math”).

Relevant to Krugman’s point, at least as it might be extended to political science, modern political models include some that are “behavioral” in spirit (bounded rationality, etc.) and some that are more classical (common knowledge of the game, rationality, etc.).  To me at least, that’s a distinction without a difference: the quality of a theory/model is per se independent of its assumptions.  Rather, the quality is based on what it teaches me or makes me see in new ways.  This is why “rational actor” models are useful: for example, some rational actor models can explain apparently irrational behavior.  This is important for those who see the “irrational” behavior in question and start to draw conclusions about policy and institutions based on their potentially flawed inference that people are irrational per se.  Similarly, behavioral models can generate predictions and possibility results that outperform (and/or are more easily understood than) rational actor models of the same phenomenon.  I like to think of rational actor models as being like the Cantor Set (or perhaps the Banach-Tarski paradox) and behavioral models as being like Taylor polynomials.

In other words, and regardless of whether you are comparing behavioral and rational actor models or pure and applied theory, neither is better or worse than the other from an a priori perspective, just like it is nonsense to assert that a hammer is “better” than a screwdriver: it depends on whether you need to smack something or twist it.  (AND WHAT IF YOU NEED TO DO BOTH? A.K.A. “A Theory of Revise & Resubmits.”)

As a final note, it is important to recognize (as we are seeing in the “big data” revolution—inter alia, here and here) that the process of “…theory with empirics with theory…” is one of complements, not substitutes: the value of new theory/data increases as data/theory gets ahead of it, and conversely, the value of additional data/theories declines as theory/data lags behind.  Krugman sort of tells this story in his post, but doesn’t explicitly extend it to the conclusion: theory and empirics have tended, and will probably continue, to cycle “in and out of fashion.”  I am fortunate that, at least right now, the big tent of political science includes active work in both areas.

With that, I leave you with this.

_________________________________________

Footnotes.

[1] I used to do empirical work, until a court (of my peers) ordered me, in the interests of both society and data everywhere, to cease and desist.


“Strength & Numbers”: Is a Weak Argument Better Than A Strong One?

Thanks to Kevin Collins, I saw this forthcoming article (described succinctly here) by Omair Akhtar, David Paunesku, and Zakary L. Tormala.  In a nutshell, the article, entitled “The Ironic Effect of Argument Strength on Supportive Advocacy,” reports four studies that suggest “that under some conditions…in particular, presenting weak rather than strong arguments might stimulate greater advocacy and action.”

This caught my attention because I think a lot about information and, in particular lately, “advice” in politics.  One of the central questions (in my mind at least) in political interactions is when communication can be credible/persuasive.  I was additionally attracted, given my contrarian nature, to the article because of statements such as this:

[These findings] suggest, counterintuitively, that it might sometimes behoove advocacy groups to expose their supporters to weak arguments from others—especially if those supporters are initially uncertain about their attitudes or about their ability to make the case for them. (p. 11)

Is this actually counterintuitive? I would argue, unsurprisingly since I’m writing this post, “no.”  Why not?

I have two simple models that indicate two different intuitions for this finding, both in a “collective action” tradition.  Beyond sharing a mathematical instantiation, the motivations behind both models also share the fact that the article’s findings/results are largely confined to individuals who already supported the position being advocated.  For example, weak “pro-Obama” arguments were more motivating than strong “pro-Obama” arguments among individuals who supported Obama prior to exposure to the arguments. (The effect of argument strength was insignificant, and actually in the opposite direction, among those who did not support Obama prior to exposure.)

My focus on collective action in this post is justified because the four studies reported in the paper each examined advocacy for a collective choice (either the election of a candidate or the adoption of a public policy). Thus, in all studies, advocacy can potentially have an instrumental purpose: securing the individual’s desired collective choice.  Accordingly, suppose that an individual i is predisposed to support/vote for President Obama.  To keep it simple, suppose that advocacy is costly—if advocacy is less likely to affect the outcome of the election, then individual i will be less likely to perceive advocacy as being “worth it.”

Collective Action: Complementary Strength and Numbers. The question is how individual i should react to hearing a pro-Obama argument from a “random voter.”  If the argument is weak, should individual i—who, remember, already supports Obama—view advocacy as more or less likely to affect the outcome?

Well, suppose for simplicity that the election outcome is perceived to be a function like this:

f(q,n) \equiv \Pr[\rm{Obama\; Wins}] = \frac{ q * n }{1+ q * n},

where q>0 is the quality of the best argument that can be made for Obama (a content-based persuasive effect), and n>0 is the number of advocates for Obama (a “sample size”-based persuasive effect).  Then, being a bit sloppy and using derivatives (treating n as large, so that it is approximately continuous), the marginal value of advocacy is

\frac{ q }{1+ q * n} (1-f(q,n))

and, more importantly, the marginal effect of quality on the marginal value of advocacy (the “cross-partial”) is

\frac{2 n^2 q^2}{(n q+1)^3}-\frac{3 n q}{(n q+1)^2}+\frac{1}{n q+1}
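A quick sanity check on this cross-partial (my algebra; not in the original post): combining the three terms over the common denominator (nq+1)^3 gives

\frac{2 n^2 q^2 - 3 n q (n q + 1) + (n q + 1)^2}{(n q + 1)^3} = \frac{1 - n q}{(n q + 1)^3},

which is negative precisely when nq > 1, and in particular whenever n and q are both larger than 1.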

The key point is that, for a reasonable range of parameters (specifically, in this case, if n and q are both larger than 1), increasing the perceived quality of the best argument that can be made for Obama reduces the marginal instrumental value of advocacy for an Obama supporter.  Note that the perceived quality of the best argument that can be made for Obama is a (weakly) increasing function of the observed quality of any pro-Obama argument that one is presented with.  In other words, observing a higher quality pro-Obama argument should lower an Obama supporter’s motivation to engage in advocacy.
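If derivatives feel sloppy, the comparative static also survives a brute numerical check.  A small sketch (the function names are mine), assuming the contest function above:

```python
def f(q, n):
    """Perceived Pr[Obama wins]: f(q, n) = q*n / (1 + q*n)."""
    return q * n / (1 + q * n)

def marginal_value(q, n):
    """Marginal (instrumental) value of one more advocate: df/dn."""
    return q / (1 + q * n) ** 2  # equals (q / (1 + q*n)) * (1 - f(q, n))

# Once q*n > 1, a better best-available argument lowers the marginal
# value of advocating; below that threshold the effect reverses.
n = 10.0
qualities = [1.0, 1.5, 2.0, 3.0]
values = [marginal_value(q, n) for q in qualities]
assert all(a > b for a, b in zip(values, values[1:]))  # decreasing in q
```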

Collective Action: Increasing Persuasive Strength. For the second model, let’s pull “numbers of advocates” out and, instead, let’s modify the election outcome model as follows:

f(q) \equiv \Pr[\rm{Obama\; Wins}] = \frac{ q}{1+ q},

where q>0 is the quality of the best argument that is made for Obama.  Now, add a little bit of heterogeneity.  Suppose that a(i) is the quality of the best argument that individual i “has” in favor of Obama.  This, at least initially, is private information to individual i, and suppose it is distributed according to a cumulative distribution function G.  Suppose for simplicity that the argument to which individual i is exposed is the best he or she has yet seen (this isn’t necessary, but allows us to get to the point faster), and denote this by Q.  Furthermore, suppose that individual i will find it worthwhile to advocate (i.e., spread/share his or her own pro-Obama arguments) if a(i)>Q. (This is similar to assuming that advocacy is costless, but this is not important for the conclusion.) Then what is the probability that individual i will find it strictly worthwhile to advocate after observing an argument of quality Q?  Well, it is simply

1-G(Q)

Since G is a cumulative distribution function, it is a (weakly) increasing function of Q.  Thus, 1-G(Q) is a (weakly) decreasing function of Q.  Again, observing a higher quality pro-Obama argument should lower an Obama supporter’s motivation to engage in advocacy.
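To fix ideas, take G to be uniform on [0, 1] (my choice here; the argument above leaves G general):

```python
def advocacy_probability(Q, G):
    """Pr[individual i advocates | observed argument quality Q] = 1 - G(Q):
    i speaks up only if his or her own best argument a(i) beats Q."""
    return 1.0 - G(Q)

# Uniform-on-[0, 1] private argument quality: G(Q) = Q, clamped to [0, 1].
G_uniform = lambda Q: min(max(Q, 0.0), 1.0)

# The stronger the argument you are shown, the less likely you can top it.
assert advocacy_probability(0.8, G_uniform) < advocacy_probability(0.3, G_uniform)
```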

What’s the point?  Well, first, I think that information is a very interesting and important topic in politics—that’s “why” I wrote this. But, more specifically, it is ambiguous how to interpret the subsequent lobbying/advocacy behaviors of individuals with respect to varying qualities of information/arguments offered by others when individuals expect that the efficacy of their lobbying/advocacy efforts is itself a function of the quality of the argument.  In these examples, in other words, individuals might not be learning just about (say) Obama, but also about how effective their own advocacy efforts will be.  If this is the case, I humbly submit that the findings are not at all counterintuitive.

With that, I leave you with this.

Want It Now? Oh, We’ll Give It To You…Later

Did the Senate ironically kill (for the time being) an immigration deal by passing an immigration bill?  Arguably, yes.

Control of the Senate is up in the air in the 2014 elections. On the other hand, the GOP seems pretty likely to maintain its majority in the House. If the GOP wins control of the Senate and holds onto the House majority in 2014, then the GOP can control the finer points of an immigration bill in the 114th Congress.  The only practical impediments standing in the way of enacting the bill would be

  1. A filibuster by Democrats in the Senate and
  2. A veto by President Obama.

President Obama has come out strongly in favor of immigration reform, so let’s set that aside. The more interesting angle is the first one.  I wrote recently about the nuclear option, though it seems like we’re now at no worse than Defcon 2. Clearly, if the nuclear option is pulled in its strongest form and legislative filibusters are effectively neutered, then (1) would no longer present an impediment.  So, let’s presume that “the button” is not pushed.

It seems incredibly unlikely that the GOP will hold 60 seats in the Senate in 2015, so the Democrats could stand together and block an immigration bill that they “did not like.”  But, is this likely now?

I argue no, for two reasons.  First, every Democratic Senator voted in favor of S. 744, the Senate’s immigration bill. For some, this was a tough vote, at least in electoral terms.  Thus, these Democrats have already sent a high-profile and potentially costly signal that immigration reform is “important” and, crucially, for the Senators who viewed it as “a tough vote,” the sensible implication is that they want their skeptical constituents to believe that immigration reform is important for policy reasons (not partisan ones).

Accordingly, imagine that the GOP presents these Democratic Senators with a modified immigration bill, similar in many respects to S. 744.  Voting against such a bill (not to mention pursuing what might end up being high-profile efforts to block it) would arguably be seen as partisan obstructionism.  To be succinct, such efforts are not typically viewed favorably by exactly those voters who were/are skeptical of a Democratic incumbent: these are voters who tend to vote Republican but presumably might give a Democrat the benefit of the doubt if the incumbent is perceived to be competent, faithful to the state’s/the nation’s interests, etc.

But, remember, these incumbents will have already claimed that immigration reform is important and, arguably, faithful to their states’/the nation’s interests.  In a nutshell, the worries for the Democrats right now about immigration reform are actually focused on those Democratic Senators facing reelection in 2016.  If these members can’t stand the prospect of being seen as overly partisan (or, perhaps, as a flip-flopper), then the GOP can easily count on being able to get enough Democratic crossovers to reach 60 votes if they control the Senate’s agenda through holding a majority of its seats in the 114th Congress.

Finally, Boehner and the House Republicans are probably thinking about exactly this possibility, given the meaningful chance that the GOP might win control of the Senate in 2014.  Accordingly, in conjunction with the apparent security of their majority in the House, the House Republicans have little to no incentive to consider any immigration bill this Congress, precisely because the Senate Democrats passed one this Congress.  Note, too, that many Senate Republicans also voted for the immigration bill, which merely strengthens the argument.

With that, I leave you with this.

I Would Manipulate It If It Weren’t So Duggan: The Gibbardish of Measurement

A fundamental consideration in decision- and policy-making is aggregation of competing/complementary goals.  For example, consider the current debate about how to measure when the “border is secure” with respect to US immigration reform.  (A nice, though short, piece alluding to these issues is here.)

A recent GAO report discusses the state of border security, the variety of resources employed, and the panoply of challenges associated with the rather succinctly titled policy area known as “border security.”  An even more on-point report was issued in February of this year.

Let’s consider the problem of determining when “the border is secure.”  This is a complicated problem for a lot of reasons, and I will focus on only one here.  Specifically, the question is equivalent to determining the “winners” from a set of potential outcomes.

In particular, there are a lot of potential worlds that could follow from (say) a “border surge.”  These worlds are distinguished by measurement, a cornerstone of social science and governance. For example, consider the following three measures of “border security”:

  1. Amount of illegal firearms brought across the border,
  2. Amount of illegal cocaine brought across the border, and
  3. Number of (new) illegal aliens in the United States.

(Note that there are a lot of ways to make this even more interesting, in terms of the strategic incentives of “the act of measurement.”  For example, if you want to believe that the level of illegal firearms brought across the border is low, an arguable way to do this is to stop “looking for firearms.”  But I will leave these incentive problems to the side and focus on the incentive to misreport/massage “sincerely collected” data/measurements. Furthermore, the astute reader will note that I could pull the same rabbit out of the same hat with only two measures.)

Before continuing, note that the selection of these measurements is left to the Secretary of the Department of Homeland Security (in consultation with the Attorney General and the Secretary of Defense) who is called upon in the bill to submit to Congress a “Comprehensive Southern Border Security Strategy,” which “shall specify the priorities that must be met” for the border to be deemed secure. (Sec. 5 of S.744, the immigration bill as passed by the Senate.)

In general, of course, there are multiple ways to indirectly measure—and no direct way to measure—whether the border is “secure” (i.e., the notion of a “secure border” is itself one of measurement), and these measures must be aggregated/combined in some fashion to reach a conclusion.

On the one hand, it might seem like this is a simple problem: after all, for all intents and purposes, it is a binary one: the border is secure or it is not. End of story.  AMIRITE?  No, that’s not true, because the issue here is that there are multiple potential programs to choose from.

To see this, suppose that there are three possible programs, plans A, B, and C.

Now, think about how you will/should measure if a program will result in a “secure border.”

The question at hand is how one compares the different programs.  So, to make the problem meaningful, suppose that at least one of the programs will be deemed successful and at least one will be deemed unsuccessful (otherwise the measurement is meaningless).

The Gibbard-Satterthwaite (and, even more accurately, the Duggan-Schwartz) Theorem implies that no such system can guarantee that one elicits truthful reports of the measurements of all dimensions (guns, drugs, illegal aliens) in all situations, even if the measurements are infinitely precise and reliable.

Why is this?  Well, in a nutshell, in order to elicit truthful reports of every dimension of interest (i.e., guns, drugs, and illegal aliens), the system must be increasing in each of these measures.  However, this is at odds with making trade-offs.  In the context of this example, there are programs A and B such that A decreases guns but has no other effect, and B decreases drugs but has no other effect.  In this case, which program do you choose?  Putting a bunch of “reduction” in the black box, one must “eventually” choose between A and B, at least in some situation, because otherwise the measurement of guns-reduction and drugs-reduction become meaningless.

So, suppose that A decreases guns by “a little” but B decreases drugs by “a lot.” How do you compare a handgun to a pound of China White? Choose a ratio, and then imagine that plan B were just a peppercorn shy of the “cutpoint” in terms of the reduction of drugs relative to the decrease in guns…but (say) A is $100 billion more expensive than B…what would you report about the effectiveness of B?

Well, you’d overreport the effectiveness of B (or underreport the effectiveness of A, possibly). AMIRITE?  The measures are inherently incomparable until you choose how to make them comparable.
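To make the misreporting incentive concrete, here is a toy sketch with entirely made-up numbers: a hypothetical aggregation rule that trades guns against drugs at a chosen ratio, and a reporter who prefers the cheaper plan.

```python
# A toy version of the misreporting incentive above. Assume (hypothetically)
# that the chosen aggregation rule deems a program "successful" when
#   guns_reduced + RATIO * drugs_reduced >= THRESHOLD,
# with a ratio of 1 gun per pound of drugs and a threshold of 10.
RATIO, THRESHOLD = 1.0, 10.0

def deemed_successful(guns_reduced, drugs_reduced):
    """The (hypothetical) aggregation rule applied to reported measurements."""
    return guns_reduced + RATIO * drugs_reduced >= THRESHOLD

# True measurements: plan A just clears the bar, while plan B is
# "a peppercorn shy" of the cutpoint -- but (say) A costs $100 billion more.
plan_A_true = (10.0, 0.0)   # passes if reported truthfully
plan_B_true = (0.0, 9.99)   # fails if reported truthfully

# A reporter who prefers the cheaper plan B need only nudge its drug
# reduction upward: truthful reporting is not incentive-compatible here.
plan_B_reported = (0.0, 10.01)

print(deemed_successful(*plan_A_true))      # A passes
print(deemed_successful(*plan_B_true))      # B fails when reported truthfully
print(deemed_successful(*plan_B_reported))  # B passes when nudged upward
```

The specific rule and numbers are invented, but any rule that must eventually trade one measure against another creates a cutpoint of this kind, and with it the incentive to shade reports near the cutpoint.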

So…what does this mean?  Well, first, that governance is hard—and perpetually so.  But, more specifically to the “mathofpolitics,” it clearly and unquestionably indicates that theory must come before (practical or theoretical) empirics.  In a nutshell: every non-trivial decision system is astonishingly susceptible to measurement issues, even when measurement is not actually a practical problem. For the skeptical among those of you still reading, note that I only “played with” elicitation/reporting—I am happy to assume away for the moment the very real and fun issues of practical measurement.

With that, I leave you with this.

 

A Byrd in the Hand, or the 3 R’s of the Senate: Reid, Rules, & Retribution

Forceful confrontation to a threat to filibuster is undoubtedly the antidote to the malady.
–Sen. Robert Byrd (D, WV)

Filibuster reform in the US Senate has once again begun to attract attention.  In a nutshell, Majority Leader Harry Reid (D, NV) is—ahem—upset that—in his opinion, at least—Republican Senators are unreasonably holding up executive branch nominations out of either animus towards the Obama Administration, hostility to the missions of the government agencies in question, or both.  As a result, Reid is contemplating “the nuclear option,” in which the Democrats and Vice President Joe Biden, as President of the Senate, would use the essentially majoritarian character of the Senate’s rules to clarify that the Senate is—particularly when a majority is ticked off—an essentially majoritarian body in which 51 votes wins.

Senate Republicans, who hold a minority of seats, are understandably upset about the possibility of “the bomb being dropped” and are threatening retribution if Reid goes nuclear.  I will not describe the procedural details of the nuclear option, which are easily found for those who, like me, enjoy parliamentary skullduggery. Instead, I will focus on the “mathofpolitics” of Reid’s situation.

Let’s set up the problem in a succinct fashion.  Going nuclear, Reid and the Senate Democrats can at least ensure an up-or-down vote on nominees.  While it isn’t clear (feel encouraged to clarify this for/update me in the comments or “offline/online” by emailing me) exactly how “big” of a nuclear bomb Reid will/can drop in the sense of whether it would guarantee votes on all executive nominations, including judicial, or just those to executive agencies (or some other subset), I’ll keep it simple and just presume it’ll apply to all nominations, but not legislation.

Presumably, being able to get a timely up or down vote on nominations will be good for Reid and the Democrats right now, because President Obama is a Democrat.  So, there’s the easiest argument for why Reid should “go nuclear.”

However, there are at least two frequently forwarded arguments for why Reid should not go nuclear: the “scorched earth” argument and the “uncertain majority” argument.  The scorched earth argument goes as follows: if Reid goes nuclear and ensures votes on nominations, then Senate Republicans will find new ways to halt business, and retaliate by slowing the Senate down even more.  The uncertain majority argument, on the other hand, points out that the Democrats will not hold the majority forever, and their procedural victory will eventually get used against their interests by a subsequent Republican majority. I’ll consider these arguments in turn, and then summarize a “third way” that Reid might go.

As Ezra Klein points out, the scorched earth argument is (arguably, at least right now) less powerful than one might presume.  In a nutshell, this is because one might argue that the Republicans are currently “blocking everything” anyway.  (This is shorthand—as Minority Leader Mitch McConnell (R, KY) and others have pointed out, the Senate Republicans are not blocking everything, or even all nominations.)  Accordingly, from a game theoretic perspective, one might argue that this argument should not have much impact on the Democrats’ decision to go nuclear or not.  Indeed, it suggests a rationale for why Reid and the Democrats should at least threaten to go nuclear: if the Republicans recognize that the scorched earth argument does not have much pull on the Democrats, then they will take such a threat more seriously and, to the degree Republicans do not want the nuclear option used, they will have an incentive to offer concessions to avoid its use.  (See the great series of posts essentially about this dynamic by Jonathan Bernstein and Sarah Binder: here’s Bernstein’s response, and Binder’s response.)

Perhaps as a result of the limitations of the scorched earth argument in the current stand-off, the uncertain majority argument has been quickly brought forward by Senate Republicans.  Indeed, while the scorched earth argument has been publicly forwarded, too, it has essentially been quickly replaced/backed up by the uncertain majority argument. For this reason, as well as the fact that the excellent exchange between Bernstein & Binder essentially focuses on the likelihood/credibility of the scorched earth response, I will move on to the uncertain majority argument.

First, the uncertain majority argument is not dispositive. (Note that the scorched earth argument, to the degree that the minority can credibly implement it, is potentially dispositive in the sense of its irony: vote to speed things up and instead slow things down.)  This is because a bird in the hand is arguably better than two in the bush.  Reid’s experience with the Senate Republicans may have shown him that there may not even be one “in the bush.”  The real worry here is that, to the degree that Reid and the Democrats are actually tempted by the nuclear option, the GOP will presumably also be tempted by it when it gains the majority.

Let’s think about this for a second.  Thinking through the strategic situation a little offers some insight into the relevant factors all Senators should be thinking about.

Suppose first that the nuclear option is “popular” in the sense that the public will approve (or at least not disapprove) of its use.  Well, in this case, the majority party should realize that, if they go nuclear, then—ceteris paribus—they are more likely to retain the majority.  More subtly, if we presume that this popularity is likely to hold into the near future, the majority should realize that if the minority party gains the majority in the near future, the new majority party will face a similar situation and, accordingly, the current minority party is likely to go nuclear at that point.  Both scenarios suggest that the Democrats should go nuclear: it would be popular and the minority party would do it if they gain the majority in the future.

So, under what conditions would the nuclear option not be a good choice?  Well, it boils down to two questions:

  1. Would “going nuclear” be unpopular/electorally costly? (Republican Senators are arguing that it would be.  Extra credit: given that we have a two-party system, why should one be at least a little skeptical of this statement?)
  2. If one does not go nuclear now, will “going nuclear” be popular in the future?
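For the parliamentary accountants out there, the two questions above can be caricatured as a back-of-the-envelope payoff comparison. All numbers below are invented for illustration; the point is only the structure of the trade-off.

```python
# A hedged sketch of the "go nuclear now vs. wait" decision, with
# entirely made-up payoffs. Going nuclear yields an immediate policy
# gain but a possible electoral cost (question 1); waiting risks the
# other party using the option first (question 2).

def value_of_going_nuclear(policy_gain, electoral_cost):
    """Payoff from dropping the bomb now: a bird in the hand, minus the bill."""
    return policy_gain - electoral_cost

def value_of_waiting(p_opponent_goes_nuclear_later, future_policy_loss):
    """Payoff from restraint: with some probability, the future majority
    drops the bomb on you instead."""
    return -p_opponent_goes_nuclear_later * future_policy_loss

# Illustrative numbers only: if going nuclear is not too unpopular
# (question 1) and the other side would likely do it anyway upon
# gaining the majority (question 2), pulling the trigger now dominates.
now = value_of_going_nuclear(policy_gain=5, electoral_cost=2)
later = value_of_waiting(p_opponent_goes_nuclear_later=0.8, future_policy_loss=5)
print(now > later)  # True
```

Crank the electoral cost up, or the probability of future retaliation down, and the ranking flips: that is exactly why the two questions, and not the scorched earth argument, carry the weight here.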

The first question is more pressing than the second, if only because of “bird in the hand” reasoning.  I don’t know the answer, and it’s not clear to me that anyone does, because I don’t think voters care, per se, about this procedural move: they want the Senate to “play by the rules” and “get things done” (i.e., the “have your cake and eat it too” syndrome).  What is clear to me, however, is that Reid’s actions will frame the issue and at least partially determine the popularity of “going nuclear.”  I return to this below to conclude the post but, in a nutshell, I think this dimension is what will ultimately prevent the Democrats from going nuclear and, to be clear, I think part of that is Reid’s fault (but maybe by design).

The second question—will going nuclear be popular in the future—is truly secondary for now, but this will potentially be less true in 2016, should the Democrats hold on to the Senate majority in 2014.  This is because President Obama is a Democrat, and the Republicans’ promised uses of a “51-vote Senate” (e.g., read to the end of this) include using “their majority control to jam through a repeal of the Affordable Care Act, a repeal of the Wall Street Reform Act and other GOP priorities.”  Indeed, Sen. Lamar Alexander (R, TN) “said the GOP conference could pass with a simple majority vote legislation to weaken unions, authorization to complete the Keystone XL oil sands pipeline and other items.”

Because neither party controls two-thirds of either chamber, I have a strong suspicion that neither “a repeal of the Affordable Care Act” nor “a repeal of the Wall Street Reform Act” will actually occur before January 2017: I just can’t see President Obama signing such bills (in fact, I kind of doubt that the GOP would pass such bills even if they had the numbers).

So, I think the primary question for Senate Democrats is how electorally costly going nuclear would be.  As I alluded to above, I think going nuclear would give the GOP a useful mobilizer for the 2014 midterm elections.  Right now, the GOP doesn’t have much of “an issue” to run on in my opinion (neither do the Democrats).  “Breaking the rules to ram through executive appointments” particularly against the backdrop of the AP wiretaps/PRISM/IRS/Benghazi scandals (not to mention the for-now-arcane recess appointments dustup), might have a lot of traction with swing voters.

Reid’s actions at this point are key.  For all of the parliamentary Dr. Strangeloves out there, this is how I would go nuclear if I were Reid.  (I’m not the first by any means to make this argument, but sometimes it’s good to repeat things that seem to make sense.)

Reid should force the GOP to take the floor.  He should essentially set aside the “tracking system” in which the Senate moves from item-to-item in a parallel fashion.  He has said the GOP is being obstructionist.  Instead of trying to invoke cloture on the nomination of (say) Gina McCarthy (currently awaiting confirmation as administrator of the EPA), Reid should bring McCarthy’s confirmation up for debate, and not let the Senate move on until a vote is taken.  MAKE THE GOP FILIBUSTER/OBSTRUCT IN PUBLIC.

In equilibrium, voters should not believe Reid’s claims that the GOP is obstructing business.  We as the electorate could, I suppose, crack open the Congressional Record and try to discern the counterfactuals but (1) that is actually pretty hard to do in a sensible way—if obstruction is unpopular (which it essentially must be if going nuclear will be electorally popular), then obstructionists have an incentive to obfuscate their obstructionism (say that 3 times fast), and (2) as Anthony Downs famously made clear, we not only aren’t going to, it isn’t even clear that it would be rational for us to take the time to do so.  No…if overcoming obstructionism is a big deal, Reid and the Democrats need to sit down and send the costly signal of its importance to the public by (ironically) forcing the GOP to obstruct in a visible fashion.  I don’t think Reid is going to do this, and maybe that’s the right decision, given the facts, but if he doesn’t do this, I don’t think he’ll go nuclear.  Senate rules are kind of like an A-Team episode: sure, there’s a lot of fights, but nobody ever dies.

With that, I continue a theme and leave you with this.

 

Remuneration Of The Nerds, Or “Putting the $$ in LaTeX”

I’ve been thinking a lot about signaling lately. The central idea of signaling is hidden (or asymmetric) information. A classic example of signaling is provided by education, or more specifically, “the degree.”

Suppose for the sake of argument that a degree is valuable in some intrinsic way: a college degree is arguably worth $1.3 million in additional lifetime earnings. (Let’s set aside for the moment the level of tuition, etc., that this estimate (if true) would justify in terms of direct costs of a college degree. I’ll come back to that below.)

Instead, let’s think about the basis of this (“market-based”) value.  A simple economic story is that the education & training acquired through obtaining the degree increase the marginal productivity of the individual by (say) $1.3M.  Well, I don’t even REMEMBER much of college (and probably thankfully…AMIRITE?), so this seems unlikely.

Another, more interesting (to me at least), explanation is that the value of the degree is through its signaling value.  There are a number of explanations consistent with this vague description, including

  1. College admissions officers are good (in admissions and the act of “allowing to graduate”) at selecting the “productive” from the “unproductive” future workers.  Maybe.  College admissions is hard, and I respect those who carry the load in this important endeavor.  But…
  2. Finishing college shows “stick-to-it-iveness” and thus filters the “hard workers” from the “lazy workers.”  Again, this is undoubtedly a little true.  But there are other ways to “work hard.”  So, finally…
  3. College does 1 & 2 and, to boot, adds a “selection cherry” on top.  In particular, the idea of the college major allows individuals to somewhat credibly demonstrate the type of work they find most appealing (controlling for market valuation, to which I return below).

Explanation 3, as you might expect, is the most interesting to me.  What am I thinking?  Well, back when I was a kid, going to law school was considered a hard, but not ULTIMATELY HARD way to score some serious dough in one’s first job.  Sure, it took some money, and some serious studying, but—HAVE YOU SEEN THOSE STARTING SALARIES?  HAVE YOU SEEN “THE FIRM”?  Oh, wait.  Wait…no, seriously…YOU CAN BE TOM CRUISE AND WIN IT ALL.

On the other hand, math (pure or applied) was considered a “very good, but…come on” kind of major.  In particular, a perception (not completely inaccurate) was that math was hard, but didn’t really “train/certify” you for any job other than, perhaps, being a math teacher.  But, this argument falls on its face after a bit of thought: you can be a math teacher without being a math major.  (I’m proof of this general concept: I am a “political science teacher” and was a math/econ double major.)  So, what gives?  Why would you be a math major?

Because you are intrinsically motivated (i.e., you “like math problems”).  In other words, you are signaling a true interest precisely because there are other, arguably easier, ways to get to the same moolah.  Which means that you’ve sent a (potentially costly) signal to potential employers that this is “what makes you tick.”  This is the information that your degree provides: you have shown them costly signals of what you actually like to “stay late” and work on.

The same argument goes for majors that are both demanding and relatively specialized (e.g., petroleum engineering, actuarial science): employers can be more certain upon seeing such a degree that you really want a career in this—you like it (where “it” is the substance/drudgery of what the job entails).
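This costly-signaling logic can be sketched as a tiny separating-equilibrium check. The payoffs below are hypothetical; the key assumption is single-crossing: the hard major is cheap for the intrinsically motivated type and expensive for everyone else.

```python
# A minimal costly-signaling sketch of the argument above, with
# made-up payoffs. Two worker types choose whether to take the
# "hard" major (math); it costs the type who likes math much less.
COST_OF_MATH = {"likes_math": 1.0, "indifferent": 5.0}
WAGE_PREMIUM_FOR_SIGNAL = 3.0  # what employers pay for the credible signal

def chooses_math(worker_type):
    """Single-crossing: take the hard major iff the premium beats the cost."""
    return WAGE_PREMIUM_FOR_SIGNAL > COST_OF_MATH[worker_type]

# Separation: only the intrinsically motivated type finds the signal
# worthwhile, so the choice of major credibly reveals "what makes you tick."
print(chooses_math("likes_math"))   # True
print(chooses_math("indifferent"))  # False
```

If the premium were raised above both costs (or the hard major made cheap for everyone), both types would take it and the degree would reveal nothing: the signal is informative precisely because it is differentially costly.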

In other words, to the degree (pun intended) that the value of college is just about selection (explanation 1), then admission to the “marginal school” (i.e., any school that admits every applicant) should be valueless (which I don’t think it is).  If the value of college were just about “showing you can finish something” (explanation 2), then the value of college would be no different/less than completing four years of (say) military or missionary service.  (And, maybe it is no different, but many people follow such admirable service by pursuing a college degree.)

Accordingly, the fundamental signaling value of a college degree is arguably not in its possession, but in the information contained about how it was obtained.  In other words, “the major.”  Of course, there are other, but in my mind ancillary, determinants of the value of a college degree.  As my Dad told me when I was growing up (which is kind of meta),

It isn’t all about the destination—half the fun is in “getting there.”

If that wasn’t true in terms of how one’s actions are interpreted, then one’s actions are even more easily interpretable.  Stew on that for a second.

Finally, in terms of the “math of politics” of this reasoning, note that costly signals are everywhere, and they are important far beyond college: legislative committee assignments, the development of reputations by “policy entrepreneurs” (I’m looking at you, Ron Paul, Ted Kennedy, & John McCain), the development of expertise/autonomy in bureaucracies/central banks, the emergence of “neutral independence” in judiciaries, and the credibility of “dying on the hill for a cause” necessary for policy bargaining by “fringe” political groups (see: Green Party, Pro-Choice/Life groups, PETA, Tea Party, Muslim Brotherhood, ACLU).  There are many, many applications of the notion that value is assigned by selectors (voters, employers, the unmarried) in signals that more precisely reveal hidden information about the tastes/predilections/goals of those vying for selection into potentially long-term, repeated relationships.

With that, I leave you with this.