Princeton Election Consortium

A first draft of electoral history. Since 2004

An Antidote for Gobbledygook: Organizing The Judge’s Partisan-Gerrymandering Toolkit

April 7th, 2018, 3:04am by Sam Wang


The Supreme Court appears to be at loggerheads in its search for a single standard for partisan gerrymandering. Here, Sam Wang and Brian Remlinger collect the many statistical standards into a single toolkit. Basically, all the tests fit into two categories: inequality of opportunity and durable outcomes. Read our working draft, which we have uploaded as an SSRN preprint here.

Here’s the law side of our argument, in a nutshell:

We propose that mathematical tests fall into two categories: tests of unequal opportunity and tests of durable outcome. These tests draw upon ideas borrowed from racial discrimination law, while extending that doctrine in directions that are unique to the category of partisanship.

Opportunity is easily defined and corresponds to a core principle of democracy: it should be possible to vote out a candidate or incumbent. While it is true that voters have clustered into enclaves that sometimes make an incumbent or party safe, it is equally the case that redistricters can manipulate lines to amplify the effects of that natural clustering. Just as members of a racial group can have their representational rights impaired through gerrymandering, so it is with partisans.

Tests of unequal opportunity are easily conceptualized as an extension of racial discrimination law. Where partisans comprise a small fraction of the population, the appropriate procedure is to examine individual districts. Where partisans comprise close to half the voters of a state, a statewide evaluation is necessary.
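As a rough illustration of what a statewide evaluation can look like in practice, the sketch below computes a mean-median difference across a party's district vote shares. The choice of statistic and the numbers are our own assumptions for this example, not a prescription from the preprint; the point is only that such quantities are simple to compute from election returns.

```python
# Sketch: one kind of statewide statistic an expert might compute.
# The mean-median difference (an illustrative choice) compares a party's
# average district vote share to its median district vote share; a large
# positive gap suggests that party's voters are packed into a few
# lopsided districts while it falls just short elsewhere.

def mean_median_difference(vote_shares):
    """vote_shares: a party's two-party vote share in each district (0 to 1)."""
    shares = sorted(vote_shares)
    n = len(shares)
    mean = sum(shares) / n
    mid = n // 2
    median = shares[mid] if n % 2 else (shares[mid - 1] + shares[mid]) / 2
    return mean - median

# Hypothetical map: two overwhelming wins, narrow losses everywhere else.
example = [0.80, 0.78, 0.44, 0.46, 0.45, 0.47, 0.43]
print(f"mean-median difference: {mean_median_difference(example):+.3f}")
```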

However, party is a more mutable characteristic than race. Therefore one may ask whether a partisan advantage is durable. Our second standard, testing for inequality of outcome, addresses this question by probing whether a map's partisan advantage is robust to the changes that can plausibly occur within a redistricting cycle. This need not be gauged by waiting for multiple elections to pass (which would vitiate the remedy); instead, the partisan effects of a map can be assessed by examining likely outcomes under a variety of conditions. This is well within the reach of modern expert witnesses.
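One simple way to explore "a variety of conditions" is to perturb the observed results and re-count seats. The sketch below uses a uniform swing applied to every district, which is an assumption we make here for illustration; the actual toolkit may weigh conditions differently.

```python
# Sketch: probing durability by shifting every district's vote share by a
# uniform swing and re-counting seats. If the seat count barely moves across
# plausible swings, the map's partisan effect is durable within the cycle.

def seats_under_swing(vote_shares, swing):
    """Seats won by the party if every district's share shifts by `swing`."""
    return sum(1 for s in vote_shares if s + swing > 0.5)

# Hypothetical district vote shares for one party under the enacted map.
districts = [0.80, 0.78, 0.44, 0.46, 0.45, 0.47, 0.43]

# Sweep statewide swings from -5 to +5 points and report the seat outcomes.
for swing in [-0.05, -0.03, 0.0, 0.03, 0.05]:
    statewide = sum(districts) / len(districts) + swing
    seats = seats_under_swing(districts, swing)
    print(f"statewide share {statewide:.3f} -> {seats} of {len(districts)} seats")
```

In this made-up example the party holds a majority of the statewide vote yet wins only two of seven seats across nearly the whole range of swings, which is the kind of durable outcome the second category of tests is meant to detect.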

Tags: Redistricting

7 Comments so far ↓

  • LondonYoung

Two more comments – (1) is there any way to “test the tests” for how much is due to voter clustering vs. deliberate packing?
(2) The VRA tests were legislation and not constitutional law. Would it not be strange for the courts to take on rulemaking for partisanship that was left to the legislature for race?

    • Sam Wang

Generally I agree that one would not want to base partisan gerrymandering doctrine on statutory interpretation of the VRA. It was more our point that the procedures for examining single districts are well-established. One form of partisan gerrymandering, the type that occurred in Maryland, fits with that approach.

    • LondonYoung

      My point was that those single district standards are based (in large part) on the court’s interpretation of legislation, not of the constitution.

  • Leading Edge Boomer

    One stumbling block is that the Chief Justice of SCOTUS has an absolute phobia about the most elementary mathematics. In any case, SCOTUS can rule that Wisconsin and Maryland are illegally gerrymandered, but it could not prescribe a better solution without being accused of making law.

    I don’t think any totally algorithmic solution can be sold in the US. A NON-partisan, not BI-partisan, commission (Iowa, Arizona, California, a few others), aided by clearly understood mathematical tests, is a more comfortable fit in the country as we have it.

    • LondonYoung

So, a thought on math. In the Voting Rights Act of 1965, Congress decided that if there was some qualification to vote (like a literacy test or some such) and voting participation went below 50%, then it was safe to say that the test was to blame. That was 1965.
      By 1996 there were no tests at all and turnout had fallen to 49%. Oopsie.
      This is part of the reason why judges don’t want to build constitutional rights based on threshold numbers.

  • Eric

First, I think you’re missing the context in which Roberts made his comment. He’s not talking about himself or judges not understanding the math; he’s talking about the standard not being comprehensible to the average, informed citizen to whom this standard will be applied.

Second, the correct response to the court (and I think this was missed at the arguments) is that there are already many mathematical/statistical standards in use, and this newfound concern of the Chief Justice’s is surprising in light of that.

    • Sam Wang

      Narrowly construed, you are correct. But his apparent disdain for formulas was palpable. Later at Whitford oral arguments, which I attended in person, there was general merriment among both liberal and conservative justices, with math as the whipping boy. So broadly construed, I stand by my interpretation.
