**Monday November 28th, at 7:00pm.** At the Princeton Public Library, I’ll join a panel moderated by Stan Katz, and featuring Ruth Mandel and Charles Stile.

**Thursday December 1st, at 6:00pm.** In McCosh Hall room 50 on the Princeton University campus, I sit down with Jamelle Bouie, chief political correspondent of Slate magazine. Bouie has written remarkable pieces about the Trump campaign and its relationship to race and politics in America. Here’s one. I look forward to a great discussion.

If you have kids and can’t come out, you might find this relevant: Talking to Young Children about the Presidential Election.

Thanksgiving brings the nation a win for fair representation, in the form of a way to deal with partisan gerrymandering. A three-judge court ruled that the Wisconsin state legislative map is a partisan gerrymander: a map drawn to favor one major political party over the other (decision: Whitford Op. and Order, Dkt. 166, Nov. 21, 2016). The court applied a mathematical standard created by Nicholas Stephanopoulos and Eric McGhee, the “efficiency gap.” The case is now headed for consideration by the Supreme Court.

If this standard, or another that addresses the same need, is adopted widely, it would resolve a major gap in election law. This is an important case: In this year’s House elections, Democrats would have had to win the popular vote by at least 9 percentage points to take control. That is the largest partisan asymmetry on record. It would be reduced considerably if districting were done according to principles that treated both parties equally.

Let me outline the state of play and some potential weaknesses in the proposed standard. As an additional approach, I have developed two standards, based on longstanding statistical practice, which could help overcome skepticism by the Supreme Court. My standards can be calculated automatically at gerrymander.princeton.edu, and are described in detail in the *Stanford Law Review*.

The current state of play on the partisan-gerrymandering issue is as follows:

- Partisan gerrymanders are considered justiciable, which means that courts are empowered to strike them down. (Davis v. Bandemer, 1986)
- Supreme Court justices have not come to agreement on a manageable standard which could be applied in general.
- A majority of the current Supreme Court has expressed interest in the idea of partisan symmetry, loosely defined as the idea that if the parties switched statewide vote totals, they would also switch seat totals. (LULAC v. Perry, 2006)

Any solution must fit within a large body of existing law. Here are a few principles that have been considered but rejected:

- Odd shapes of districts are considered insufficient evidence. Indeed, sometimes odd shapes are needed to connect communities of interest, or to comply with the Voting Rights Act. Generally speaking, single-district gerrymanders are not justiciable, except on grounds of race.
- It is insufficient to point out that a minority of votes could elect a majority of representatives. Such an event can occur by chance.
- Sub-proportional representation is also out, e.g., 40% of votes yielding less than 40% of seats. Winner-take-all systems generally do not produce proportional outcomes.
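To make the second point concrete, here is a toy arithmetic sketch (hypothetical, equal-turnout districts) of how a minority of votes can elect a majority of representatives with no map-drawing mischief at all:

```python
# Hypothetical example: three equal-turnout districts.
# A party wins two districts narrowly and loses one badly.
shares = [0.51, 0.51, 0.10]  # the party's vote share in each district

seats = sum(s > 0.5 for s in shares)      # districts won
vote_share = sum(shares) / len(shares)    # statewide vote share

print(seats, round(vote_share, 3))  # → 2 0.373
```

Two of three seats on about 37% of the vote, purely from where the votes happened to fall.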

Justice Anthony Kennedy, who is necessary* for a five-vote majority for a possible gerrymandering standard, says that partisan gerrymanders are justiciable under the Fourteenth Amendment (equal protection) and the First Amendment (freedom of association). So the legal underpinnings are there for a standard to be adopted.

To me, “manageable” suggests that a judge should be able to apply the standard without too much help from expert witnesses. There is nothing wrong with expert witnesses. But this is a Constitutional question, and a judge might not want to outsource the critical thinking.

For example, one could take the approach of drawing thousands of possible maps, and make a statistical argument from the results. But to do this, experts have to start with some set of districting standards, which implicitly contain priorities that do not reflect the give-and-take of the legislative process or of requirements such as satisfying the Voting Rights Act or joining communities of interest. In short, redistricting is not a game of chance. A randomly-generated process only reveals the range of outcomes that are possible, not what is desirable. So this is shaky ground.

Now let’s turn to the standard used in the Wisconsin case. It revolves around the key principle that partisan gerrymandering must consider the statewide map as a whole. This is likely to be the basis for any successful standard.

In particular, here is the basic concept of the efficiency gap: Look at the statewide pattern of results. When one party gets just enough votes to win its races by tiny margins, it has used its votes efficiently. If a party’s wins are large, then votes have been wasted. If the two major parties differ in their total number of wasted votes, that is an efficiency gap.

Skipping over the details of Stephanopoulos and McGhee’s assumptions, they define the efficiency gap in a way that is equivalent to the following formula (see footnote 88 of the Wisconsin decision):

*efficiency gap = (S − ½) − 2 × (V − ½)*

where S is the party’s seat share and V is the party’s vote share. Any point on the blue diagonal of the following plot has an efficiency gap of zero percent:
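For readers who want to see the wasted-votes bookkeeping in action, here is a minimal sketch using hypothetical vote counts (not Wisconsin data); with equal-turnout districts, the result agrees with the simplified seats/votes formula:

```python
def efficiency_gap(district_votes):
    """Efficiency gap from a list of (dem_votes, rep_votes) per district.

    Wasted votes: all votes cast for the loser, plus the winner's votes
    beyond the 50% needed to win. A positive result means Democrats wasted
    more votes, i.e., the map favors Republicans.
    """
    wasted_dem = wasted_rep = total = 0
    for dem, rep in district_votes:
        n = dem + rep
        total += n
        if dem > rep:
            wasted_dem += dem - n / 2.0  # winner's surplus votes
            wasted_rep += rep            # all of the loser's votes
        else:
            wasted_rep += rep - n / 2.0
            wasted_dem += dem
    return (wasted_dem - wasted_rep) / total

# Hypothetical map: Democrats packed into one lopsided district.
districts = [(90, 10), (40, 60), (40, 60), (40, 60)]
print(efficiency_gap(districts))  # → 0.3
```

Here Democrats win 52.5% of the votes but only one of four seats, and the 30-point gap quantifies the asymmetry.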

I have plotted Wisconsin elections from 2010 to 2016. The vertical distance between each data point and the blue diagonal is the efficiency gap. The points are approximately lined up from left to right, which means that across a wide range of outcomes, Democrats are held to a similar number of seats, fewer than 40 out of 99 total. In this way, the Republican majority in the Wisconsin Assembly is protected from changes in the will of the voters.

The efficiency gap works because the diagonal line is close to where the relationship between seats and votes has been observed to fall historically, based on decades of elections in winner-take-all systems worldwide. (It is possible to derive the exact votes-to-seats relationship from basic mathematical principles. Today I will skip that.)

Despite its virtues, the efficiency gap has some weak points.

- It relies on “wasted votes,” a phrase that may rankle a judge. I can imagine Justice Alito or Roberts asking: how can a legitimately-cast vote be said to be “wasted”? There are other details of the mathematical argument that could be examined, though I do not think the judges will drill into them beyond what I have said above.
- A critical justice could call the blue diagonal a form of enforced proportionality – *treif*. It’s not proportionality, exactly – but it could be argued that the efficiency gap establishes a norm of what level of representation is appropriate. If the Supremes don’t like such a baldly stated standard, they might instead want to see a definition of asymmetry that does not explicitly recommend a specific number of seats.
- In some years the efficiency gap almost goes away. See the data point for 2014, during which Republicans won by fat margins in Wisconsin, and “wasted” as many votes as Democrats, giving a very small efficiency gap. So the Wisconsin gerrymander “wastes” more votes in some years than others. If the measure of gerrymandering is the efficiency gap, why not just wait until the next election, when it may very well shrink?


Older, simpler statistical tests may resolve these difficulties. Here are two tests for gerrymandering based on textbook principles established nearly 100 years ago.

**1. The mean-median difference.** As I wrote in the *New York Times*, a partisan advantage in a closely-divided state is revealed by the fact that the median (i.e. the middle value) is different from the mean (i.e. the average). When these two numbers are far apart, there is an advantage to whichever party is favored by the median. And the statistical properties of the mean-median difference were discovered decades ago.
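The calculation itself is a few lines of arithmetic. As a sketch, here are hypothetical district vote shares (not Wisconsin data) showing the packing signature:

```python
# Hypothetical Democratic vote shares across nine districts:
# two packed districts, seven narrow losses.
dem_share = [0.80, 0.78, 0.44, 0.46, 0.45, 0.47, 0.43, 0.48, 0.49]

mean = sum(dem_share) / len(dem_share)
median = sorted(dem_share)[len(dem_share) // 2]  # middle value (odd count)

# Mean above median: Democratic votes are packed into a few lopsided
# districts, so the median district is worse for them than the average.
print(round(mean - median, 3))  # → 0.063
```

In this toy map Democrats average 53% of the vote, yet the median district gives them only 47%: a majority of votes, a minority of seats.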

Wisconsin redistricting came under single-party control by Republicans for the post-2010 redistricting cycle. Previous rounds of redistricting were done by court order after failure of the two parties to come to agreement on a map. From 1984 to 2000, the mean-median difference was an average of 0.1% toward Republicans – basically zero. From 2002 to 2010, the mean-median difference averaged 3.5% toward Republicans. In the 2012/2014/2016 elections, the mean-median difference averaged 6.4% toward Republicans. This is a large difference, comparable to the most extreme Congressional gerrymanders in Pennsylvania and North Carolina.

(Figure 5B from my *Election Law Journal* article analyzing Wisconsin; asterisks indicate statistical significance.)

Note that the mean-median difference varies considerably less than the efficiency gap from year to year, and is a good measure of partisan asymmetry even in years like 2010 and 2014, when the efficiency gap was low. This is because any structural gap between the mean and the median is likely to persist, even if one party is lifted by a wave of popular support.

**2. Are individual district wins more lopsided for one party than the other?** The core strategy of partisan gerrymandering is to pack opponents into a few districts for lopsided wins, while spreading one’s own voters more thinly. We can just ask whether statistically, Democratic and Republican win margins are different. This can be done using the two-sample t-test, probably “the most widely used statistical test of all time.”

The p-values give the probability (one-tailed) that Republicans would have gained this advantage under chance conditions. The advantage arose suddenly in 2012, too fast to be explained by slow trends such as the accumulation of Democrats in high-density population centers.
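As a sketch of the test using only the standard library (scipy.stats.ttest_ind would give the p-value directly), with hypothetical winners' vote shares standing in for real returns:

```python
import math
from statistics import mean, variance

def two_sample_t(a, b):
    """Student's two-sample t statistic with pooled (equal) variance."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(pooled * (1 / na + 1 / nb))

# Hypothetical winners' vote shares: lopsided Democratic wins (packing)
# versus efficient, narrow Republican wins.
dem_wins = [0.81, 0.77, 0.74, 0.79]
rep_wins = [0.55, 0.57, 0.54, 0.56, 0.58]

t = two_sample_t(dem_wins, rep_wins)
print(round(t, 1))  # a large t corresponds to a tiny one-tailed p-value
```

When Democratic wins average nearly 78% and Republican wins cluster near 56%, the t statistic is huge and the lopsidedness is very unlikely to be chance.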

As a technical note, Wisconsin does present special problems for statistical analysis. In 2016, nearly half of Assembly races were uncontested. Something has to be done to estimate voter preference in such districts. These details are discussed in my *Stanford Law Review* article** and my *Election Law Journal* article; they usually do not have a major effect on the outcomes of the tests.

Overall, I am optimistic that the Supreme Court will at least give this issue a fair hearing. There are two similar cases brewing, one in Maryland, where Democrats perpetrated the gerrymander; and one in North Carolina, where Republicans are the culprits. Within the coming 12 months, we may know whether or not partisan gerrymandering will be allowed in post-2020 redistricting.

**I am assuming that whoever is appointed to the vacancy on the Supreme Court will vote as Scalia did, against the justiciability of partisan gerrymanders. In LULAC, Scalia opined that it was time to give up looking for a clear standard. Will the standards described here win out over the new justice’s likely political preference? I am skeptical, but then again this issue may seem like a technical one rather than a more emotional one such as voting rights. In my view, it is at least as consequential. I note that the Maryland case brings all motivations into alignment.*

***The SLR article also gives a third test, one that uses computer simulation to calculate how many seats were ill-gained. However, that is best applied to House redistricting schemes. It has one notable virtue: it can take into account the natural advantages that come from population clustering. If you want to try it out, it is available at gerrymander.princeton.edu.*

*I thank Stephen Wolf for reading and commenting on this post.*

Despite the importance of understanding this week’s cataclysmic events, I have been slow to write. There are other demands, especially my annual national scientific conference, which begins tomorrow.

The question of what went wrong in polls – and where I went additionally wrong – is an important one. I owe you a serious assessment, but it is not a topic to write about quickly.

This is a wrenching time in national politics. Most supporters of both Donald Trump and Hillary Clinton were surprised by the outcome. As Harry Enten at FiveThirtyEight points out, voters were more partisan than ever, with amazing party loyalty. Despite a few key upsets in close Rust Belt races, voting patterns were nearly identical to 2012. The state-by-state correlation between Romney-Obama and Trump-Clinton is +0.95 – right in line with post-Gingrich polarization. The parties are now fighting over mobilizing and turning out their own voter demographics.

In Politics & Polls #20, our first post-election recording, Julian Zelizer and I react to the results. Among a host of issues, we discuss why the polls might’ve been off, what a Trump presidency means for the nation, and possible implications for our democracy. Listen.

Going into today’s election, many races appeared to be very close: 12 state-level Presidential races were within five percentage points. But the polls were off, massively. And so we face the likelihood of an electoral win by Donald Trump. At the same time, Hillary Clinton appears likely to win the popular vote. The Upshot’s model currently projects a Clinton lead of more than 1 percentage point. If that lead lasts, it means that more American voters preferred her to Trump.

At the moment, the NYT is projecting Trump leads of less than 1 percentage point in Pennsylvania and Michigan. Even without these states, Trump has at least 268 electoral votes (depending on some districts in Maine and Nebraska). We will see in the morning how these last few states and districts will be resolved.

In addition to the enormous polling error, I underestimated the size of the correlated error (also known as the systematic error) by a factor of five. As I wrote before, that five-fold difference accounted for the difference between the 99% probability here and the lower probabilities at other sites. We all estimated a Clinton win as being probable, but I was the most extreme. It goes to show that even if the estimation problem is reduced to one parameter, it’s still essential to do a good job with that one parameter. Polls failed, and I amplified that failure.

This election is about to create shock waves that will make the last year of campaigning look mild. We are about to see both houses of Congress under Republican control, quite possibly with a President Donald Trump. This comes in the face of a reasonably growing economy and a popular Democratic President about to exit the White House. It is difficult to reconcile these different facts.

Thinkpieces that have been written in the last few weeks have to be re-examined in a new light. Ezra Klein at Vox has written about the weakness in U.S. democracy, in which a weak Republican Party could nominate Trump, and partisan polarization gave him a shot at the Presidency. This one-two punch appears to have landed, hard. I was correct in documenting Trump’s rise in the primaries, an easier task for polling analysis because there, his lead was considerable.

I have written about the role of partisan polarization in getting voters to choose up sides, to the exclusion of even considering a vote for the other side. The chickens have now come home to roost. Exit polls showed that most voters felt that Trump lacked the temperament to be President, and that Clinton was seen as more qualified. Yet Trump rallied enough support to overcome these factors. Presidential nominees’ approval ratings have trended lower and lower over the years, and Clinton was no exception to the pattern.

Now we see where that long trend has led. One consequence is that more voters refused to support either major candidate. Neither Trump nor Clinton is headed for a majority of voters in Pennsylvania or Michigan. In Pennsylvania, the NYT projects that over 3% of voters cast their ballots for Gary Johnson or Jill Stein. In Michigan, the minor-party total was over 4%. In both cases, these numbers are considerably greater than the Trump-Clinton margin.

The coming years will be disruptive ones, to say the least. Whether you are Democrat, Republican, or neither, it’s going to be a challenging time ahead. It’s Donald Trump’s Republican Party, and maybe his Presidency too. The nation belongs to all of us. Good night.

I agree w/@NateSilver538 that there was high uncertainty, much more than I assumed. Median polling error 4% Presidential, 6% Senate so far.

— Sam Wang (@SamWangPhD) November 9, 2016

The American people have rejected the person they thought was qualified in favor of the one they thought was not.

— Greg Dworkin (@DemFromCT) November 9, 2016

**11:12pm:** Using the projections of the NY Times, Donald Trump is outperforming his pre-election polling margins by a median of 4.0 +/- 2.6 percentage points (the 8 states in the Geek’s Guide). In Senate races, Republicans are outperforming by 6.0 +/- 3.7 percentage points. A five-percentage-point polling miss would be a tremendous error by modern polling standards. Undecided or minor-party voters coming home to Trump? Shy Trump voters? I don’t know.
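For transparency, the median-overperformance figure is computed like this; the state-level differences below are hypothetical placeholders, not the actual values for the eight Geek’s Guide states:

```python
from statistics import median

# Hypothetical (result minus final poll margin), in percentage points,
# for eight states; positive means Trump beat his polls.
misses = [4.5, 3.2, 6.1, 0.8, 4.0, 7.3, 2.1, 5.0]

print(median(misses))  # → 4.25
```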

GOP polls weren’t predicting this night either. Senate, House, Gov sources I talked to all expected Clinton would win per their polls

— Jessica Taylor (@JessicaTaylor) November 9, 2016

**10:38pm:** At the Senate level, the polling error is looking pretty substantial at the moment, maybe 5 points toward Republicans. A polling error of this size would be the largest on record, at least in a Presidential year. I was wrong to downplay this possibility.

We still have to see what will happen at the top of the ticket. But obviously, with a Meta-Margin of only 2.2%, an equally large across-the-board polling error at the Presidential level would suggest a Trump win of the Electoral College.

Trump owns the party whether he wins or loses

— Gideon Resnick (@GideonResnick) November 9, 2016

**9:31pm:** NYT presidential tracker showing things very close. Looks like a late night. And perhaps bug cookery for me.

**9:09pm:** The NYTimes Senate projected margins are running several percentage points more Republican than pre-election polls.

**9:04pm:** Here are some negative signs for Democrats: Trump’s ahead in Florida, overperforming his polls by several percentage points. Also, NH and PA Senate races leaning R at the NYTimes tracker.

I note that the generic House ballot swung toward Republicans by several points in the closing weeks, to D+1%. That is another piece of data suggesting that the GOP might overperform their polls. Definitely some mixed signals tonight.

**8:43pm:** Oh, this is awesome: the NY Times projection tool. So much better than TV. For now, it looks like control may come down to the New Hampshire and Pennsylvania Senate races. If Republicans take one of those, then they are likely to retain control.

WOLF: John, let’s take a look at Florida again!

JOHN: No.

WOLF: I’m sorry? Florida is very important–

JOHN: Fuck off.

— Daniel W. Drezner (@dandrezner) November 9, 2016

**8:31pm:** Todd Young (R) wins IN-Sen. Not unexpected, but that’s one close race for the GOP.

**8:25pm:** According to you, television watching options:

- Red Skelton special is coming up
- Showtime: Stephen Colbert election night special
- Pop: the movie Dave
- El Rey: Twilight Zone marathon

**8:18pm:** Don’t ask me about any race closer than two percentage points. All comments on this topic will be deleted until 10:00pm!

**8:13pm:** I’m unaware of any advance indications of Trump overperformance. On the contrary, we have: (a) early voting neutral or more Democratic than 2012; (b) massive Latino voting; and (c) high turnout. If I had to guess, I’d say any error will favor Clinton.

**8:04pm:** Do yourselves a favor and turn off the TV coverage – it is basically worse than pre-election polling until 10:00pm. My friends here want to watch it though. Any suggestions of other TV stuff that is fun tonight?

**8:00pm:** Here’s something cool: an electoral-vote tracker from reader Ben Reich. Just fill in the cells in row 4 with “C” or “T”. It automatically calculates the electoral totals, and updates the paths to victory for Clinton or Trump. No map update, sorry! For that, use 270towin.com.

**6:30pm:** Here is Slate’s VoteCastr tool for forecasting state totals based on partial information. I’m a bit suspicious, but it’s certainly not worse than live news, which is basically worthless for the next 2-3 hours. Or follow real counts at the New York Times.

In other news, Buzzfeed has announced criteria for scoring the forecasters. They are using Brier scores and root-mean-squared errors.
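For readers keeping score at home, a Brier score is just the mean squared error of probability forecasts. Here is a minimal sketch with made-up numbers (not anyone's actual forecasts):

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities and 0/1 outcomes.
    Lower is better; an always-50% forecaster scores 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical: three race calls, outcome coded 1 if the favored side won.
probs = [0.99, 0.70, 0.60]
won   = [1,    0,    1]
print(round(brier_score(probs, won), 4))  # → 0.2167
```

Note how a single confident miss (the 0.70 call that lost) dominates the score, which is exactly what makes the metric unforgiving of overconfidence.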

Here are the final snapshots. Four Senate races are within one percentage point: Indiana, Missouri, New Hampshire, and North Carolina. Partisans there may want to lawyer up for possible recount battles.

Soon I’ll put out a brief Geek’s Guide to the Election. Also, live blogging starting around 8:00 pm.


**President: Hillary Clinton (D).**

The Presidential estimates are based on the current snapshot in the right sidebar, except for the most-probable-single-outcome map, where variance minimization was done to give a more stable snapshot for North Carolina: Clinton +1.0 ± 1.0% (N=8 polls).

**Most probable single outcome (shown on map below): Clinton 323 EV, Trump 215 EV.** This is also the mode of the NC-adjusted histogram.

Median: Clinton 307 EV, Trump 231 EV. Meta-Margin: 2.2%. One-sigma range: Clinton 281-326 EV. The win probability is 93% using the revised assumption of polling error, +/- 1.1%.

(Why doesn’t this probability necessarily match the probability in the snapshot histogram?)
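As a sketch of how a Meta-Margin and an error scale become a win probability: divide the Meta-Margin by the assumed one-sigma polling error and apply a cumulative distribution function. The long-tailed t-distribution with 3 degrees of freedom below is my illustrative assumption for the error model, not necessarily the exact one used here:

```python
import math

def t3_cdf(x):
    """Closed-form CDF of Student's t-distribution with 3 degrees of freedom."""
    u = x / math.sqrt(3.0)
    return 0.5 + (u / (1.0 + u * u) + math.atan(u)) / math.pi

meta_margin = 2.2    # percentage points, from the snapshot
assumed_error = 1.1  # assumed one-sigma polling error, percentage points

print(round(t3_cdf(meta_margin / assumed_error), 2))  # → 0.93
```

The long tails matter: a normal CDF at two sigma would give about 98%, so the fatter-tailed error model is what keeps the probability down near 93%.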

National popular vote: Clinton +4.0 ± 0.6%.

**Senate**

Where possible, variance minimization was used to identify a time window that gave lower variance than the standard time window.

Mode: 51 Democratic/Independent seats, 49 Republican seats; the most likely single combination is shown in the table below.

Median: 50 Democratic/Independent seats, 50 Republican seats. (average = 50.4 ± 1.1; the 1-sigma range rounds to 49 to 51 seats)


**House**

Generic Congressional ballot: **Democratic +1%**, about the same as 2012.

Cook Political Report-based expectation: **239 R, 196 D**, an 8-seat gain for Democrats.