— Robert D Sullivan (@RobertDSullivan) September 17, 2014
Mr. Sullivan, this post is for you.
Even though Nate Silver has misinterpreted what PEC did in 2010 as representing how we operate today, I see this as an opportunity to explain how we make predictions in 2014. I will then come back to a point that many readers will care about more: the assumptions put into this kind of prediction can add hidden biases, whether intentional or not.
The Claim About PEC
First, to restate the claim: here at PEC, we are said to be overconfident in our probabilities. The example given was the 2010 Nevada Senate race between Senator Harry Reid (D) and Sharron Angle (R). Everybody, including FiveThirtyEight, was confident – and wrong. Angle led Reid in the last eight surveys before the election. And yes, we were part of that crowd. It was PEC’s only wrong Senate call in 2010 or 2012 (unlike FiveThirtyEight, which in addition missed two Senate races in 2012, Montana and North Dakota). [update: I made two wrong calls in 2010, Nevada and Colorado. These wrong calls were also made by FiveThirtyEight.]
The statistical error I made in 2010 – and have since fixed – is that I treated snapshots and predictions as interchangeable. With the polling margins where they were, it was basically certain that Angle led Reid in the polled demographic, i.e. the population of people who could be captured in surveys. But surveys are not elections. In retrospect, there were good reasons why the polls were off: a heavy cell-phone population, lots of people moving in and out of the state, and Hispanic voters who are hard to reach. Those reasons only became apparent afterward.
This kind of uncertainty can be captured in two steps:
- Acknowledge that there can be some small discrepancy between final polls and Election Day results. This is what scientists call a systematic error in the polling data.
- Use a probability distribution that is not bell-shaped, but has “long tails.” To capture the possibility of freak events, I now use t-distributions. They are much “tail-ier” than a bell-shaped curve, and capture that “hey, crazy things can happen once in a blue moon” vibe of real elections. I have come to love t-distributions.
In the 2010 case, Gaussian statistics gave an Angle win probability of >99%, which was OK as a snapshot of the polled demographic, but not as a prediction. However, using the two-step approach above, if we use a typical systematic error between Senate poll medians and election outcomes of 1.0%, and a t-distribution with 2 degrees of freedom, the probability would become 91%. This is more plausible.
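The two-step calculation can be sketched in a few lines of Python. The degrees of freedom (2) and the 1.0% systematic error come from the text above; the 3-point margin and 1.1-point poll-based sigma are hypothetical numbers I have chosen so the sketch roughly reproduces the >99% and 91% figures, not the actual 2010 Nevada inputs.

```python
import math

def t2_cdf(x):
    """CDF of Student's t-distribution with 2 degrees of freedom.
    For df = 2 there is a closed form: 1/2 + x / (2*sqrt(2 + x^2))."""
    return 0.5 + x / (2.0 * math.sqrt(2.0 + x * x))

def gaussian_cdf(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Hypothetical inputs (chosen for illustration only):
margin = 3.0            # leader's poll margin, percentage points
sigma_poll = 1.1        # sampling-based uncertainty in the snapshot
sigma_systematic = 1.0  # typical poll-median-vs-outcome discrepancy

# Snapshot-style probability: Gaussian, polls-only -> above 99%
p_snapshot = gaussian_cdf(margin / sigma_poll)

# Prediction-style probability: add the systematic error in
# quadrature, then use the long-tailed t-distribution -> about 91%
sigma_total = math.sqrt(sigma_poll**2 + sigma_systematic**2)
p_prediction = t2_cdf(margin / sigma_total)

print(round(p_snapshot, 3), round(p_prediction, 3))
```

The point of the exercise: the same polling margin yields a near-certain "snapshot" probability but a much more modest prediction once long tails and systematic error are allowed for.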
How PEC Turns Snapshots Into Probabilities
Now, let me explain how we apply this to making predictions in 2014. Here at PEC, we delineate three issues:
- How to take an accurate snapshot. First and foremost, we need a way to see where a single race (or national campaign) is, right now. Although there is no election to validate the snapshot’s correctness, it is possible to take a snapshot of the polled demographic. We take a new snapshot every day.
- How to estimate the degree of movement between a snapshot today, and a snapshot on Election Eve. Now, how much and how quickly does that snapshot vary over time? Let’s call the amount of that movement “sigma_movement.”
- How to estimate the final accuracy of the Election Eve snapshot. This is the final validation: in the home stretch, how far is the last snapshot from the actual election outcome? Let’s call this difference “sigma_systematic.”
Using the terminology above, the outcome of the election is, by definition,
OUTCOME = SNAPSHOT + sigma_movement + sigma_systematic.
And if we can understand the sigmas, then we can make a prediction.
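One way to see how the sigmas turn a snapshot into a prediction is a small Monte Carlo sketch. This is my illustration, not PEC's actual code: movement is modeled as Gaussian drift, the systematic poll error is drawn from a long-tailed t-distribution with 2 degrees of freedom, and all the parameter values are made up for the example.

```python
import math
import random

def simulate_win_prob(snapshot, sigma_movement, sigma_systematic,
                      n=100_000, seed=2014):
    """Estimate a win probability by simulating
    OUTCOME = SNAPSHOT + movement + systematic error many times.
    All parameter values here are illustrative."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n):
        # Drift between now and Election Eve: Gaussian noise.
        movement = rng.gauss(0.0, sigma_movement)
        # Systematic poll error: a t(2) draw, generated as a standard
        # normal divided by the square root of an Exponential(1) variate.
        t2 = rng.gauss(0.0, 1.0) / math.sqrt(-math.log(1.0 - rng.random()))
        outcome = snapshot + movement + sigma_systematic * t2
        if outcome > 0:
            wins += 1
    return wins / n

# Example: a 3-point snapshot lead, 1 point of expected movement,
# and 1 point of systematic error (all hypothetical).
p = simulate_win_prob(3.0, 1.0, 1.0)
print(round(p, 3))
```

Even with a comfortable 3-point lead, the long-tailed systematic term keeps the simulated win probability well short of certainty, which is exactly the lesson of the Nevada example.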
Silver has reasonably called out my 2010 writings, in which I mistakenly assumed that sigma_systematic was close to zero, i.e. much less than one percent. My current approach is to estimate how sigma_movement varies and how big sigma_systematic is. Those estimates can then be used to make a November prediction.
To turn this all back to practicalities: PEC’s current approach is to suppose that the combination of sigma_movement and sigma_systematic can be learned from polling ups and downs in 2014, and analysis of past Election Eve poll snapshots. FiveThirtyEight’s approach is to use fundamentals to generate expectations for where 2014 “ought” to be. Implicitly, their assumptions for this year make the sum of these two quantities tilt slightly toward Republicans. They are probably not being purposely partisan – they just made assumptions that are a bit more biased than usual to favor one party.
Now, do the assumptions in our prediction add a bias? I think not: our core assumption is “the future will be like the recent past.” Of course, there could be something else. Commenters in yesterday’s thread started drilling into our methods and code in a constructive manner. That is a discussion worth having.
Finally, here is a great interactive: to see the effects of adding fundamentals to a model, The Upshot at the New York Times has provided a useful Make-Your-Own-Prediction online tool. Click “Polls Only” and see how their prediction changes. It is very instructive.