# Princeton Election Consortium

### A first draft of electoral history. Since 2004

Meta-Margins for control: House D+1.0% Senate R+4.2%

## Why is the PEC polls-only forecast so stable?

#### August 3rd, 2016, 4:30pm by Sam Wang

Brad DeLong explains some mechanics of the ESPN/FiveThirtyEight polls-only forecast:

• Takes recent past polls to estimate a current state of the race…
• Estimates an uncertainty of our knowledge of the current state…
• Projects that that current state of the race will drift away from its current state in some Brownian-motion like process…
• Calculates the chance that that process–starting from the estimated present–would produce 270 electoral votes for one candidate or the other.

This is also a near-perfect description of the “random-drift” calculation in the second line of the banner above. My main quibble here is that he says it’s not a forecast. Actually, it is. It assumes random drift from now to November 8th, Election Day. That is a forecast!
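As a rough illustration of what such a random-drift calculation does (with made-up numbers; this is not PEC's or FiveThirtyEight's actual code), it centers a widening Gaussian on today's margin and asks how much probability mass stays on one side of zero by Election Day:

```python
import math
from statistics import NormalDist

# Illustrative parameters only -- not actual model values.
m_now = 3.0          # today's two-candidate margin (percentage points)
drift_per_day = 0.4  # assumed drift scale (points per sqrt(day))
days_left = 97       # days until November 8

# Under pure random drift, uncertainty grows as sqrt(time):
sigma = drift_per_day * math.sqrt(days_left)

# Chance the margin is still on the same side of zero on Election Day:
win_prob = 1 - NormalDist(mu=m_now, sigma=sigma).cdf(0)
print(round(win_prob, 3))
```

The sqrt-of-time widening is exactly the forward projection that makes it a forecast, not just a snapshot.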

The issue is that it is a forecast based on current polls only. In my view, such a forecast has the conceptual flaw of overweighting the freshest data. It amounts to reporting a snapshot of today’s conditions, blurred out a bit. To make an analogy, this is like forecasting the temperature a week from now using a single temperature reading. You should not predict weather using a single reading taken on a hot day or in the middle of the night. Jon Stewart had a routine like this once to make fun of deniers who say that because it is snowing today, climate change must be a hoax. (at night: “The sun will never rise again!”)

People who get into a tizzy over post-convention bounces are a little like that. We are now in the middle of a post-DNC convention bounce for Hillary Clinton. What if her numbers come back down? In this sense, DeLong is correct – extrapolating from current conditions is not really a forecast. Though in that case, the “Brownian motion” (note: polling changes do not actually act like Brownian motion, so it would be better to just say random drift) plays no meaningful role, and should be dropped.

To generate a long-term prediction (their polls-plus forecast), FiveThirtyEight establishes a prior based on other factors such as the economy. (No, dear reader, I do not want you to tell me what those factors are!) This method can work – Drew Linzer at Votamatic and Benjamin Lauderdale have used it to make a long-range forecast, and have analyzed their model rigorously.

In my view, FiveThirtyEight’s polls-only approach does not take full advantage of this year’s polling data. It is possible to generate a long-term prediction using polls only. Here at PEC, that’s what we do to create a prior expectation for where polls will drift to. Think of it as “polls-plus-more-polls.” Here is how we do it.

As I explained in May, we have lots of data on how far polls can move during an election year. Here are time series from 16 Presidential campaigns. The red traces show the ±1 standard deviation interval for the Democratic-minus-Republican margin, relative to the final outcome. Campaigns fall between the red traces about two-thirds (68%) of the time.

Given this behavior, the optimal estimator for the final outcome is not a snapshot of conditions today. Instead, it is a weighted average of snapshots, where we weight each margin by the inverse of the square of the standard deviation (the inverse variance) to get “m bar”:

In this formula, m is the two-candidate margin (for example, the national polling margin; another example is the PEC Meta-Margin), Greek sigma is the value of the red trace at the corresponding date, and the summation is done over all dates d for which we have data. The result, m-bar, is an estimate of where voting on Election Day will end up.
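The inverse-variance weighted average described above is m-bar = (Σ_d m_d/σ_d²) / (Σ_d 1/σ_d²). Here is a minimal numeric sketch of that computation, with invented daily margins and standard deviations standing in for the real data:

```python
# Inverse-variance weighted average ("m bar") -- a sketch with made-up
# daily margins and standard deviations, not PEC's actual data.
margins = [2.5, 3.1, 4.0, 3.6, 4.7]  # m_d: daily D-minus-R margins (pct pts)
sigmas  = [5.0, 4.5, 4.0, 3.5, 3.0]  # sigma_d: red-trace SD at each date

weights = [1.0 / s**2 for s in sigmas]
m_bar = sum(w * m for w, m in zip(weights, margins)) / sum(weights)
print(round(m_bar, 2))
```

Note how the later (smaller-sigma) margins pull m-bar toward themselves: dates where history says polls are more reliable count for more.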

One remaining challenge is to calculate the uncertainty on m-bar, which would allow converting it to a probability. That is a harder problem because on any given day, m is dependent on what it does on previous and later dates. It is a pain.

To escape this problem, I did something else: I used m-bar to establish a prior. This achieves the same effect as the “-plus” in “polls-plus” – but it uses older polls instead of econometric factors. I use m-bar for all of 2016 to estimate where the race is centered. I then estimate the range +/-S around m-bar, over which the final result may fall.

In the final step, PEC uses that range as a Bayesian prior. We take the random-drift calculation (see our banner) and combine it with the prior to get an estimate of where the election is likely to end up. That generates the “Bayesian win probability” in the banner above.
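If the prior and the random-drift estimate are both treated as Gaussians, combining them is a standard precision-weighted Bayesian update. The numbers below are placeholders, not PEC's actual inputs:

```python
from statistics import NormalDist

# Textbook product-of-Gaussians update -- illustrative numbers only.
prior_mean, prior_sd = 4.0, 6.0  # m-bar and S (poll-based prior)
drift_mean, drift_sd = 3.0, 4.0  # today's snapshot, widened by random drift

# Precision-weighted combination of the two estimates:
w_prior, w_drift = 1.0 / prior_sd**2, 1.0 / drift_sd**2
post_mean = (w_prior * prior_mean + w_drift * drift_mean) / (w_prior + w_drift)
post_sd = (w_prior + w_drift) ** -0.5

win_prob = 1 - NormalDist(post_mean, post_sd).cdf(0)
print(round(post_mean, 2), round(win_prob, 3))
```

Because the posterior variance is smaller than either input variance, the combined estimate moves less from day to day than the snapshot alone would.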

Because it is based on state polls and because the prior holds things in place, the resulting win probability does not move very much:

Now, this leaves the problem of how to estimate S. S is closely related to the red trace for the standard deviation in the first graph. Before 2004, S was large – up to 6 percentage points. Since 2004, S has been smaller, less than 3 percentage points. I believe this to be a symptom of polarization in politics, in which people choose sides.

So…which S do we choose? If 2016 is like 2004-2012, then S is 2-3 percentage points. If all bets are off this year and one party has broken the partisan logjam, then S could be larger, 6-7 percentage points, a value that gives Trump a fighting chance – but also opens the possibility of a massive Clinton victory. What will 2016 be like?
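To see how much the choice of S matters, compare Trump's chances under the two scenarios, treating the outcome as a single Gaussian centered on a Clinton +4.7 national margin (a deliberate oversimplification of the full state-poll calculation):

```python
from statistics import NormalDist

margin = 4.7  # current Clinton-minus-Trump national margin (pct points)
p_trump_polarized = NormalDist(margin, 3.0).cdf(0)    # S = 3: 2004-2012-like
p_trump_depolarized = NormalDist(margin, 6.0).cdf(0)  # S = 6: logjam broken
print(round(p_trump_polarized, 2), round(p_trump_depolarized, 2))
```

Doubling S takes Trump from a long shot to a fighting chance, with no change at all in the polls themselves.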

In 2012, I estimated S as the standard deviation of the Meta-Margin. This year, I am currently using national polls (current average, Clinton +4.7%) and a value for S of 6 percentage points (a “de-polarized scenario”). Sometime in September, I will transition over to using the average and standard deviation of this year’s Meta-Margin. As of August, that Meta-Margin is looking pretty stable (see the graph below). If it holds up, it will make the forecast considerably more certain.
In addition to the prior, the PEC November forecast is stabilized by the fact that it only uses state polls. That means that any change is integrated into the aggregate slowly, over a period of several weeks.

Broadly, I think it is a mistake to want a November prediction and a sharp snapshot of current conditions in the same calculation. To my taste, the FiveThirtyEight calculation would be improved by removing the random-drift step to give a more sensitive snapshot of today’s conditions, the same way that the HuffPollster national Clinton-v.-Trump average does. Then use a polls-plus-fundamentals (or in our case, polls-plus-more-polls) approach to give a true prediction.

One final thought, about econometrically-based priors (Ray Fair, Drew Linzer; but also Norpoth and others). In such a weird year, we may be past the point where an econometrically-based prior is useful. Linzer and Lauderdale have analyzed their own econometric approach, and found that the uncertainty on such a model was such that it could be useful for making predictions under normal, non-polarized conditions (high S). But we live in Polarized America (low S) and Trump America (highly unusual candidate). An econometric prior based on “bread and peace” does not take into account Hillary Clinton and Donald Trump’s personal negatives and positives.

In summary, I have doubts about whether anyone should be using an econometric model for prediction. However, an econometric prior can be useful as a research tool to calculate how the incumbent and challenging parties ought to be doing under normally extrapolated conditions – and give us a way of objectively evaluating whether Clinton or Trump was a particularly weak candidate this year.

Tags: 2016 Election · President

### 47 Comments so far ↓

• MarkS

I’m trying to understand why the orange band on the meta-margin is asymmetric. From what you’ve written, it seems like it should be symmetric around the current value. What am I missing?

• (corrected) If you look closely, the band is asymmetric: approximately 3.5% downward and 4.5% upward. This asymmetry comes from the Bayesian prior.

• Commentor

I posted a comment on 538 making the same suggestion, that they were simply taking a snapshot rather than a forecast, but I got the impression that unlike this site, such comments are just screaming into the abyss.

This post really helps clarify things. Still have one confusion, though. Isn’t the FiveThirtyEight approach (polls-plus) basically just choosing a different Bayesian prior? Their “fundamentals” say the race “should” be a tossup, while your poll-based prior says Clinton has been the stronger candidate for most of the race, so we should assume the race will stick close to that. What makes your prior better than theirs?

Or maybe to put the question differently, it seems like the PEC forecast is so stable because it’s based on a weighted average and the Bayesian prior that the race is stable. But how would it handle the race suddenly becoming unstable? Suppose, in the last few days of the campaign, there was a wild shift of 15% from Clinton to Trump. Would the meta-margin and prediction capture this, or would they be handicapped by the “stability assumption” programmed into the formula all along?

• I think a well-reasoned prior is okay. I do not want to get into their prior, except to point out that in such a weird year, we may be past the point where an econometrically-based prior is useful. For example, what prior would take into account Hillary Clinton and Donald Trump’s personal negatives and positives? It seems to me that model-based priors at this stage are risky.

If your description of their prior is accurate*, I do not think that is good because I do not know of a reason why the race should be a toss-up. Better would be to use a bread-and-peace-like model (pioneered by Ray Fair) to calculate how the incumbent party should do. I believe Linzer and Lauderdale take this general approach.

When the election gets close, the amount of random drift that can occur by Election Day gets smaller and smaller. As that happens, the prior (which is set to be pretty broad) has a smaller effect. If drift << prior_width, then the prior basically has no effect.

*I. Do. Not. Want. To. Talk. About. Their. Prior.

• That sounds right to me. In effect, Sam has a strong prior that the race will be stable. For stable races, however, a variety of different aggregation methods give very similar results. What Nate has tried to do, admittedly in a somewhat klugey way, is to make a reasonably stable model that still is responsive to somewhat unpredictable events (Comey, botched RNC, Khan,..). I think his Polls Plus does a good job of that. If nothing much happens between now and November, it’ll be hard to tell which approach is better. If there are any major events (Trump shooting someone on 5th Ave,…) I think we’ll see why Nate chose his approach.

• (updated)

I think you have it backwards. The PEC prior isn’t that strong: if you look at historical trends, +/-6% is pretty wussy. That’s the only way to get Trump’s November win probability above 10%. Re-read my paragraph about breaking the partisan logjam, and think about it. If I had set S any lower, the Clinton win probability would have started out above 90%.

The FiveThirtyEight prior might be a little broader than mine, but not by a lot. If it’s a lot broader, then they may as well not have one, except to use the word “Bayesian,” which is good for impressing some people. It’s probably a small difference, and I do not think we are going to learn anything about anyone’s judgment.

Their prior can also be viewed from a business perspective. It is not in their interest to set it narrow, because that would alienate all their readers who are Trump supporters. They have to bend over backwards to give Trump a chance, especially after the egg that they laid in Fall 2015. This is why I harp on the estimate of S – it contains all of these issues.

Final note: read Linzer and Lauderdale for their calculation of the sigma on an econometrically-based prior. Compare with my first graph of polling SD. If I recall the values correctly, the econometric prior is not that useful at this point.

• Anonymous

Well, if 538 manipulated their model to appease Trump-supporting readers, then, judging by the comments on their site, they’ve certainly failed.

• Jeremiah

If Trump supporters are commenting at all, at 538’s site, then the strategy has not failed.

• 538 Refugee

I think the term Brownian is the key to the author’s objection to the term forecast. He is using that as a ‘decoupler’. As you point out, things like political polarization will put some limits on random drift, though Trump seems to be ‘decoupling’ quite a few prominent Republicans. ;) I’ve had enough math to understand statistics if I get around to reading one of the many tutorials. If anyone wants to suggest one they know is good, I’d appreciate a URL. I got bogged down in trying to decide which one might be worthwhile.

• Chatham

I would also like to see the Bayesian band. The past three elections have created a huge increase in interest among my peers regarding Bayesian approaches, and yours is the best I’ve found for lifting the veil of statistical ignorance.

• Michael Slavitch

Isn’t Random Drift really Poor Man’s Bayesian?

• If we really must use Bayesian lingo, then I will take your word for that!

• Hi Sam, thanks for this explanation. How does the statistical error on the individual two-candidate margins derived from the pollster MoE (the uncertainties on the individual m_d’s in eq 1) enter? Perhaps it’s a negligible effect?

• Sampling errors such as MoE are probably negligible; systematic errors are more important. For example, the Erikson/Wlezien data (top graph of this post) is based on national data, which has a several-point systematic error relative to state poll-based aggregation. This systematic error might affect how that red curve looks at times near Election Day. The actual assumption for S, which is an estimate of state-poll error, would therefore have to be smaller.

• FearItself

This is off topic, Sam, but FYI:
I follow this blog on my RSS, and several times over the last few months my feed has shown what looked like draft versions of posts that never appeared. (The most recent one was yesterday: the first 300 words or so of a post titled “Why.”)
When I click the link to take me from RSS (I use Feedly) to the original post, it sends me to a 404 page instead.
I just thought you’d like to know that sometimes those draft posts are going on your permanent record via RSS.

• Michael Fagan

I noticed this as well (via Feedly).

• JimM

Usually, weighted averages use the inverse of the variance as the weight. If that is the case here, the formula should use the square of the standard deviation, rather than just the standard deviation.
Also, a random drift/brownian motion assumption should result in an uncertainty increasing as the square root of time rather than linearly (or a linear increase in the variance).

• Agree on the first point. Thank you for the correction.

I actually did not state the form of forward drift over time. Empirically, the Meta-Margin calculated from the state-poll aggregate acts like sqrt(t) for about 3 weeks and then saturates. I documented this in 2012. As another example, see the form of the SD in the first graph – it also doesn’t look like sqrt(days_to_election).
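In code, that saturating behavior looks something like the following (the scale and cutoff are illustrative stand-ins, not fitted values):

```python
import math

# Empirical drift model: SD grows like sqrt(t) for about three weeks,
# then saturates. Scale and saturation point are illustrative only.
def drift_sd(days_ahead, scale=0.4, saturation_days=21):
    t = min(days_ahead, saturation_days)
    return scale * math.sqrt(t)

# SD at 1 week, 3 weeks, and 3 months out -- the last two are equal
# because growth stops at the saturation point.
print(round(drift_sd(7), 2), round(drift_sd(21), 2), round(drift_sd(90), 2))
```

Under pure Brownian motion the last value would keep growing as sqrt(t); the saturation is what distinguishes real polling movement from that idealization.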

With an influx of state polls, I imagine we should see some movement in the Meta today. NH, PA, and NC (though that’s Civitas, it’s still a poll), all in the last 24 hours.

• Reply to Sam’s reply- I see your point about your prior not being all that strong. Neither is 538’s. This year, they do happen to be slightly shifted, since his came out nearly neutral and yours is somewhat D. It cannot both be true that your prior is very broad and that you’re avoiding giving a snapshot, since these essentially mean the same thing. The question of how much to discount old polls is partly empirical, based on a small sample of recent elections, and partly based on the sort of subjective judgements you mention about polarization, etc.

There’s another difference between your “Bayesian” prediction method and his. By using national polls and by correcting for house effects of individual pollsters, he’s able to capture fairly rapid changes, if they occur, without introducing too much extra noise. By using rather sparse state polls and rounding to the nearest 5%, you tend to hide any actual shifts that occur. To me this seems like a cruder filtering technique.

• I enjoy the structural comparison between the two models, but as far as past performance, how do they stack up? I am surprised that both PEC and 538 avoid quantitative comparisons. Thanks

• Lorem

It’s difficult to reliably compare results, since there are few data points (elections) and the models are tweaked over time. Perhaps the models could be tested on primary results, but predicting primaries is different from predicting presidential elections.

In addition, by election day, the models tend to converge to being more-or-less the same. I would personally prefer a comparison based on earlier predictions, but making one is rather tricky.

With all of that being said, I recall there was a reasonable analysis of the final predictions/results in 2012:
https://rationality.org/2012/11/09/was-nate-silver-the-most-accurate-2012-election-pundit/

• Joseph

Congratulations, Professor Wang! I just saw you mentioned in a BBC article. Your fame is now officially international!

http://www.bbc.com/news/election-us-2016-36947558

• But why is the *meta*-margin so stable? I assume there simply haven’t been many state-level polls of late, but I can’t figure out what feed you use for that. :/ (I assume it is a heavily parsed version of the Pollster feed in the left-hand sidebar, but it’d be nice if it were visible somewhere on the site.) Thanks so much for all the hard work!

State polling really started to appear last night and this morning, after about a 2-3 week nap.

Included in there is a Civitas poll of NC, which, as those of us here in NC know, is to never be trusted on either side. Once they had Clinton up huge. Now they have Trump winning — which wouldn’t be overly shocking, if they didn’t have 30%+ of the black vote going for him…yeah that is NOT happening. But we get lovely Civitas news here, a lot, in this state.

As predicted, a huge (20%) jump in the MM at noon today.
I also found it interesting to see an even bigger swing in the Senate-race MM.

• Also Obama net approval and House generic, over the last few days.

• Michael Hahn

Sam: Can you explain the big jump in MM when the overall electoral map that you show did NOT change. It still shows IA, NH and FL tied, and no other states flipped as far as I can tell. Is the map behind the calculations in the banner? Or (more likely) am I missing something fundamental about the process?? :))

• Dunno, probably some margins got larger. The Pennsylvania and New Hampshire polls may have moved the median for Clinton. Therefore a larger overall shift is needed to create an electoral tie.

• Froggy

Michael, yesterday’s moves in the MM were driven by Pennsylvania and Florida. First PA moved from Clinton +3% to Clinton +9%, which is pretty significant because it’s difficult for Trump to get an EV majority without PA, so putting it out of contention forces him to pick up a couple of other states that he otherwise wouldn’t need.

Then later in the day (after your comment) FL moved from a tie to Clinton +4%, which is an even bigger event, since it’s almost impossible for Trump to win without winning FL.

• Michael Hahn

Thanks, Froggy. That makes sense!! The Pennsy shift would not have registered on the map, and as you say, Florida came in later. It will be interesting to see if this trend holds up.

• Arman Barsamian

Sam, do you have an easy list (somewhere on this site) that shows the input polls used by PEC? I’m curious about state level polls and their coverage and frequency.

Thanks to you and PEC for your rigorous approach!

• Ash

This is an unrelated comment. I noticed the meta-margin changed, but the number of polls decreased by one (from 90 to 89). There were a number of new state polls released on Tuesday night/Wednesday morning. Do you drop state polls after a certain number of days?

• JRau

Just a quick question, could you point me to the state by state probabilities? The link in the sidebar sends me to an empty page. Thanks.

• check it now

• Ken

Sam, FYI the presidential race link for Florida on the right side bar (currently shows tied) goes to a 404 page at HuffPost.

• Rob in CT

FYI, 404 for Oregon as well.

• Thanks. Those links are not how we scrape data, but it is useful to know what is broken so we can fix it.

• Lee

Yikes, maybe it’s an outlier but McClatchy has Hillary up fifteen:

http://www.mcclatchydc.com/news/politics-government/election/article93763582.html

• This week’s outlier is next week’s median.

• Michael Levinsohn

Is there data of an effect such that good polls bring in other voters who seemingly want to back a “winner?” It seems that one reason Trump cited polls so often was to accomplish this effect, but is it statistically real?

• Bob Flood

Sam, a couple of years ago I encouraged you to use “surprise covariances” in your probability calculations. Is that built in now?

• Deb

So what do history and the math tell us about how quickly a convention bounce normally evaporates?