Princeton Election Consortium

Innovations in democracy since 2004


Sharpening the Presidential Forecast – summary of comments

August 24th, 2016, 10:00am by Sam Wang

Comments on my sharpening of the Presidential forecast were helpful. The outcome is that I will keep the key new assumption, which is to cap future standard deviation in the Meta-Margin at 3.0%. Since Hillary Clinton’s Meta-Margin (effective popular lead, measured through Electoral College mechanisms) is 6.3%, that means that she is 2.1 standard deviations ahead. That is a lot of standard deviations.
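The headline arithmetic is just a ratio; a minimal sketch in code, using the numbers from the post (the normal tail shown is for scale only, since the actual forecast uses a longer-tailed distribution):

```python
import math

# Minimal sketch of the headline arithmetic from this post.
meta_margin = 6.3  # Clinton's effective popular lead, in percentage points
future_sd = 3.0    # assumed cap on the future SD of the Meta-Margin

z = meta_margin / future_sd
print(round(z, 1))  # 2.1 standard deviations

# For scale only: the one-sided tail beyond z under a plain normal
# assumption (the forecast itself uses a longer-tailed distribution).
tail = 0.5 * math.erfc(z / math.sqrt(2))
print(round(tail, 3))  # roughly 0.018
```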

A summary of the discussion follows. I will start with the key graph, which I produced in response to Joel. Like all my analysis of the 1952-2012 elections, this was made using Wlezien and Erikson’s data. Prof. Wlezien has helpfully provided the original dataset on his website.

Is past performance this year a predictor of future dynamics? Joel wanted to know about “in-sample variance”: is variance in the earlier part of a campaign predictive of what happens in the closing months? That would tell us whether it is kosher for me to use this year’s Meta-Margin history to estimate volatility from now until Election Day.

Jeremiah’s reaction says it well: “I think of all the discussions this is the critical chart to consider….I think the way to look at this chart is to ask oneself what scenarios would point to upsetting the prediction? Even with all of the data the maximum SD for 1-90 days before the election is 4 percent and the average is much less than this. A SD assumption of 3 percent would therefore seem conservative. Also, there are no data points in the upper left quadrant of the chart and there is only one data point where the SD got much larger closer to the election and that was still less than 3 percent.”

Bottom line: there’s no good justification for assuming that future variation will be greater than 3 percent. So I will keep it there.

In retrospect, for purposes of prediction, the graph above would have been enough. However, I think my point that polarization has come with entrenchment of opinion is still useful.

Is 2016 different? This leads to Mike’s general concern about my classifying 2016’s data as being similar to that of 1996-2012. “I think a lot of people share an intuition that there is something about this race that should discourage us from grouping it with the other post-1996 elections in terms of volatility. It seems like it would be worthy to look for numerical support for that intuition, if only to see what the strongest argument is against the low-variability assumption.”

Certainly I see the point of this objection. Donald Trump’s candidacy is so obviously freakish that surely 2016 is different…right? Actually, not really, from a data standpoint. The strong state-by-state correlation between Trump 2016 and Romney 2012 suggests that not all that much has changed, except that Trump is quite weak within his own party.

I see Trump as a culmination of a 20-year trend in the priorities and culture of the Republican Party. His tactics are familiar to the party base. For example, the questioning of legitimacy: of Obama’s birthplace, and of other Republicans, and even the November election itself…the list goes on. And yet he always had at least 40% of Republican primary voters on his side. I offer the following synthesis of data (2016 has been really stable) and events (crazy Trump): the U.S. is suffering from a near-fatal case of polarization, and Trump is a consequence.

The Gary Johnson factor. Several readers, for example NHM, raised the concern that this year, there are a lot of Gary Johnson supporters. Various hypothetical scenarios were laid out for how that could affect the race.

Here is a general way to think about Gary Johnson, who is currently polling at about 8%. Also, undecided plus alternative-party votes add up to 20.5%. The Clinton+Trump total is 79.5%, compared with 91.0% Obama+Romney on the same date in 2012. Because third-party votes are especially fluid in the home stretch, that could lead to more uncertainty in 2016 than in 2012. This is especially important because many of those voters are Republicans who might break toward Trump.

The plausible outcomes for Gary Johnson supporters range from all of them going for Trump (i.e., 8% toward him) to perhaps a 5%-3% split toward Clinton (i.e., net movement of 2% toward her). The approximate SD of such a range of possibilities is one-fourth of the total span. So SD_3rd_party = 10%/4 = 2.5%. That’s still within the range of the 3% assumption.
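The span-to-SD rule of thumb above can be written out directly (endpoints taken straight from the text):

```python
# Rule of thumb used above: the SD of a bounded range of outcomes is
# roughly one-fourth of its span. Endpoints are in percentage points.
toward_trump = 8.0     # all Johnson voters break to Trump
toward_clinton = -2.0  # 5%-3% split, net 2 points toward Clinton

span = toward_trump - toward_clinton  # 10 points
sd_third_party = span / 4
print(sd_third_party)  # 2.5, inside the 3% assumption
```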


There’s more good discussion. I encourage you to read it.

Tags: 2016 Election · President

74 Comments

  • Tony

    I do think the chance for the race to shift wildly is higher than in most elections because of Trump, but it only has that potential in an anti-Trump way. Trump is so erratic that he may end up purposefully killing his own campaign, which would obviously shift the election massively in Clinton’s favor.

    For example, there are stories of the GOP being on the verge of dropping their support for him, which might piss him off enough to tell his supporters not to vote to spite the Republican party. He may also realize that he has no chance of winning and do something like that to protect his ego (i.e. tell his supporters that it’s going to be rigged against him anyway so there’s no point in voting).

  • 538 Refugee

    If Johnson is the default protest vote of the Republican party he might actually gain some votes down the stretch at the expense of Trump.

    • Mark F.

      Johnson seems to be doing really well with under 30s who might normally vote for Clinton. So he’s drawing many votes from both Clinton and Trump, I think.

  • NHM

    If you ignore the x axis and look at just the y axis, it looks like the highest SD of ANY election (90 days out) is somewhere around 3.8%. If I were PEC king, I would make a running calculation of the SD of the Meta-Margin for ALL elections (90 days away, 89 days away, 88, 87, 86, 85… etc.), and for the day in question use the highest value calculated for ANY previous election (at 90 days out it was somewhere around 3.8%).
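A quick sketch of this running-max idea, with invented Meta-Margin histories standing in for the real 1952-2012 data:

```python
import statistics

# For each past election, compute the SD of the margin over the final
# stretch, then adopt the largest SD seen in ANY election as the current
# uncertainty. All numbers below are made up for illustration.
final_stretch_margins = {
    2004: [1.0, 1.5, 0.5, 2.0, 1.2],
    2008: [5.0, 6.5, 7.0, 6.0, 7.5],
    2012: [2.0, 2.5, 3.0, 2.2, 2.8],
}

per_election_sd = {y: statistics.stdev(m) for y, m in final_stretch_margins.items()}
worst_case_sd = max(per_election_sd.values())
print(round(worst_case_sd, 2))
```

Sam’s reply explains why adopting the maximum in this way overstates the uncertainty.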

    • Sam Wang

      Technically, that approach leads to an overestimate of uncertainty, and in particular makes the 1-sigma range too wide. I have already addressed this issue by a method described by Amit Lath: I use a t-distribution with 3 d.f., for which a >2-sigma outcome happens 7% of the time — 3 times as often as a >2-sigma outcome in a normal distribution. If we really wanted to get obsessive-compulsive here, we could sum all such t-distributions over the various past values of SD from 1952 to 2012, and see what that looks like.

      Probably I will do that because I can’t help myself.
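The t-distribution point above can be checked without any stats library, since the t CDF with 3 d.f. has a closed form; the mixture at the end is a sketch of the "sum over past SDs" idea, with invented SD values:

```python
import math

def t3_tail(x):
    """One-sided tail P(T > x) for Student's t with 3 degrees of freedom
    (closed-form CDF, so no scipy required)."""
    u = x / math.sqrt(3)
    cdf = 0.5 + (u / (1 + u * u) + math.atan(u)) / math.pi
    return 1 - cdf

def normal_tail(x):
    """One-sided tail P(Z > x) for a standard normal."""
    return 0.5 * math.erfc(x / math.sqrt(2))

print(round(t3_tail(2.0), 3))                     # ~0.07: >2-sigma happens 7% of the time
print(round(t3_tail(2.0) / normal_tail(2.0), 1))  # about 3x the normal rate

# The "sum over past SDs" idea: average the t-tails implied by each past
# election's SD. These SD values are invented for illustration.
lead = 6.3
past_sds = [2.5, 1.8, 3.0, 1.2]
mixture_tail = sum(t3_tail(lead / sd) for sd in past_sds) / len(past_sds)
print(round(mixture_tail, 3))
```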

    • Amitabh Lath

      NHM, picking the highest discrepancy and assigning it as your current uncertainty is waaay too conservative. Your manuscript would be returned by any respectable peer-reviewed journal. Some might consider it an attempt at masking bad data.

      An acceptable uncertainty metric might be to find the range and take the halfway point.

      Also, the t-distribution that Sam uses to account for black swans would probably come in for some comments. In favor of Herr Gauss: sure, one can discuss possible cataclysmic events, but is there any data showing they are not adequately described by a normal distribution? The Central Limit Theorem is a powerful beast.

  • Olav Grinde

    There is, as you say, a possibility that the 20.5% third-party/undecided/independent voters may break for Trump – even though there currently seems to be growing Republican voter despondency over Trump being the GOP Presidential nominee.

    However, I think there is another factor that may shift the election result more strongly toward Hillary Clinton than expected: GOTV!

    The Clinton Campaign is building on Obama’s hugely successful get-out-the-vote efforts. In contrast, repeated reports indicate that the Trump Campaign’s ground game ranges between non-existent and mediocre.

    Question: To what extent do the likely-voter models of current polls, or your own calculations, factor in these huge differences in GOTV efforts?

    • Jon Wiesman

      I’m curious about this aspect as well. I think I remember that in 2012 Obama *very slightly* overperformed the polls partly because the Obama turnout machine was superior. The GOP’s ORCA GOTV software had some problems on election day.

      It wasn’t the reason Romney lost obviously, but it probably had some effect on final numbers, right?

      Everything we’ve heard about the Trump campaign is that they are woefully unprepared for the election, with tales of campaign offices being run by 12-year olds and only one office for the entire state of Florida. Obviously the GOP will fill in some of that missing infrastructure but with the Trump campaign being so dysfunctional, would it be surprising if Clinton outperforms the polls on election day? And if so, by how much?

    • Sophia

      Olav, I agree with you. I worked locally on both Obama campaigns, and I cannot count how many people did not know if they were registered, where to vote, or even what day to vote! Trump’s own daughter, who is college educated, could not even vote for her own father in the primaries and had no idea until it was too late. Imagine an uneducated rural voter wanting to vote for Trump only to find out that they are not registered. I can imagine that person saying that the election was rigged.

      Obama did something during his campaigns that I think people will look back on: he got so many young people and African Americans to register to vote, and they registered as Democrats. Then he got them to go out and vote. That is a more complicated task than just getting registered people to go out and vote. Trump’s campaign is really not up to this task. These are the practical parts of a good campaign, and yes, I agree Hillary Clinton has the benefit of building on what Obama organized.

    • NH Guy

      I agree too, Olav. Like Sophia, I have also worked on GOTV for many election cycles. In my case, it has been as an attorney in the “voter protection” area–keeping watch over polling places to ensure interested voters aren’t turned away. In a swing state like NH, this boils down to protecting students in 4 college towns from spurious Republican challenges or simple dirty tricks (like misdirecting confused 19 year-olds). In 2008 and ’12 the Obama campaigns were incredibly well-organized at every level and I witnessed the results. One example: because we had met with the Town Clerk ahead of time, we knew we needed clipboards on hand to allow students to fill out the required residency affidavit while standing in line; that’s what it took to prevent bottlenecks and long lines. I expect the same with HRC.

    • Andrew

      I’ve forgotten where I read or heard this, but supposedly there’s a debate among political scientists about how important things like running TV ads and having lots of staff in swing states actually are. Well, 2016 is shaping up to be a great test of that, since one candidate has a huge ground game and the other not so much. So we’ll find out what happens with GOTV and likely voters in 70-some-odd days.

      There’s also the idea out there among some Democrats that they have a natural majority of voters, but their problem is getting that majority to show up. If true, it suggests a different strategy from what DNC candidates have historically pursued, which consists of pivoting to the center in order to attract moderate voters. The new strategy would focus instead on getting already left-leaning people to vote: more policies that get the left excited, rather than a pivot to the center. My understanding is that Clinton is trying the latter strategy: despite her appeals to moderate Republicans, the domestic DNC platform is pretty progressive, and she has a huge ground game.

      Supporting a big GOTV effect is the hypothesis that convention bounces exist not because people are changing their minds, but because people are more enthusiastic about responding to pollsters after a convention. So the polls reflect some convolution of voter preference and enthusiasm.

      One thing that may pull Clinton’s margin down from what the polls plus GOTV suggest: since Trump has a bad reputation in the general media, people polled may say they are undecided because they don’t want to admit they agree with Trump. (E.g., if you ask people whether they go to church, you get much higher reported attendance than if you ask what they did last Sunday and check whether they listed church. It’s the same reason: because of their cultural norms, people don’t want to admit to a stranger that they didn’t go to church.) I have no idea how big that effect might actually be, however; I’m just speculating. :)

    • Josh

      To respond in general: GOTV efforts will not have much, if any, effect in terms of national polling numbers, simply because GOTV efforts are highly localized and will only potentially make a major difference in the handful of states to which those resources are allocated.

      In talking with Obama ’08 and ’12 folks, my takeaway is that GOTV probably got them an extra 1-2% in some key states–good enough to win places like VA and NC in 2008 and FL in 2012. But nationally, when there are something like 140 million voters, getting those extra 50,000 or 100,000 people to the polls isn’t going to show up in polling.
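Josh’s dilution argument in rough numbers, using the figures cited in his comment:

```python
# Even a sizable state-level GOTV bump nearly vanishes in the national total.
national_voters = 140_000_000
extra_gotv_voters = 100_000  # upper end of the per-state figure cited above

national_shift_pct = 100 * extra_gotv_voters / national_voters
print(round(national_shift_pct, 2))  # about 0.07 percentage points
```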

  • Robert P Wolff

    Is there a chance that a significant share of those who choose Johnson in a poll are Republicans who cannot vote for either major candidate and just stay home, thereby disproportionately skewing the down ballot races Democratic? Many of them are probably voters who do not vote in off year elections anyway.

    • Sam Wang

      I think this is an interesting possibility. Such a turnout gap would account for the similar trajectory for generic Congressional ballot and Clinton-v-Trump margin. See my comment.

      If you think about it…for a Republican who is unenthusiastic about the top of the ticket, 2016 is like a midterm election. And midterm turnout is lower than Presidential-year turnout.

  • Marty Schiffenbauer

    2016 may be different not so much because of Trump but because of Hillary. She’s the first woman running as a major party candidate, she provokes inexplicable antipathy among a large segment of voters and there’s always the potential of another Clinton “scandal” before November.

    • Amitabh Lath

      She also has hardcore support among female voters, especially older ones. Every election has seen various groups go hard for and against the nominees. The question becomes: is there a reason the polls would not be picking up whatever factor you are discussing?

  • Michael K

    How will you know when to revisit the variance assumptions again in the future, except after the fact (when a new trend emerges showing variance tightening even further, or widening back towards pre-1996 levels once again)? At which point it would then be apparent that the win probability predictions for the recent prior cycles must have been either too low or too high…

    This seems like a fundamental paradox of prediction making…

    • Sam Wang

      Probably in the future I would assume 1952-2016 dynamics, making the curve smooth rather than introducing such a conspicuous discontinuity that attracted such a fuss these last few days. Estimating the curve is somewhat difficult because the Wlezien/Erikson data use national polls, which contain systematic errors that are not present in the Meta-Analysis.

      Such an approach would still lead, after T-minus-90-days, to an SD of 2.5%, the median of the points in today’s post. I’d set that to 3%, both to be conservative and to allow for a systematic error in the home stretch, which is up to 0.5%. Finally, the distribution would be a bit long-tailed – the exact shape T.B.D.

    • Ketan

      This might be too ambitious, but instead of relying entirely on historical data, is it possible to use something in this year’s polls?

      For example, the pct of undecided voters, pct of the polls which are likely voters, pct of switchers (dems voting rep and vice-versa) might be useful proxies for “how much change is left.”

      I say this because the timing of “when the voters stop changing their minds” doesn’t feel like a law of nature i.e. conventions/debates/scandals change from year-to-year. (Plus if something happens where polls start saying more people are undecided, it would be great for the uncertainty to increase.)

  • David Cutler

    There is undoubtedly a large difference between pre-1994 and post-1994, and I agree with you that it is very, very tempting to ascribe that distinction to the political changes associated with Newt Gingrich, etc. However, the distinction (and one really should look to our colleagues in the professional polling community) might be purely a technical one.

    As both of us are surely old enough to remember, 1994 was before the release of the Pentium II, about the time of the Linux 1.0 kernel, before Windows 95, and about the time of the release of Windows NT. OCR was at best in its infancy, and America Online was trying to convince people to dial up. Kermit was a popular transfer protocol, and HTTP didn’t exist.

    Why did I say all that? The reason is, I am not clear to what extent it was even possible to implement cross-table based likely voter models in the pre-1994 era. Without a networked computer in front of you, can you even ask detailed questions about demographics and past voter history, record that data in a useful fashion (and pencil and paper is barely usable for this purpose) and incorporate that data into an LV model, all in a reasonable time frame at a reasonable cost?

    I don’t know any of this for a fact, but I would certainly be the opposite of surprised if the largest contributor to the lower variance in the post-1994 era was that pollsters got a lot better at dealing with their house effects, because it was suddenly a lot easier to make a decent likely voter model.

    • Sam Wang

      It is well known that in good circumstances, house effects are not huge compared with a single poll’s sampling error. However, they are larger than the sampling error of an aggregate of polls.

      The pre-1996 variation is not due to house effects, since it is measured as change over time. This change can be much larger than house effects. Prominent examples include 1964, 1980, and 1992.

    • G Washington

      I think the bigger question is simply one of measurement error vs. intrinsic scatter. Was the standard deviation larger because the races were inherently more volatile or is it because the individual polls were less accurate? One could look at how large the average “miss” was for a given poll in each year to have a measurement of the error associated with each poll and compare that to the overall scatter to determine the intrinsic scatter.
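G Washington’s decomposition, written out: if per-poll measurement error and true race volatility are independent, they add in quadrature, so the intrinsic scatter can be backed out. The input values here are invented for illustration:

```python
import math

# Independent errors add in quadrature:
#   total^2 = intrinsic^2 + measurement^2
# so intrinsic scatter can be recovered from the observed totals.
total_scatter = 4.0      # observed SD of polls in some pre-1996 year, in points
measurement_error = 3.0  # typical per-poll "miss" for that era (invented)

intrinsic = math.sqrt(total_scatter**2 - measurement_error**2)
print(round(intrinsic, 2))  # 2.65 points of genuine race movement
```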

    • Amitabh Lath

      David, we could do all that on our VAX/VMS (best OS ever). For most polls, N isn’t that large; all you need is some spreadsheet software, which even the old IBM VM systems had. Then in the early ’80s there was VisiCalc, etc.

    • Dan Jacobs

      David, I think you could start with the first two paragraphs of your post but go in a different direction:

      As computing technology went mainstream, starting in the mid-90s, it’s possible that the decision making process is more efficient or accelerated for many voters. News and opinions now reach people much faster than they did before. We may still have undecided voters in similar quantities, but the decided voters–in the context of polling–are now more rigid.

      If there is less flexibility, it would make sense that sigma would shrink over the years.

    • Philip Diehl

      Dan Jacobs: Similarly, the emergence of cable news, online news sites, and the 24 hour news cycle has “pushed” information about candidates back into the campaign calendar, leading many voters to align with a candidate earlier in the campaign. They would then be cemented into place by the self-selection of confirmatory news sources and the reinforcement of like-minded friends and family.

      I suppose I’ve simply described a couple of the forces driving higher polarization.

  • Ketan

    Upshot just did a Senate article. And they show PEC as saying it is a tossup. (53%)

    Makes no sense… and makes me doubt the rest of their model. Anyone know what’s up?

    • Matt McIrvin

      Maybe just a little out of date–PEC was showing something like that not long ago.

    • Sam Wang

      I wasn’t expecting them to go live so soon. That will be updated soon…

      …fixed now.

    • Ketan

      I think there might still be a problem or I’m misinterpreting the histogram. NYT is updated now to say “69%” for PEC but PEC’s current Senate histogram is exactly 86% chance of blue (looking at senate_histogram.csv and summing.)

      I don’t have a plausible guess where they get 69% from.
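For reference, the "chance of blue" figure in this comment is just a cumulative sum over a seat histogram like senate_histogram.csv; the histogram below is made up, chosen to land on the 86% mentioned above:

```python
# Probability of Democratic control = total probability mass over all
# outcomes of 50+ seats. These histogram values are invented.
histogram = {
    48: 0.04, 49: 0.10, 50: 0.22, 51: 0.30, 52: 0.20, 53: 0.10, 54: 0.04,
}

p_blue = sum(p for seats, p in histogram.items() if seats >= 50)
print(round(p_blue, 2))  # 0.86 with these made-up numbers
```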

    • Sam Wang

      Snapshot of today (our site) vs. November probability (deep in our files, not yet served up on the homepage).

      It is exactly analogous to the Presidential calculation. The probability of a Clinton victory “today” has been essentially 100% since May. It is usually the case that the calculation gives a definite answer about who would win the election on any given day.

    • Ketan

      Makes sense. Thanks for explaining.

  • LMB

    Oh. I just refreshed again, and in the last few minutes the Senate Meta Margin changed to +2.2% Democrat. So that’s fine.

    Do you have a feel for the correlation between the MM and the likely # of Dem. seats in the Senate? I.e., how high would the Senate Meta Margin have to be before we get to 52-48, 53-47? Or is that even a fair question? How predictive is the MM for the # of seats in the Senate?

    • Sam Wang

      It will tend to be “lumpier” than the EV calculation because there are fewer key races. Ballpark, 1.5-2.0% of Senate_MM per 1 seat change in Democratic+Independent caucus.

  • Stuart Levine

    Sam – Is there any way to compare the numbers, even episodically, to the 2012 numbers? By way of example, right now (August 24) the Bayesian probability of a Clinton win is at 96%. What was that number 4 years ago, and when did it rise to 96% in the 2012 election? The reason this is important is that, at some point, there may be a total collapse of Trump support, which might make this into a wave election. And that, as you’ve said before (at least I think you said that), could put the House into play.

  • Daniel

    Do you have any plans to write a book on your methods and insights re: political behavior and political forecasting?

    • Jay Sheckley

      I’m just another avid reader. That said, I’d bet he has no current plans for a PEC book, as he’s pressed for time on his neurology research, and many of his student assistants just graduated.
      Dr. Wang already has several acclaimed books out about how our minds work. And that has a surprising amount to do with politics.
      Want to know more about Sam Wang’s forecasting methods? There’s plenty of detailed info here on this site. Dig deeper. Enjoy.

  • Eric

    The key argument for stability to me is the lack of movement in the EV calculation. Despite the changes in the metamargin over the last few days, Clinton has remained at 341 EV. To me, that’s evidence of the polarization in the race. It’s hard to pick a reasonable scenario where Clinton tops 383 EV, or one where she drops lower than 300 EV. While an outlier scenario could happen, it would be just that.

  • Remi

    Re polarization: have we reached a point where it doesn’t much matter who is at the top of the ticket, that 45% will vote GOP, 45% will vote Dem and the rest is simply GOTV?

    Or do we still have a situation where the top of the ticket provides the winning push or drag?

    It doesn’t seem to be a healthy turn of events since we don’t have a parliamentary system. The checks and balances become roadblocks.

    • alurin

      Well, this year will certainly provide a test of that hypothesis.
      The parliamentary vs. presidential system is an orthogonal issue to that, however. A polarized electorate does not necessarily lead to legislative gridlock.
      Even having a president and Congress of opposing parties does not necessarily lead to gridlock; we had that many times throughout the 20th century without the current level of inaction. Our current predicament is the result of a calculated decision by congressional Republicans to abandon normal political compromise.

  • Some Body

    Just a note (which doesn’t really belong to this comment thread; Sam, feel free to delete/not publish): Among the recent updates on the 538 model there is a very large group of state polls from Ipsos (looks like a state-by-state breakdown of their national polling results, or something like that). These don’t seem to appear on the Pollster website (so, I suspect, also not in their feed), but would be very useful for the MM calculation. Maybe you should have a look. Or maybe Pollster will add them later, and then no rush…

    • Some Body

      More on those Ipsos/Reuters polls. Apparently, it’s indeed based on their tracking poll results, broken down to have a result for all 50 states, and they are going to update that regularly.

      Here’s the link (you probably want to use the “Show table” option, not the map, shown by default, if you want readable results):

      And here’s the accompanying “explainer” piece:

      Also, I saw 538 gave sample sizes for each state poll (and apparently ignored sample sizes under 100), which I didn’t find in the table on the Reuters website (but in Sam’s calculation a poll’s sample size does not affect the result anyway).

  • Michael J. McFadden

    You may have seen this already, but, if not, it might help in refining your model:

    It would appear that in most cases you are showing results much further from 50/50 than most of the other polls. Generally that would lead me to doubt your results, but it will be interesting to see how it works out in the long run!

    The Times’ changeover in the first week or so of August in the top Presidential race graph was dramatic: I don’t recall EVER seeing a change that strong in such a short period at such a level. It shows how volatile things are though. If both Hillary AND Trump have disasters in October we might actually see a surprise win by Johnson — unlikely maybe, but looking at that graph and extrapolating, it’d certainly be possible.

    – MJM, who’ll vote against anyone who wants to tax chocolate…

    • GM

      Is a win by Johnson ‘certainly possible’? I guess, in much the same way that an alien invasion is ‘certainly possible’. I think most reasonable people don’t spend much time thinking about either outside of popular entertainment and Elon Musk’s dinner parties.

    • Matt McIrvin

      Obama’s gains after Wall Street melted down in September 2008 were about on the same scale. What that had in common was that it was close on the heels of a Republican convention bounce that had brought the race to its closest point.

      And in both cases, the sudden change was restoring and then overshooting the norm that had prevailed earlier in the summer, so it wasn’t as if the race was entering an entirely astonishing new realm.

    • Matt McIrvin

      The site currently has links allowing you to go directly from today’s page to the page for the corresponding day in 2008 and 2012, and there’s also a nice page directly comparing its EV poll aggregates for every election going back to 2004.

      What emerges is that, going purely by state poll aggregation, this presidential election cycle is less of a nailbiter than any in the past 12 years. Clinton’s current standing doesn’t look any better than Obama’s final totals (in fact, coincidentally, the states given to Clinton today are exactly the Obama 2012 map). But if you compare it to how Obama was polling on this day four or eight years ago, she’s doing better.

      What it really looks like is a magnification of 2012: the volatility is higher, but it’s mostly because Clinton has higher highs and Trump lower lows.

  • bks

    Slight tangent:

    Probably the best estimate comes from a recently published piece by political scientists Ryan Enos and Anthony Fowler. They show that the effect of the 2012 presidential campaign on voter turnout was quite large, about 7-8 points overall.

    They arrive at this estimate by analyzing a sort of experiment: media markets that span state boundaries, such that part of the market falls in a battleground state and part doesn’t. Voters in one of those markets would be potentially exposed to the same amount of televised political advertising but different amounts of other campaign activity. In particular, you would expect that the battleground state voters would be far more likely to be contacted by campaign fieldworkers, who generally aren’t going to contact voters outside of battleground states.

  • Frog Leg


    I was wondering what your opinion was of Andrew Gelman’s note of caution in interpreting poll results, given here:

    • Sam Wang

      He sounds so confident. I am skeptical.

      A philosopher once told me he thought scientists were gullible, falling for a single study. So far, I have the same feeling about this meme.

    • Mike

      Frog Leg – I think the point of the Gelman papers is just that individual polls would be more accurate with proper party-ID post stratification. From his comments:

      “to not poststratify on party ID is to implicitly assume that the best estimate of party ID in the electorate is just . . . whatever the responses happen to be on your latest survey. It shouldn’t be hard to do better than that!”

      The papers find that partyID + demographics works better than post-stratifying on demographics alone. But I don’t think that would affect the PEC approach to poll averaging. If more pollsters took this approach, it would likely just reduce the volatility of individual polls and make the PEC averages even more stable.
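For concreteness, here is a toy version of the party-ID post-stratification being debated in this thread. All shares are invented, and choosing the "assumed electorate" is exactly the step Sam objects to below:

```python
# Reweight respondents so the sample's party mix matches an assumed
# electorate; each respondent's answers then get multiplied by
# weights[party] when tabulating results.
assumed_electorate = {"D": 0.38, "R": 0.32, "I": 0.30}  # the contested assumption
sample_share = {"D": 0.45, "R": 0.28, "I": 0.27}        # what the poll happened to get

weights = {p: assumed_electorate[p] / sample_share[p] for p in sample_share}
print({p: round(w, 2) for p, w in weights.items()})
# With these numbers, D responses are down-weighted and R/I up-weighted.
```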

    • Sam Wang

      Post-stratification by Party ID is a risky move. It imposes a Party-ID share on a sample that did not have it to begin with. This is, in part, why Rasmussen gives odd results. I am not convinced that this is a well-thought-through suggestion.

      Note that Party-ID is affected by other answers. For example, if you first say you are a Clinton supporter, and then are asked for Party ID later, the second answer is biased slightly toward saying you are a Democrat.

      Pollsters, don’t do it!!!

    • Frog Leg

      Mike, Andrew’s statements are stronger than this. He claims that the primary driver of poll movement is differential non-response, and he aims to correct for it via the post-stratification procedure. He seems to claim that this non-response pattern has a strong time component and cuts across all polls, so it would impact any averaging scheme.

    • Michael Coppola

      Unskewing by a different name. How’d that work out the last time?

    • Mike

      Ahh — I just skimmed the papers. Didn’t realize he was claiming it was the primary driver.

    • Jeremiah

I think Andrew Gelman is confused about how polling works. I am pretty sure that most polling agencies correct for party ID and demographics. They have usually built models for many variables and adjust the responses they get to reflect the population they are trying to estimate. It would not be that hard to assess the effect of party ID anyway and see whether it is really reflected in the responses. His example of a 7 percent swing toward Clinton after the convention could have been a real effect regardless of the constancy of party identification in the population at large. The ultimate proof, though, is the universal success of all the aggregation prediction sites over the last decade or so.

Note: Gelman’s complaint would not shed any light on the poor track record of mid-term election polling, because the major problem there is determining who will ultimately vote in the election, not party identification.

    • Sam Wang

      Others, correct me if I am wrong, but I believe it is unusual for pollsters to weight by Party ID.

      I agree that the bigger technical problem is identifying who will vote in an off-year election. The false-bounces effect is very trendy among polling nerds, though.

    • Jeremiah

      I think you are right they don’t weight for Party ID. Here is a very good article about reasons not to:

    • Sam Wang

      That is very good. Set in boldface.

    • Jeremiah

That article is from Pew Research, and it is truly excellent. They discuss why party affiliation/identification is hard to distinguish from what they are trying to measure, because it is not independent. They also go into why they report on both registered and likely voters, and how those models performed in past elections from 1996 through 2008.

    • Matt McIrvin

Pew’s detail that the LV screen was five points more favorable to Republicans than the RV screen (in elections up to 2008) is an interesting one; that’s a huge difference, much bigger than I thought.

      And, curiously, I don’t see any trace of it in Huffington Post’s 2016 data; if anything, Clinton does slightly better in the aggregate of LV polls.

      I can see a difference there in 2012 but nearly all of it was because of Rasmussen, which provided most of the polling with an LV screen early in the campaign.

    • Mike

I’m still trying to get a handle on whether everyone is talking about the same thing here. It seems the Pew article and most of the arguments in this thread are against simple party-ID weighting. But Gelman et al. are not arguing for that. They make a larger case for a hierarchical model that does well at predicting not only the overall vote share but also sub-groups (e.g., Hispanic college grads, liberal white women). It’s a fairly compelling case, considering how biased their data are: Xbox owners! The main weakness in my mind is that they only look at one election. But they find that taking party ID into account (note: not simple weighting) improves results. So I think any objection would have to address their specific findings.

      It’s believable that differential non-response could drive a substantial portion of volatility, so why not attempt to take that into account if you can do so in a principled way?

    • Jeremiah

Reading through that Gelman paper, we find that party ID is modeled from a one-time set of questions asked when respondents first join the panel. They are asked:

1. Their 2008 vote
2. Their party identification
3. Their ideology

The researchers normalize (using multilevel regression) to a party-ID benchmark from the 2008 election exit polls (and, of course, to demographics). They confirm that the party-ID question is a good predictor by comparing predictions over the whole panel to cross-sectional data from just the first-time responses (which include the party-ID question).

      Thus they conclude that pollsters could ask about Party ID and get good results by adjusting for it.
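The core idea can be illustrated with a toy calculation: hold the partisan composition fixed at a benchmark (such as exit-poll shares) and reweight each party's respondents accordingly, so that swings in who answers the phone don't masquerade as swings in opinion. All numbers below are invented; this is a sketch of the concept, not the paper's actual model.

```python
# Assumed benchmark party shares (invented; stand-in for 2008 exit polls).
benchmark = {"D": 0.39, "R": 0.32, "I": 0.29}

def adjusted_topline(support_by_party):
    """Candidate share after forcing the party mix to the benchmark."""
    return sum(benchmark[p] * support_by_party[p] for p in benchmark)

# A hypothetical wave with differential non-response: Democrats are
# over-represented relative to the benchmark.
observed_mix = {"D": 0.45, "R": 0.28, "I": 0.27}
support = {"D": 0.90, "R": 0.06, "I": 0.45}   # Clinton support within party

raw = sum(observed_mix[p] * support[p] for p in observed_mix)
adj = adjusted_topline(support)
```

In this invented example the raw topline is inflated by the Democratic-heavy sample, and the adjustment pulls it back toward what the benchmark mix implies; that is the sense in which the procedure damps response-driven "bounces."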

    • Jay Sheckley

Frog Leg: You’re trying to compare and contrast PEC to Gelman, nu? Perhaps this can help. Gelman says a few things about ignoring headlines, and about teaching rather than profiting from forecasting. These remind me positively of this site’s Sam Wang. But I’m leerier of AG’s stats. One of his key sentences says: “Remember, survey response rates are around 10%, whereas presidential election turnout is around 60%, so…”

Um, wait. First, we cannot predict turnout, and second, Pew puts 2012’s turnout at 53%… Some say 57%, but it could go lower in November. Bear in mind Sam has too much of what the crass call class to put down someone who _seems_ to do what Sam does. But nobody does what he does. This year the New York Times has finally noticed that!

And even so, I’m not sure these are the crucial figures. I’m no numeric Olympian; what I excel at is spotting unusually high proficiency. I could name-drop who I saw sparkle when, but nah. This site has been unparalleled since the day it began. Once again Sam has been on top of this election a year before anyone else had a clue. Because science.

      Maybe it’s because he’s a neurology researcher / lecturer: Sam is more grounded than anyone re: bias. Gelman is over-interested in himself and his notions. That’s understandable; physically, using the mind is the healthiest way to get high. But for a scientist, there are pitfalls.

Conversely, as a statistician, Dr. Wang [rhymes with “song”] fits the classical profile of the Greek half-god: he has the strength of ten men, because his heart is pure.

My advice, which you’re welcome to review after the fact: if you want the best information-to-noise ratio, stick with the Princeton Election Consortium. To come to this view on your own, read more of this site.

      BTW Trump can win. Nothing here says he can’t, just that according to all current signs –which at this point are highly predictive– a Trump win remains highly unlikely.

    • Olav Grinde

      @Jay Sheckley, I couldn’t agree more. At Professor Gelman’s website, I repeatedly challenged him to post his own election prediction, incorporating whatever predictive optimization he deems appropriate. Tellingly, he has failed to do so.

    • Dave Wink

Just tonally, I doubt the Gelman blog. The tone is: “Hey, friends, this is something those brainiacs haven’t thought of!” The impression is that a decent thought or two (which may or may not be valid) overrides true scientific rigor. If you want to argue the points, come to the grown-ups’ table. Otherwise, you are a blogger.

  • Bao

    Prof. Wang, I’m wondering if you could add a +/- for any changes to the MM or win probability, just so casual readers can see where the race is trending.

    • Josh

      Unless I’m misunderstanding you, the +/- at any time is represented by the gray area on either side of the MM.

    • Adam

I think I see what you are asking. What is hard for me to understand about the MM is which events changed it over a given day or so. For example, it dropped by 1.1 the other day, and I don’t know why. Was it a calculation change from the formula discussion, or a poll (or polls)?

That’s intriguing to me: seeing how individual poll results can move it.

  • George

I know this is a little late in a somewhat old thread, but I’ve been thinking about the volatility, or lack thereof, this year versus past years. Take Sam’s Meta-Margin uncertainty triangle and mentally or physically (as I did: I made two prints and cut the triangle out of one of them) slide it back to mid-July, just before the conventions. At almost any starting point along the ever-changing Meta-Margin, its subsequent track stays above the 95% line: sometimes the entire track (when started at low points), sometimes just its peaks (when started at high points). I wonder: (1) is this “fair” to do, i.e., does the triangle keep the same shape (the same acute angle at its left edge) over time? If so, what does that say about reduced volatility for this election, and if not, how and why does that shape change? Thanks.

  • daddyoyoq

In 1996, for example, in one of the last polls I could find from CNN/Gallup, Clinton was judged more dishonest than honest by 1 point, and trusted less than Dole to keep promises by 5 points. He beat Dole by 8.5% in the election. As for the right-track/wrong-track question, it is highly ambiguous: I and most of my friends would say wrong track because of the GOP Congress, while others might blame Obama. Is there a measurement that would separate out those two possibilities? Well, we have polls on that: Republicans in Congress are rated at 23% approval, Democrats 7% higher, and Obama is currently at +6% approval. In other words, all of the factors you mentioned seem to be confirmed by current head-to-head polling for Clinton versus Trump.

  • Michael Hahn

Hi Sam: I wonder if you would care to comment on what to my eyes is extraordinary stability in the median EV estimator and the Meta-Margin this year. I seem to recall far more volatility in both of these in past elections. And I wonder whether the larger number of undecideds this year is creating a false sense of where the election is headed.

  • David vun Kannon

    Since the Clinton EV prediction has been stable at 341 for a while now, I was wondering if you could do a post on what set of states that represents her winning.
    Thanks for all your great work!

  • DougJH

It seems to me that a more important tweak to your methodology right now would be scraping state polls from 538 rather than Huffington Post. HuffPo is missing *a lot* of polls. For instance, for the period August 1 to September 1, 538 includes 15 polls from Florida, HuffPo just 9. The average of the six polls HuffPo missed is +3.5% (a bit lower than the +3.9% overall HuffPo average for Clinton in Florida; a bit higher than 538’s adjusted average).

While it may not make such a huge difference for Florida, it does in places like Michigan, where the four missing polls average just +1% for Clinton while HuffPo’s overall Michigan average is +8%. This would make a significant difference in the polling average (and perhaps in your forecast). There doesn’t seem to be any rhyme or reason to most of HuffPo’s exclusions, either (other than the Ipsos state polls, which they are either unaware of or don’t like).

Sometimes they include PPP polls; sometimes they don’t. They included a Florida Atlantic U poll from January, but not the one from a week and a bit ago. This kind of inconsistency appears to be why HuffPo’s national and state averages are currently a bit higher for Clinton than everyone else’s.

It would be fine if there were a clear and defensible reason for including some polls and excluding many others; I don’t see that reason right now. This could significantly affect your model’s accuracy.

    • Sam Wang

      That switch is inadvisable. Broadly, PEC’s policy in the past has been to trust HuffPost’s judgment as to what constitutes a competently performed poll. Their policy has been to cast a broad net; I have some concerns about whether they are maintaining that policy after a turnover of management. However, they are closer to the professional polling industry than you or I. I am satisfied that this approach does not add undue bias and is well suited to my statistical tools.

      As an example of a survey excluded by HuffPost, Ipsos has a large national sample with lots of tiny state-level subsamples. These may not have the same accuracy or weighting that a state-specific poll would have. I am okay with a policy that excludes these.

If you are dissatisfied with HuffPollster’s data, they have a mechanism for readers to report missing or wrong data by email; see the right sidebar of this website for a link.

Leave a Comment