Princeton Election Consortium

A first draft of electoral history. Since 2004

Senate Democrats are outperforming expectations

August 28th, 2014, 9:42am by Sam Wang


[Note: this is a work in progress. I'm basically seeking comment as I develop a November predictive model. Please give your feedback... -Sam]

I’ve been asked why the PEC Senate poll snapshot is more favorable to Democrats than forecasts you’ll find elsewhere: NYT’s The Upshot, Washington Post’s The Monkey Cage, ESPN’s FiveThirtyEight, and Daily Kos. All of these organizations show a higher probability of a Republican takeover than today’s PEC snapshot, which favors the Democrats with a 70% probability.

Today I will show that in most cases, added assumptions (i.e. special sauce) have led the media organizations to different win probabilities – which I currently believe are wrong. I’ll then outline the subtle but important implications for a November prediction.

If you want to get caught up on the major approaches to Senate models, start by reading my POLITICO piece. There I categorized models as “Fundamentals-based (Type 1)” and “Polls-based (Type 2)”. The major media organizations (NYT, WaPo, 538) have all gone with a hybrid Type 1/Type 2 approach, i.e. they combine opinion polls with prior conditions like incumbency, candidate experience, funding, and the generic Congressional ballot to arrive at their win probabilities. What does that look like?

The first data column is the current PEC poll median. The next two columns show what a polls-only win probability looks like. Finally, the last three columns show the media organizations’ win probabilities. All probabilities are shaded according to who is favored, the Democrat (blue) or the Republican (red). “sum6” is the sum of probabilities (converted to seats) for six key races: AK, AR, CO, IA, LA, and NC.
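
As a rough illustration of how a polls-only win probability and a “sum6”-style seat count can be computed, here is a minimal sketch in Python. The margins and standard errors below are hypothetical placeholders, and this is not PEC's actual code.

```python
# Sketch of a polls-only win probability and a "sum6"-style expected seat
# count. The margins and standard errors are hypothetical placeholders.
from scipy.stats import norm

def win_prob(median_margin, sem):
    # P(Democrat wins), treating the poll median as normally distributed
    # around the true margin with the estimated standard error of the median.
    return norm.cdf(median_margin / sem)

# Hypothetical Dem-minus-Rep poll medians and SEMs, in percentage points.
key_races = {"AK": (2.0, 2.5), "AR": (1.0, 2.5), "CO": (2.5, 2.5),
             "IA": (0.5, 2.5), "LA": (1.0, 2.5), "NC": (3.0, 2.5)}

probs = {st: win_prob(m, s) for st, (m, s) in key_races.items()}
sum6 = sum(probs.values())  # probabilities converted to expected seats
print({st: round(p, 2) for st, p in probs.items()}, round(sum6, 2))
```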

Let me make some general points.

Senate Democrats are doing surprisingly well. Across the board, Democratic candidates in the nine states above are doing better in the polls-only estimate than the mainstream media models would predict. This is particularly true for Alaska, Arkansas, and North Carolina. In these three states, Democrats are outperforming the expectations of the data pundits (The Upshot’s Leo, Nate Silver, Harry Enten, John Sides, etc.). Why is that, and will it last?

“Fundamentals” pull probabilities away from the present. For PEC and Daily Kos, the win probability is closely linked to the poll margin. The Daily Kos model was created by Drew Linzer, of Votamatic fame. Both are based on polls alone.

The mainstream media organizations are a different story. They show a general tendency to be more favorable to Republicans. For Alaska (AK), Arkansas (AR), and North Carolina (NC), the discrepancy between PEC/DKos and NYT/WaPo/538 is rather large. Where PEC shows an average of 4.02 out of 6 key seats going Democratic, those organizations show 2.75 to 3.16 seats. This key difference, 0.86 to 1.27 seats, is enough to account for the fact that PEC’s Democratic-control probability is 70%, while theirs is between 32% and 42%.

Longtime readers of PEC will not be surprised to know that I think the media organizations are making a mistake. It is nearly Labor Day. By now, we have tons of polling data. Even the stalest poll is a more direct measurement of opinion than an indirect fundamentals-based measure. I demonstrated this point in 2012, when I used polls only to forecast the Presidency and all close Senate races. That year I made no errors in Senate seats, including Montana (Jon Tester) and North Dakota (Heidi Heitkamp), which FiveThirtyEight got wrong.

In 2014, these forecasting differences matter quite a lot. This year’s Senate race is harder to forecast than any election the other forecasters have ever had to call. To be frank, 2008 and 2012 were easy. My own experience is guided by the 2004 Presidential race, which was as close as this year’s Senate campaign. In 2004, I formed the view that the correct approach is to use polls only, if at all possible.

The present is more sharply focused than the future. DailyKos, the Upshot, and FiveThirtyEight have win probabilities closer to 50% than PEC’s in 18 out of 27 cases. This mostly reflects the fact that they are trying to predict November races on an individual basis. Generally, this drags their total expectations toward randomness. It also helps explain why their expected number of wins in key races is closer to 3.0 than PEC’s 4.02.
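
A toy calculation makes the point concrete (the probabilities below are invented): hedging each race toward 50% pulls the expected seat total toward 3.0 out of 6.

```python
# Toy demonstration: shrinking each per-race probability toward 0.5 pulls
# the expected seat total toward half the number of races. Numbers invented.
snapshot_probs = [0.75, 0.70, 0.65, 0.80, 0.60, 0.55]             # polls-only style
forecast_probs = [0.5 + 0.5 * (p - 0.5) for p in snapshot_probs]  # hedged toward 50%

print(round(sum(snapshot_probs), 2))  # ~4.05 expected seats
print(round(sum(forecast_probs), 2))  # ~3.5 expected seats, closer to the coin-flip 3.0
```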

The exception is The Monkey Cage, which shows higher certainty than PEC in 6 out of 9 races – but sometimes in the opposite direction. This suggests that their model must be heavily weighted toward fundamentals. That is very brave of them.

Can a prediction be made from polls alone? This is the question I am currently wrestling with. Using the 2004-2012 campaigns as a guide, I suggest that the Senate campaign’s future ups and downs can be gleaned from looking at what’s already happened since June 2014:

The PEC polling snapshot has mostly favored Democrats. Starting from June 1st, Democrats have led for 61 days and Republicans for 26 days, a 70-30 split. During that period, the Senate Meta-Margin has been D+0.24±0.57%. Assuming that the June-August pattern applies to the future, I can use this Meta-Margin, and the t-distribution with 3 d.f., to predict the future, including the possibility of black-swan events. The result is that the November Senate win probability for the Democrats (i.e. probability that they will control 50 or more seats) is 65%.
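
A minimal sketch of that last step, using scipy; it reproduces the arithmetic described above, not the full PEC pipeline.

```python
# Probability that the November Meta-Margin lands on the Democratic side,
# treating long-term drift as t-distributed with 3 degrees of freedom
# (heavy tails allow for black-swan events). Numbers are from the text above.
from scipy.stats import t

meta_margin = 0.24   # June-August average Meta-Margin, D+0.24%
sigma = 0.57         # spread of the Meta-Margin over the same period, in %

p_dem_control = t.cdf(meta_margin / sigma, df=3)
print(round(p_dem_control, 2))   # about 0.65
```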

Finally…I note that this is all a work in progress. I’m using PEC as a sandbox for kicking around ideas. With that, I invite reader reactions.

Tags: 2014 Election · Senate

84 Comments so far

  • Jack Farrell

    Voter turn-out in off year elections has been a bigger problem as a fundamental for Democrats than the 6th year curse. Recall how well the Democrats did in the 1998 election. GOTV will determine whether your prediction comes true.

  • Karl Hudnut

    I would like to draw attention to Electoral-vote.com, another polls-only site. EV uses a different, yet clearly stated, set of rules for what polls it includes in its calculation than the rules you, PEC, use. But it is polls only. So I think it is worth mentioning.

    I won’t go on about how I favor PEC, DKos, EV approach (oops I just did :-).

    • Sam Wang

      Of course! electoral-vote.com (Andrew Tanenbaum) is an old-school poll aggregator, right up there with RealClearPolitics. They don’t compound probabilities, nor do they calculate individual state win probabilities. I like what they do a lot.

      Also, don’t forget HuffPollster, another great organization that opens their data up, making it possible for hobbyists to participate.

  • 538 Refugee

    So the basic idea here is if you want to know what people think and how they will vote the best way to find out is to ask them? What a strange concept. I remember Silver chiding one prominent pollster that had a “house lean” who would always come back to the norm right before the election so he could claim accuracy. How is that different than phasing out fundamentals (your preconceived concept of how people will vote) as the election gets closer?

    • Mr. Universe

      You’re thinking of Rasmussen. They did have a house lean. Nate was actually able to demonstrate it at one point.

    • Christian Schmidt

      Concerning pollsters’ house effects, it would help if all pollsters would clearly state their methods (and any changes), like members of the British Polling Council do in the UK – see ukpollingreport.co.uk for background.

  • Sam P

    A couple of unordered thoughts, not wholly related:

    Polling gets a bit confused between measuring snapshots of popular sentiment, and predicting future results. The specific question wording plays a role, but how the sample is weighted is the most important thing here. I’m amazed at how much guesstimating, massaging and ‘unskewing’ US pollsters do when it comes to modelling turnout, as opposed to trusting what people say they will do.

    If we accept some sort of ‘Efficient Polls Hypothesis’, we’d expect polling to make, on average, equal and opposite errors compared to actual results (i.e., all information known at the time is already ‘priced in’ to the polling numbers and carries no predictive value).

    Here in the UK at least, though, there’s been a historical tendency for the party in government to recover as an election approaches. A lot of people are making predictions for the 2015 election on that basis, and it’s going to be exciting (in the most nerdy way possible) to see if the models stand the experimental test.

    I’m not sure I buy the EPH, especially considering that the EMH doesn’t really work in practice, but the data is so noisy and there’s so much danger of over-fitting I’m not sure I trust the information-adding approaches either.

    • Sam P

      A few things I forgot w.r.t. the EPH: environmental factors are likely priced in (the economy, relative party strength, etc), but what about fundraising or foot-in-mouth disease? If a candidate has a huge war chest in September, should we expect the advertising blitz to move polls in their direction? If a candidate is just Some Guy, should we expect them to make more poll-moving gaffes than a candidate who’s won elections before?

  • Joseph

    Lately, I’ve invested a goodly chunk in a “volatile” stock, so for me your chart looks very familiar. I would characterize this election as being highly volatile. But that may end up being a good thing for Democrats. Consider that the economy is improving, Obamacare is getting better play, and while there’s no shortage of wars and rumors of wars, we aren’t physically taking part in them. Meanwhile, Republican gerrymandering is backfiring on them when it comes to the Senate, since it disenfranchises voters who now have a chance to punish those who disenfranchised them.

    In such a volatile environment, polls may indeed be the only tool of substance that can show a general trend. Presently, the trend is back toward the mean you spoke of. Time will tell if that mean is flat or trending upward for the Democrats.

    • Sam Wang

      Largely agree. At some level, the question is where the “natural” mean is. My view is that this year’s historical pattern of polls can tell us that.

  • Amitabh Lath

    All the analyses that use non-poll information do the following: They study the correlation of some variable X with election results (X can be GDP, unemployment, consumer confidence, superbowl winner, phase of the moon….)

    Then OF COURSE the X variable (now known as secret sauce) has to go in. It would be silly not to make use of such a beautiful correlation, no?

    Needless to say, correlation does not mean there is any extra information in the secret sauce that the polls have not already identified.

    In order for this variable X to add information, people being polled would have to be ignorant of this information when formulating their answer to the pollster, but somehow still taking the effect of it into account when voting.

    • Sam Wang

      I haven’t published it yet, but in my hands, the generic Congressional and Obama net disapproval only account for, at most, 12% of the variation in the Senate snapshot. Do they add more predictive information than noise? I rather doubt it.
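
For the curious, here is a sketch of how a variance-explained number like that would be computed. The three series below are placeholders, not the actual Senate snapshot, generic-ballot, or approval data.

```python
# Sketch of the variance-explained calculation: regress the daily Senate
# snapshot on the two "fundamentals" series and report R^2. The arrays are
# placeholders, not the actual PEC or polling time series.
import numpy as np

snapshot = np.array([0.1, 0.3, 0.2, 0.5, 0.4, 0.3, 0.6, 0.2])          # Meta-Margin, %
generic_ballot = np.array([-2.0, -1.5, -1.8, -1.0, -1.2, -1.4, -0.8, -1.6])
obama_net = np.array([-12.0, -11.0, -13.0, -10.0, -11.0, -12.0, -9.0, -13.0])

X = np.column_stack([np.ones_like(snapshot), generic_ballot, obama_net])
beta, *_ = np.linalg.lstsq(X, snapshot, rcond=None)
fitted = X @ beta
r_squared = 1 - np.sum((snapshot - fitted) ** 2) / np.sum((snapshot - snapshot.mean()) ** 2)
print(round(r_squared, 2))   # fraction of snapshot variance explained
```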

  • Steve K

    The stats are beyond me at this point so a general comment ….

    At some point in time, for completeness if nothing else, it would be interesting for you to address the differences between the three pure poll predictions and of course why you think your method is superior. Heck; doing this might even clarify your thinking some.

    • Kevin

      Why would Sam waste his time with BS externalities that don’t actually show a direct impact on candidate polling?

      Yes, more people now realize that the ’08 crash resulted in a permanent economic change rather than a temporary one. That doesn’t have any direct causal relationship to polling that Vox has shown, or even hinted at.

      The Vox story is one of those “fundamentals” that pundits use to avoid actual numerical analysis.

  • Gabe

    Interesting take. One would expect an incumbent to outperform the President in the state they represent. However, a model that relies solely on polls seems flawed. Especially one relying on polls before Labor Day. Consider, and I am not advocating that this will be a wave election, in 2010 at this point Portman trailed in Ohio, Johnson in Wisconsin, Ayotte barely led in NH. In short, things changed very, very quickly. So a model that dismisses everything but polls, which can be highly flawed no matter how well they are done, seems unrealistic.

    • Craigo

      Portman took the lead in June and never relinquished it. Johnson took the lead in July and never relinquished it. Ayotte never trailed in a single public poll that I can find. Those are, to say the least, bad examples.

      More broadly, the idea that fundamentals should EVER be favored over actual polling when the latter is available is pretty bizarre. When the data conflicts with the theory, you don’t throw out the data.

  • David Goldstein

    Invoking 2004 is fascinating to me. While it was a presidential year, the key Senate fights did focus on red states where the Democrats held up well in the polling up until Election Day. Of course, the Dems got wiped out but they kept it close. I wonder if a similar dynamic is shaping up this year? That is, Democrats in red states doing well in the polls but who will ultimately be defeated as the fundamentals of the electorate take over. Any way to sensibly compare the 2004 Senate polling in key states with what is happening now in 2014?

    • Sam Wang

      See RCP. In cases where 2004 Election Eve polls pointed toward a leader (SD, CO, OK, NC, LA, SC, KY, AK), that leader did indeed win. The Florida median indicated a tie, and the final margin was R+1.2%. There could have been movement then – I think I’ll let you delve into that!

    • David Goldstein

      Thanks for the great response! Taking a look at the 2004 data, the Democratic nominees were indeed polling much better in late August/early September than their performance on Election Day. The polls weren’t predictive until October. In addition to the presidential vs. midterm dynamic, however, these races were for open seats (except KY and SD), as opposed to 2014 where it’s all incumbent Dems (except IA). While I agree with your Election Eve analysis, it does feel as if in 2004 polls at this point in the cycle gave Dems a false sense of strength. It’s extremely hard to see how a similar dynamic isn’t at work now in 2014, though that is an admittedly qualitative guess.

  • Amitabh Lath

    Another interesting tell is that some (most?) of the estimates that include non-polling “fundamentals” slowly remove the effect (lower the coefficients towards zero) as the election approaches.

    So by November they are polls-only. That should tell you they don’t really believe in their own secret sauce, it’s just there for cleverness.

    • Sam Wang

      If the idea is to phase out the fundamentals, my view is this: just rip off the damn Band-Aid at the start.

      Anyway, I agree with the idea that data-pundits want to think they have a role to play. Otherwise it’s just a bunch of automated scripts, which end up being right. Like mine. Where’s the fun in that?

    • Craigo

      Someone can correct me if I’m wrong, but I think some models (Linzer?) use fundamentals as a Bayesian prior, which is totally defensible and perhaps even preferable where polling data is thin.

    • Sam Wang

      You are wrong. If you read the documentation at DKos, you will find that it is based on polls alone. I have confirmed this in direct correspondence with Linzer.

    • Amitabh Lath

      The “fundamentals” crowd is not selling an election prediction as much as their own special expertise that will get them invited to Meet The Press and Wolf Blitzer. They can talk about what fishermen in a specific parish of Louisiana are thinking.

      Admitting that a half page of MATLAB could do as well if not better would be like a New York sharp admitting he palmed the marble before he started moving the shells around.

      You have tenure. At Princeton. Of course you can point this out freely.

    • Craigo

      My mistake, Sam. I was thinking of Votamatic, not the dKos model.

    • shma

      Craigo, you are thinking of Linzer’s Votamatic model (used in 2012), which is different than his DKos model. Votamatic definitely used fundamentals as a Bayesian prior

      http://votamatic.org/how-it-works/

    • Drew Linzer

      My presidential model uses fundamentals as a starting point, then updates from the polls. The senate and gubernatorial models I’m running for 2014 are polls only.

      There’s a reason for this. Using the fundamentals for presidential races isn’t a gimmick; it fills in information where polling is thin, and stabilizes the state-level forecasts on a day-to-day basis from random fluctuations in (sparse) polling data. There’s an argument in political science that the polls increase in accuracy as Election Day gets closer. Months before the election, people haven’t really made up their minds yet, so the fundamentals models have information to give. But the only reason this sort of hybrid modeling works at all is because the historical presidential models are pretty decent. Not great, but good enough.

      By contrast, the historical models for senate and gubernatorial races aren’t nearly as reliable. They rely on data that’s less comparable across states and years, use specifications that aren’t as well-tested, and are predicting outcomes that are arguably subject to much more noise. That’s why for 2014 I waited until mid-August to start calculating trendlines and predictions, and by this point, a polls-only approach suits me fine. You’re welcome to see exactly how we’re doing it here: http://www.dailykos.com/poll-explorer/how-it-works
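
A schematic of the hybrid idea described above, as a conjugate normal-normal update with invented numbers (not Linzer's actual model): the fundamentals prior carries weight when polling is sparse and fades as polls accumulate.

```python
# Schematic of a fundamentals-prior-plus-polls update (conjugate normal).
# All numbers are hypothetical; this is not the DKos or Votamatic model.
def posterior(prior_mean, prior_sd, poll_mean, poll_se):
    w_prior = 1.0 / prior_sd**2   # precision of the fundamentals-based prior
    w_poll = 1.0 / poll_se**2     # precision of the polling average
    mean = (w_prior * prior_mean + w_poll * poll_mean) / (w_prior + w_poll)
    sd = (w_prior + w_poll) ** -0.5
    return mean, sd

# Early in the race: sparse polls (large SE), so the prior pulls the estimate.
print(posterior(prior_mean=-2.0, prior_sd=3.0, poll_mean=1.0, poll_se=4.0))
# Late in the race: dense polls (small SE), so the prior barely matters.
print(posterior(prior_mean=-2.0, prior_sd=3.0, poll_mean=1.0, poll_se=1.0))
```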

    • Some Body

      Question to Drew (and anybody else who cares answering): How come presidential models are more reliable than senate/gubernatorial ones?

      The number of data points for presidential elections is vastly smaller, which implies models should be less reliable, not more reliable (and to the extent they show a good record—that might well be accidental).

    • Sam Wang

      No, but Presidential races are composed of 50 contests, and polls at least are very well validated. In Senate races, polls are also validated.

      But Senate candidates are all different, and the validity of Senate fundamentals-based models is a new question. Also, one would have to carefully quantify how much signal they add – and how much noise. To me, it seems simpler to just squeeze Senate polls dry of information. That’s more accurate in the home stretch, and as far as I know, nobody has proved that the special sauce helps at earlier times in the year.

    • Some Body

      You’re right about the 50 states (and about me forgetting about it), but on some thought, I still see a bit of a puzzle there.

      First of all, we still have fewer data points (3 presidential contests for every 7 Senate + gubernatorial; and I’m wondering what would be the picture further down the ballot, by the way).

      Secondly, the results in the 50 states are more closely correlated. In a sense, given the realities of US politics, we can use a strong prior for presidential races (results remain the same as 4 years ago), but the same holds for the Senate and Governor too (incumbent always wins; open seats follow latest presidential result in the state).

      And the greater variety of personalities and what not should actually mean a good model for Senate and gubernatorial elections should be more robust (and conversely, that an apparently-good model for presidential elections is actually less robust than it appears).

      What am I getting wrong here?

    • Drew Linzer

      There’s a few things. The main one is that political scientists have studied and experimented with presidential fundamentals models much more thoroughly, so we have a better idea of which (types of) predictor variables are most important, and how accurate the models should be expected to be in general.

      Beyond that, presidential elections are considered to be more consistent manifestations of the same underlying process from election to election. So while it’s true that there are more gubernatorial and senate elections, they’re less comparable to one another — different candidates, different state characteristics, different effects of predictor variables, maybe more noisy as well. Pretty much what Sam already said. Combine that with our poorer theoretical understanding of what drives senate/gubernatorial election outcomes, and chances are decent that your big senate/gubernatorial model is probably badly mis-specified and not going to make accurate predictions out of sample.

    • Kevin

      Amitabh, you’ve hit the nail on the head except for one minor sticking point:

      These electoral prognosticators need something compelling to say, or their story won’t generate the clickthrough revenue it needs to justify its own existence.

      So while you chalk it up to the appearance of cleverness, I chalk it up to cynical business decision.

      “Election likely to maintain status quo; Most key races still within margin of error” is not a sexy headline.

      Even Nate Silver appears to have sold out his principle of cautious understatement, opting instead to hand the reins over to Harry “Inflammatory headline with easily-debunked ‘data’” Enten.

      It’s the nature of media-as-business, and it’s why I came to PEC over other sources.

  • Michael Sweeney

    I’ve seen concern expressed about the quality of polling this cycle: An increase in the proportion of partisan and questionable-methodology polling, low response rates and a general lack of polling in key races like Alaska.

    Do you think the “fundamentals” models may be trying to compensate for what they see as a weak polling data set and if so is that compensation reasonable?

    • Sam Wang

      Fundamentals are useful for missing-data problems. There is plenty of data for most races at present, except for Alaska maybe. By October there will be no shortage of polls.

    • 538 Refugee

      It would be interesting to know who decides to do what poll and when. I would expect that more uncertainty would lead to more polls, so that it would be a self-correcting process. The problem with fundamentals is that it is a 'pick and choose' smörgåsbord of options. Correlations of past events are fine, but when you get to Super Bowl winners and some random octopus opening one of two boxes being as accurate as some of the fundamentals, ya really gotta start wondering.

    • Craigo

      Most fundamentals models have a poor track record compared to poll only models for the reason you cite. Either past success is a coincidence due to small sample size, or the high correlation variables that are isolated are actually dependent on a third variable that goes untracked.

      The old 538 had a very good rundown on this which I can’t find at the moment, which makes one wonder why Silver still insists on including fundamentals in his modeling.

  • Richard Newman

    At some point, solely considering polling results subsumes the sampling algorithm that the poller used to try to compensate for how representative their samples are (landlines, demographics). Isn’t that just another way of introducing bias in the conclusions?

    On the other hand, using data from disparate sources would tend to center on a mean that probably is indicative. If I recall, EV did try to measure (probably in 2012) how far from the mean each poll was, to clarify the distribution curve that aggregating poll data would represent.

    So you could use poll data only, but should present the distribution represented by polls relative to each other so you can determine the confidence of your prediction. For instance, a 70% likelihood with a 40% confidence (or other rep. of std. dev.) of the 70% being correct is different than 50% with an 80% confidence one has measured well.

    Personally, I think claiming “fundamentals” is like claiming a business decision is “strategic”: it’s what you say when the numbers don’t make sense. It’s a fudge factor for rationalizing the conclusion you’ve already made.

    Just my 2 cents (but with fundamentals, might be worth a nickel).

    • Sam Wang

      We already do what you suggest. The variation in polls is used as part of calculation of win probabilities for each state. This has been central to the PEC method since 2004.

    • Richard Newman

      Then you’ve probably done what you can. Any model where you introduce your own suppositions (no matter how carefully judged) is going to skew the results further from the collected data, perhaps usefully, perhaps not. Only empirical testing of your forecast against results will bear that out. However, since the empirical results in this problem domain are non-reproducible due to constantly evolving exogenous factors (same problem as in testing economic and market models), you’d more be reacting to previous factors, not pending ones (i.e., a general fighting the last war).

      Personally, I think you leave it as you have it and don’t try to compensate. Let the pollsters do that as they will. Just use your aggregation methodology to wash out bias between them as you already do and make visible any variance as you can.

  • AThornton

    Most “secret sauces” are worthless due to the Dunning-Kruger Effect and the Known/Unknown matrix. Mathematically it’s trying to assign a Probability to “Duh?”

    Other “secret sauces” are as secret as a thunderstorm: incumbents usually win, voter participation will (more-or-less) be the average of the three previous equivalent (off-year/presidential) elections, national voter turn-out will be less than 42%, percentage of white voters to total voters will drop approximately 1%, & etc. & etc., all of which will be reflected in the polling. So. They’re worthless too … or, more accurately, pointless.

    The one thing that may be of interest in 2014 is the year the extreme GOP House seat gerrymandering starts going sour. It may be some of the seats they conjured into existence will flip. Or not. Again, polling aggregation will detect it.

    Thus, eschewing mathiological abductive obfuscation seems, to me, to be the best plan. Plus it’s easy. (Which is rather nice, actually.)

    • Matt McIrvin

      To justify these adjustments, one would really have to identify some fundamentals-based difference that affects poll results but not election results, or vice versa.

      And people who spitball political predictions love these theories: the Bradley Effect, the Shy Tory Effect, the supposed gap between polling and referendum votes on gay marriage, etc.

      Some of these have even been real at various times in history, or so it seems to me. These days, when it comes to election-eve polling, there’s little evidence for any of them. (Exit polls, on the other hand, can have significant systematic errors… though the public perception is the other way around, that they’re the most accurate ones.)

    • Sam Wang

      Well put – totally agree. It’s not just that the parameter should add information that is *not* in polls. In addition (perhaps redundant), it should also not add a larger amount of uncertainty.

    • Phoenix Woman

      “The one thing that may be of interest in 2014 is the year the extreme GOP House seat gerrymandering starts going sour.”

      Yup. Gerrymandering stops working once you lose enough voter base; after a certain point, it hurts you because you literally don’t have enough base to be stretched all over the place while you try to contain your opponents to a few small congressional districts.

      That’s the problem Texas Republicans are facing, even with Tom DeLay’s fiendish (and illegal) fancy voting software doing its worst.

    • BruceMcF

      I find it more likely that the evolution of extreme gerrymanders into dummymanders takes a bit longer, and with mid-terms still more favorable to R’s than Presidential election years, the evolution since 2012 is more likely to just be offsetting some of the mid-term / Presidential swing …

      … so I expect it’s more likely to really show up in 2016, unless swamped by a severe shock, and the continued evolution in 2018 to “confirm” the 2016 result.

  • ArcticStones

    A key question, in my opinion, is whether Democrats can mobilize a get-out-the-vote campaign that would be worthy of a Presidential election year.

    If so, then I think surprisingly many of those close races will end with a Blue victor.

    • xian

      I agree. Occasionally I’ve caught a whiff of a below-radar Dem GOTV effort that is more like a presidential-year approach. We’ll see.

  • Barry. Rubinowitz

    You may be right about AK, but I think your confidence level there is a bit high. I am a little leery of pre-primary polling there, as I suspect less enthusiasm for Sullivan among those voting against him. Unfortunately, the only post-primary poll is Rasmussen, which is, well, Rasmussen, but had Sullivan +2. I would need to see fresher polls, most notably PPP, which has been steady for Begich, before I would be so exuberant about that seat.
    I’m also curious why your super six includes KY but not AR.

  • Billy

    I think the central point to make is that in the past few months, polling has on average remained favorable to Democrats. On average the polls have continuously shown that they will not lose enough seats for Republicans to gain the majority, and that in itself is a pretty good predictor.

    If it favored Republican takeover, then you’d see the needle jump the other way and stay there. But the polls never reflected that so I’m scratching my head at sites who are >60% confident about a Republican win. You could argue about “skewed polling” but 2012 was a good datapoint against that.

    • 538 Refugee

      I’ve just been looking at some polling and most I looked at are already using “Likely Voters”.

  • Anthony Greene

    Clearly, those pollsters who rely on fundamentals or a hybrid of fundamentals and polls do so because voter turnout is not well predicted by polls alone, but it is approximated by prior voter behavior, viz. fundamentals.

    The hybrid model therefore seems sensible to me, if only we know how to combine fundamentals with polls. Currently, fundamentals are used to just give a bigger average (for lack of a better descriptor). That is, poll data and fundamentals are simply combined, and the amount of each is determined by guesswork.

    But what if fundamentals were used only to approximate voter turnout (e.g., 43% of those who favor a Democrat will make it to the polls in this district), and the rest relied entirely on polls?

    That would allow current preference to be the predictor while prior voter behavior would be used not for preference but only to model voter diligence.

    Of course there are a lot of ways that polls, fundamentals and other factors can be modeled; the idea is to implement something that has face validity, makes as few assumptions as possible, but also includes that polls differ in some systematic way from the way votes are cast. It seems to me that the principal way that votes differ from polls is in turnout. Voter preference is more a matter of the candidates and the current political climate; the role of the continuity of regional political preferences is already counted in the poll, it doesn’t need to be double counted in the fundamental too. Yes, some districts vote consistently red or consistently blue, but that is captured more accurately in the poll than as an historic fundamental.
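
A toy version of that proposal, with invented numbers, just to make the mechanics concrete: polls supply the preference shares, and "fundamentals" supply only the differential turnout rates.

```python
# Toy version of the proposal above: polls for preference, historical
# turnout rates for diligence. All numbers are invented.
dem_support, rep_support = 0.50, 0.48   # poll shares among registered voters
dem_turnout, rep_turnout = 0.43, 0.50   # turnout rates modeled from history

dem_votes = dem_support * dem_turnout
rep_votes = rep_support * rep_turnout
margin = (dem_votes - rep_votes) / (dem_votes + rep_votes)
print(round(100 * margin, 1))   # projected two-party margin among actual voters, in %
```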

  • Dean

    “Clearly, those pollsters who rely on fundamentals or a hybrid of fundamentals and polls do so because voter turnout is not well predicted by polls alone, but it is approximated by prior voter behavior, viz. fundamentals.”

    There was one major poll prediction failure earlier this year in Illinois. In the GOP gubernatorial primary, candidate Bruce Rauner led one of his opponents, Kirk Dillard, by something like 17 points.

    During the primary campaign, Rauner viciously attacked public sector unions. The unions endorsed Rauner’s primary opponent, Illinois State Sen. Kirk Dillard. Many union members crossed party lines and voted for Dillard. Rauner almost lost the primary and won by something like 2 points. Incidentally, his anti-union rhetoric ceased immediately.

    The Rauner primary is probably an anomaly in regards to poll accuracy. There is possibly no bombshell that will happen in November to skew the polling averages.

    Though the focus is on the Senate, the Rauner-Quinn gubernatorial battle in Illinois might be worth watching. Right now Rauner is up by single digits on average, but one recent poll came out showing his lead decreased, possibly due to negative ads showing him as a super-rich tax dodger.

    • ArcticStones

      What you write about the Illinois gubernatorial race is extremely interesting. I know most union members are expected to vote Democratic – but not all, and I do wonder whether conservative union members will punish Rauner in the general election.

      I think this Illinois race will tighten and not be out of reach for the Democratic Party.

  • 538 Refugee

    Since we are already getting likely voter poll results it might be more productive, if not more fun, to predict the spin on the ‘unexpected/dramatic’ electoral shift as those who aggregate polls using fundamentals begin to phase them out. I’m trying to work out the psychic octopus effect myself. Something more mundane like the improving economy or military action we didn’t take might be more mainstream.

    • ArcticStones

      How about the more than 9 million people who bought private insurance thanks to Obamacare? How about the 20 million additional Americans who now have health care coverage (including Medicaid expansion)?

      How about statistically translating that into “number of lives that will be saved” and “number of healthcare-cost-related bankruptcies that will be prevented”?

      And how about statistically translating blocked Medicaid expansion in GOP-controlled states into “number of people that will die”?

      Those are pretty concrete. Voters will be able to relate – despite the propaganda that tries to paint anyone receiving even a dime in federal subsidy as a “moocher”.

  • Catalyzer

    “Power of your vote” is fun, but I seem to remember that in a previous election you gave “power of your contribution dollar” or something to that effect, which is obviously a lot more useful. Any plans for that this year? (Btw, as a fellow Garden Stater, I am saddened to see the jerseyvote lose its status as standard currency, but I guess this year it would be a bit like a 1920s German mark.)

  • Ebenezer Scrooge

    It might be worth noting that the polls themselves rely heavily on their own “fundamentals”–corrections for peculiar cross-tabs, randomization errors, questions calibrated to determine turnout, and the like. Most of them seem to be pretty good at it.

    It seems to me that non-pollster “fundamentals” are only worth something if a.) the polls are scant, b.) the fundamentals predict changes of sentiment over time or c.) you think that your fundamentals somehow correct for errors in the pollsters’ fundamentals (mainly turnout). Obviously, the first two of these arguments become weaker as you get closer to election day.

  • Amitabh Lath

    Thank you Drew Linzer for the comment above. If I understand your point, fundamentals allow you to predict how undecideds will break.

    This works in presidential elections because (probably) undecideds can be modeled with a small set of parameters (GDP, unemployment, etc). And undecideds are similar across states (at least across the very few states that matter in presidential elections).

    But in Senate (and even worse, House) races, undecideds differ from state to state and thus fundamentals are of limited/no use in figuring out how they will break.

    • Sam Wang

      Could be that undecideds differ, though another way to put that is that the candidates differ. My question is whether attempts to model this are useful in close races, the only places that matter.

    • Amitabh Lath

      Yes. Voters expect the president to control things like the economy, unemployment, wars. One could say those things are fundamental to the presidency.

      But what exactly do we expect a Senator or Congressman to do? Some become known for a specific issue like Sam Nunn was on defense, but one wonders if that was a fundamental concern of Georgia voters choosing their Senator.

    • Drew Linzer

      Not just how undecideds will break, but how people who tell pollsters something today might change their minds in the future. Still, the logic for the rest of what you’re saying is the same, I think.

    • Amitabh Lath

      Is that a significant effect, people telling pollsters one thing (or rather, being correctly assigned by pollsters as having one opinion) and then changing their minds come November?

      By significant I mean big numbers, and more going one way than another.

      This could throw off any poll-based estimate. I can see it happening for a real foot-in-mouth catastrophe, but those tend to be rare.

  • Eric Lewis

    Hey Sam,

    I see today’s snapshot graph says Dems have a 72% chance of retaining the senate. (Yayyyy!)
    In your article yesterday, you say the snapshot was at 70%, but then at the end of the piece, you say Dem chances are 65%.
    Is the 65% your overall prediction, as opposed to a snapshot? Or am I reading stuff wrong?
    Anyway, big fan here – thanks for all you do. Xo

    • Sam Wang

      For now, that topline up there is today’s conditions. Long-term prediction will go up there shortly…now that we have chewed it over in this thread. Thank you for the nudge!

  • Lojo

    The secret sauce seems to have the same problem as the secret sauce financial markets used and believed was infallible prior to 2008. Essentially, in financial markets, the belief was that you could accurately predict future behavior (prices) based on correlations derived from past behavior. This was the case with Black-Scholes and other supposedly irrefutable financial laws. Then 2008 hit and it turned out – as Burton Malkiel argued a long time ago – that markets are random and you can't really use past behavior to accurately predict future behavior (stock prices), and accuracy is important in successful investing and campaigning. But this hasn't stopped many financial economists from just ignoring this basic problem – namely, that portfolio theory is not science, it is statistics.

    My guess is that political pundits (and journalists), who are generally scientifically and numerically illiterate, are still trading on the idea that – just like in financial markets – we can use data to predict behavior (in this case voting instead of pricing). So they love the secret sauce and don't like polls, because polls seem old-fashioned. And I guess the idea of “fundamentals” in politics makes them feel like Warren Buffett and Charlie Munger. The problem is that presidential approval is not the equivalent of “cash flow” for a local Senate candidate, and the guys declaring the fundamentals have not proven that they are able (within a truly tight range) to predict future behavior. It turns out that the real fundamentals are the number of people who have declared the intention to go and vote for a certain candidate (which you measure via polling).

    Like Sam said, the use of the secret sauce makes sense when there is a lack of current-behavior information (pricing or voting preferences). It's going to have a wider probabilistic outcome, but it is at least directionally predictive. It is – however – not determinative.

    What I’m curious about is if this fallacy and resulting news coverage declaring – in this case – that the Democrats are going to lose the election ultimately impacts the outcome (a self-fulfilling prophecy). But, I’d guess we’d see that in the polls.

    Anyway, thanks for sticking with this Sam. It helps keep me sane when watching elections.

  • Art Brown

    “DailyKos, the Upshot, and FiveThirtyEight have win probabilities closer to 50% than PEC’s in 18 out of 27 cases. This mostly reflects the fact that they are trying to predict November races on an individual basis.”

    I wonder if another factor is your desensitization to outlier polls by using medians rather than averages. Besides affecting the mid-point, this practice can also substantially reduce the “SEM” used in your calcs, increasing the “z-score” and so pushing probabilities away from 50%.
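
A quick illustration of that point with invented polls (PEC's exact estimator may differ): one outlier barely moves the median or a MAD-based standard error, but it inflates the mean-based SEM and pulls the z-score toward zero.

```python
# Invented polls with one outlier: median + MAD-based SEM vs. mean + SD-based
# SEM, and the resulting z-scores. (Not necessarily PEC's exact estimator.)
import numpy as np

polls = np.array([2.0, 3.0, 2.5, 3.5, -6.0])   # D-R margins, one outlier

mean, sem_mean = polls.mean(), polls.std(ddof=1) / np.sqrt(len(polls))
median = np.median(polls)
mad = np.median(np.abs(polls - median))
sem_median = 1.4826 * mad / np.sqrt(len(polls))  # MAD rescaled to ~sigma

print(round(mean / sem_mean, 2), round(median / sem_median, 2))  # ~0.57 vs ~7.54
```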

    Appreciate your work!

  • Amitabh Lath

    Huffington Post/Pollster.com just put up their own Senate probability list.
    http://www.huffingtonpost.com/2014/08/29/senate-polls-2014_n_5731552.html?utm_hp_ref=@pollster

    Basically similar to your values. But they do a lot more massaging of the data than you do.

    • Art Brown

      Hmm, if I’ve got this right (34 Dem hold-overs + PEC’s 7 safe seats + 3 added safe seats (MN, OR, VA)), Huffpost’s contest probabilities work out to a 57% probability of Republican control. Not sure why they didn’t post this number, since specifying the individual contest probabilities (and assuming independence) is all you need. (I think.)
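
For anyone who wants to reproduce that kind of calculation, here is a sketch of combining independent per-race probabilities into a control probability via the seat-count distribution (the "generating function" mentioned later in this thread). The probabilities and safe-seat baseline are hypothetical, not HuffPost's numbers.

```python
# Sketch: convolve independent per-race win probabilities into a seat-count
# distribution, then sum the tail for chamber control. The probabilities and
# the safe-seat baseline are hypothetical, not HuffPost's or PEC's numbers.
import numpy as np

safe_dem_seats = 44                       # holdovers plus safe seats (hypothetical)
contested = [0.6, 0.55, 0.5, 0.45, 0.65, 0.4, 0.7, 0.35, 0.5]   # P(D win) per race

dist = np.array([1.0])                    # start: P(0 contested wins) = 1
for p in contested:
    dist = np.convolve(dist, [1 - p, p])  # fold one more race into the distribution

seats = safe_dem_seats + np.arange(len(dist))
p_dem_control = dist[seats >= 50].sum()   # Democrats keep control at 50+ seats
print(round(p_dem_control, 2))
```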

    • Sam Wang

      It might be a bad idea for them to do that. Such a calculation requires precise estimation of each individual race, with very little margin for error. Their close-race probabilities cluster near 50%, which seems too uninformative to give a good overall estimate.

      I myself am apprehensive about reporting the estimate as a probability, because people tend to interpret probabilities with too much certainty. Most sites seem to low-ball the probabilities, maybe to reduce misinterpretation. But when it comes to predicting the overall outcome, low-balling the confidence in a win probability can lead to a significant tilt toward the trailing candidate’s party (e.g. they get credit for 0.4 seat when they should get 0.1 seat). Under current conditions, in which Democrats hold narrow leads in many races, this starts to add up.

      For this reason I prefer the Meta-Margin as a measure of the race. The Meta-Margin tells us how much all races would have to swing, across the board, to create a perfect 50-50 probability split for control by either side. Today, the Senate Meta-Margin is D+0.7%. I think you would agree that your side being ahead (or behind) by 0.7% is not a cause for celebration (or throwing in the towel).
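
A minimal sketch of the Meta-Margin idea (not PEC's actual code): search for the uniform swing, applied to every race, that would make control an exact 50-50 proposition. The race margins, standard errors, and safe-seat count below are hypothetical.

```python
# Sketch of a Meta-Margin calculation: bisect on a uniform swing applied to
# all race margins until Democratic control is a coin flip. All inputs are
# hypothetical, and this is not PEC's actual code.
import numpy as np
from scipy.stats import norm

margins = np.array([2.0, 1.5, 0.5, -1.0, 3.0, -2.5, 1.0])   # D-R margins, in %
sems = np.full_like(margins, 2.5)                           # standard errors, in %
safe_dem_seats = 46                                         # hypothetical baseline

def p_control(swing):
    probs = norm.cdf((margins + swing) / sems)              # per-race P(D win)
    dist = np.array([1.0])
    for p in probs:
        dist = np.convolve(dist, [1 - p, p])                # seat-count distribution
    return dist[safe_dem_seats + np.arange(len(dist)) >= 50].sum()

lo, hi = -10.0, 10.0                                        # bisection on the swing
for _ in range(50):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if p_control(mid) < 0.5 else (lo, mid)

meta_margin = -0.5 * (lo + hi)   # sign flipped so a Democratic lead reads as D+
print(round(meta_margin, 2))
```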

      In my own calculations, an increase in average expected seats from 49.5 to 50.0 is linked with a change in D/I control probability from 50% to 70%. The difference between PEC (control probability 65%) and Nate Silver’s unofficial estimate (control probability 40%) would be explained by a difference in our estimates of 0.6 seat. In other words, a net error of 1 seat would be enough to account for the difference between our predictions. Recall that he made two Senate prediction errors in 2012 (MT and ND), while I made none. A small mistake, as I claim the use of fundamentals to be, would have a large impact on the final race.

      Another way to look at it is that one serious gaffe by one candidate can be game-changing. Or one candidate dropping out in Kansas. That’s why I wrote about that.

    • Art Brown

      Unfortunately, the other folk don’t provide enough info to work out a meta-margin. (I think.) So if I want to compare them to you, I think it’s either EV or probability. Since Huffpost put out their individual race probabilities, it seems fair game (and fun) to work out their implication. I look forward to their “official” number.

    • Art Brown

      For the record, now that Huffington Post is publishing all their individual race probabilities, not just selected ones, the “generating function” probability swings slightly Democratic: 52% on 8/28, and 56% on 9/3, matching Huffington’s reported Monte Carlo result.

  • fred

    I think it will all come down to grassroots get out the vote initiatives just like it did in 2012. So far I don’t see a lot of that going on but sounds like Democrats are planning a major effort of some sort. Seeing as how voter turnout in non-Presidential elections is their biggest problem, you would think they would be throwing everything they had at that.

  • Crygdyllyn

    A Fundamentals approach is based on the assumption of an unchanging political environment.
    For instance, they assume a certain turnout pattern. However, both parties had been dominated by moderates since WW2, so it did not matter as much who won. Now that the parties are polarized, you had better vote or the enemy will win. This changes the dynamic.
    I think turnout will be higher than predicted for this reason.
    Fundamentals based on patterns going back to WW2 and before don't, I think, apply very well today.
    A polls only approach avoids this problem.

  • Bob Grundfest

    Thanks for the master class in theoretical polling. I’ve learned more reading the comments than in all the classes I’ve taken over the years.

    Just looked at the HuffPost/Pollster chart. All of the leaders are presently under 50%. Does your model, Sam, take into account when a leader crosses that threshold? Does it matter?

    Have a great weekend all.

  • Howard Pepper

    I’m curious about this: Do you put any stock in what money either party has going into the last 2 months; similarly whether one candidate (or party overall) has any significant edge in campaign or get-out-the-vote strategy and resources? Maybe these things are too difficult to assess with any predictive value?

    • Sam Wang

      Not “too difficult.” That kind of analysis is the point of the piece you just read. The factors you list (except for GOTV maybe) are called “fundamentals.” Fundamentals seem to fail to add information, except in cases where polls are totally unavailable.

      In cases where polls are available, polls have a near-perfect record. My current hypothesis, which we will test by comparing PEC with NYT/WaPo/538, is that the use of fundamentals is a step backward.

      In the case of GOTV, there might be a small effect, in the vicinity of 1%. It would be worth pursuing as an activist (i.e. you should do it). But it would be a mistake to add it to the polls. Such advance “unskewing” of polls usually does not turn out to be accurate.

  • Art Brown

    Dr. Linzer claims his how-it-works page shows “exactly how we’re doing it”, but as far as I can see he provides much less detail than PEC. In particular, I can’t make out why D-Kos’s polls-only method is yielding a 0.8-seat delta EV from PEC’s polls-only method.

  • Joe Gawlica

    From my layman perspective, it would seem that voter enthusiasm would factor heavily. A “likely voter” is not a certain voter. Enthusiasm and motivation may be very important in a person’s turnout. Does this model factor in voter enthusiasm?

    • Sam Wang

      The general assumption is that as a community, professional pollsters know what they are doing. Thus we take likely-voter data when it's available, registered-voter data in other cases. This approach works remarkably well – better than many more complex methods.

      My general stance is that sniffing over the details of polls adds little or no value, so long as aggregation is done properly – using median-based statistics and so on.

  • John Kostas

    Sam. It looks like many of the pollsters are spinning a GOP Senate win this November, just like in 2012 when they were completely wrong. I would like you to go on CNN, FOX, and USA Today and straighten them out. There is no doubt that this November the Dems will once again win the Senate, and the mainstream pollsters will once again be wrong big time, just like so many times before.
