Princeton Election Consortium

A first draft of electoral history. Since 2004

The 2016 Senate Forecast

August 29th, 2016, 12:00pm by Sam Wang


Update, August 31: The prior for this model is based entirely on the history of past Senate polling trends: Presidential coattails this year, and “throw the President’s bums out” in midterm years. PEC offers, once again, a pundit-free prediction. The original version of this post is archived here.

Close readers of the Princeton Election Consortium know that we calculate not only a snapshot of current Senate conditions, but also predictions of final outcomes. Last week, Josh Katz at The New York Times’s The Upshot started publishing a comparison of models, including PEC’s. Today, I start featuring it in the banner above.

Today, according to our model, Democrats have a 77% chance of winning the Senate. Because the probability of Senate control is in the 20-80% midrange, it is currently an important place for both sides to put resources, Democrats through ActBlue and Republicans through the NRSC.

I will explain PEC’s Senate model. It uses the same math as the Presidential forecast, and consists of three steps:

  1. Taking a snapshot of current Senate conditions and calculating a Meta-Margin (MM);
  2. Projecting random drift in MM; and
  3. Filtering that projection through a separate prior to make a November prediction.

This probability calculation depends most strongly on one parameter, the overall movement in conditions between now and Election Day. This approach has the virtue of simplicity. In models of greater complexity, future change has to be estimated in multiple ways, which may excessively compound the uncertainty of the prediction. In the PEC approach, good estimation of a few parameters gives a prediction that is as confident as possible based on data. However, these few parameters must still be estimated accurately!

Step 3 above is new and represents a substantial improvement over the 2014 forecast. I will describe why that election’s lessons have led me to use an asymmetric prior. This is going to be technical; the most gory details, which can be skipped, are in italics.

In the left sidebar, you can see the current Senate snapshot. This is done using the same strategy as the Presidential snapshot:

  • In each state, take the 3 most recent pollsters or N days, whichever gives more data;
  • Take the median margin and estimated SEM to calculate a current win probability; and
  • Calculate the compounded distribution of all Senate races to get a histogram of all possible outcomes (i.e. 13 tracked races, or 2^13=8192 outcomes).

These three steps give a sharp picture of conditions today.
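
To make the compounding step concrete, here is a minimal sketch in MATLAB (the language of PEC's actual scripts). The margins, SEMs, and count of safe seats below are placeholders rather than current data, and the convolution is simply a compact way of enumerating the same outcome space described above.

    % Minimal sketch: per-race win probabilities -> histogram of Dem+Ind seats.
    % Margins (Dem minus Rep, in %) and SEMs are made-up placeholders.
    margins = [ 2.0  1.0 -2.0 -2.0 -4.0];     % median margin in each tracked race
    sems    = [ 3.0  3.0  3.0  3.0  3.0];     % estimated SEM of each median
    safeD   = 46;                             % D+I seats not in play (illustrative)

    normcdf_ = @(z) 0.5*(1 + erf(z/sqrt(2)));     % base-MATLAB normal CDF
    pWin = normcdf_(margins ./ sems);             % P(Democratic win) per race

    % Compound the races: convolve the per-race (lose, win) distributions.
    seatDist = 1;                                 % start with P(0 extra seats) = 1
    for k = 1:numel(pWin)
        seatDist = conv(seatDist, [1-pWin(k), pWin(k)]);
    end
    % seatDist(j) = P(j-1 of the tracked races go Democratic);
    % total Dem+Ind seats = safeD + (j-1).
    pControl = sum(seatDist((50 - safeD + 1):end));   % P(50 or more D+I seats)
    fprintf('P(Dem+Ind >= 50 seats) = %.2f\n', pControl);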

We also calculate a Meta-Margin, defined as how much all poll margins would have to shift to make Senate control a perfect tossup between Democrats-plus-Independents and Republicans. Because ties in a 50-50 Senate would most likely be broken by a Democratic Vice-President, Tim Kaine, the Meta-Margin’s dividing point usually lies somewhere between 49 and 50 Democratic-plus-Independent seats.
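
As an illustration of the definition (a hedged sketch with placeholder inputs, not PEC's production code), the Meta-Margin can be found numerically by shifting every race's margin by the same amount and searching for the shift at which control becomes a tossup:

    % Sketch: locate the Meta-Margin by bisection on a uniform shift of all margins.
    % Margins/SEMs are placeholders; the real calculation uses the current snapshot.
    margins = [ 2.0  1.0 -2.0 -2.0 -4.0];
    sems    = [ 3.0  3.0  3.0  3.0  3.0];
    safeD   = 46;                                  % illustrative count of safe D+I seats
    normcdf_ = @(z) 0.5*(1 + erf(z/sqrt(2)));

    lo = -15; hi = 15;                             % bracket the 50% crossing (in %)
    for iter = 1:60
        delta = (lo + hi)/2;                       % candidate uniform shift, + toward D
        pWin  = normcdf_((margins + delta) ./ sems);
        dist  = 1;
        for k = 1:numel(pWin), dist = conv(dist, [1-pWin(k), pWin(k)]); end
        pCtrl = sum(dist((50 - safeD + 1):end));   % P(50+ D+I seats) after the shift
        if pCtrl < 0.5, lo = delta; else, hi = delta; end
    end
    % Report with the sign convention that positive favors Democrats:
    metaMargin = -(lo + hi)/2;
    fprintf('Senate Meta-Margin ~ %+.2f%%\n', metaMargin);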

The Senate Meta-Margin is the key parameter for making a November prediction. I use it to ask: on Election Day, will the Meta-Margin be positive (Democratic control) or negative (Republican control)?

First, I make a prediction of where the Meta-Margin will go. This prediction requires knowledge of how fast the Meta-Margin will drift, and whether the average amount of drift has a limit.

Comparing late August with October in past years, the median state-by-state change in Senate polls was

  • 2008: toward Democrats by 4.0% (SD across states=4.9%)
  • 2010: toward Republicans by 2.3% (SD across states=4.7%)
  • 2012: toward Democrats by 3.6% (SD across states=5.1%)
  • 2014: toward Republicans by 1.7% (the Meta-Margin; no SD was calculated)

The SD of these four median changes is 3.4%. Combining this with the additional possibility of an error between final polls and November outcomes, the overall 1-sigma range of movement is +/-4.5%. For modeling change over time, I let the drift variance grow at a rate of (0.6%)^2 per day, up to this ceiling.
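
In code form, this drift assumption amounts to letting the variance grow linearly in time and capping the standard deviation; in the sketch below, the 0.6 and 4.5 figures are the ones quoted above.

    % Drift SD as a function of days elapsed: variance grows at (0.6%)^2 per day,
    % capped so the SD never exceeds the +/-4.5% ceiling derived above.
    daysToElection = 70;                     % e.g., late August to November 8
    t = 0:daysToElection;
    sigmaDrift = min(0.6 * sqrt(t), 4.5);    % in percentage points
    fprintf('Drift SD at %d days out: %.1f%%\n', daysToElection, sigmaDrift(end));
    % The ceiling is reached after about (4.5/0.6)^2 ~ 56 days, so an August
    % forecast already uses nearly the full +/-4.5% range.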

Notice a pattern in the 2008-2014 data: In Presidential election years, the movement was toward Democrats – and President Obama was re-elected. In midterm years, the movement was toward Republicans. This type of pattern is driven by two causes: national elections roughly follow the Presidential race in on-years, and penalize the President’s party at midterms; and Democrats have bad turnout in midterm elections. This year, with Hillary Clinton favored to win the Presidency, Senate Democrats are likely to do better than today’s polling conditions would indicate.

This rosy outlook for Democrats is the mirror image of 2014. Two years ago, I assumed symmetric random drift, leading me to make a prediction in August 2014 that was excessively favorable to Democrats. So…how should I introduce a bias into the calculation to favor Democrats this year?

The prior has to be asymmetric: a “coattail” effect in Presidential years, and a “throw the bums out” effect in midterm years. The coattail effect can go in either direction, depending on who wins the Presidency. Our Presidential model’s prior has a Clinton win probability of 71%, and a Trump win probability of 29%. So we have a 71% probability that the Senate Meta-Margin will move toward Democrats by 3.8%, and a 29% probability of moving toward Republicans by 5.5%. The latter number is greater because Vice-President Mike Pence would be the tie-breaking vote. The weighted sum of these possibilities gives an average post-August move of 1.1% toward Democrats.

Throughout August, the average Senate Meta-Margin was D+1.8%. The Senate prior would then be 1.1% above this, or D+2.9%. Using an overall SD on the prior of 7.0%, the prior probability of a Democratic-controlled Senate is 65%.
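
The arithmetic of this asymmetric prior fits in a few lines. The plain normal CDF below is my simplifying assumption; it gives a prior probability near 66%, essentially the 65% quoted above (a slightly longer-tailed distribution gives the lower figure).

    % Sketch of the asymmetric Senate prior described above, using the quoted numbers.
    pClinton = 0.71;  pTrump = 0.29;     % Presidential prior win probabilities
    moveD = 3.8;  moveR = 5.5;           % expected MM moves (%), by who wins
    augustSenateMM = 1.8;                % average August Senate Meta-Margin (D+)
    priorSD = 7.0;                       % SD assigned to the prior (%)

    expectedMove = pClinton*moveD - pTrump*moveR;     % ~ +1.1% toward Democrats
    priorMean    = augustSenateMM + expectedMove;     % ~ D+2.9%
    normcdf_  = @(z) 0.5*(1 + erf(z/sqrt(2)));
    priorProb = normcdf_(priorMean / priorSD);        % ~ 0.66 with a normal CDF
    fprintf('Prior mean D%+.1f%%, P(D control) ~ %.0f%%\n', priorMean, 100*priorProb);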

This prior matters when the poll-based uncertainty is large, early in the campaign season. It has less impact as the maximum amount of random drift diminishes in the weeks ahead.

Note that today, the prior doesn’t matter at all, since it aligns very well with the polling snapshot. Of course, conditions can change.

Finally, I combine the random-drift calculation and the prior using the MATLAB script Bayesian_November_prediction.m. The ultimate output is PEC’s November Senate control probability, which you can see over at The Upshot. It is drawn from the second column of our file Senate_D_November_control_probability.csv.
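
The actual Bayesian_November_prediction.m is not reproduced here, but the combination step can be sketched as a Bayesian product of two densities over the Election Day Meta-Margin: the prior above, and the current snapshot broadened by the random-drift SD. All inputs below are placeholders echoing the numbers in this post.

    % Hedged sketch of the final combination step (not the actual PEC script):
    % posterior over the Election Day Meta-Margin = prior x (snapshot + drift),
    % then P(Democratic control) = posterior mass at MM > 0.
    mmGrid   = -30:0.01:30;                         % Meta-Margin grid, in %
    normpdf_ = @(x, mu, sd) exp(-0.5*((x-mu)/sd).^2) / (sd*sqrt(2*pi));

    snapshotMM = 2.0;                               % current Senate MM (placeholder)
    driftSD    = 4.5;                               % random-drift SD at this horizon
    priorMean  = 2.9;  priorSD = 7.0;               % asymmetric prior from above

    likelihood = normpdf_(mmGrid, snapshotMM, driftSD);   % polls-plus-drift density
    prior      = normpdf_(mmGrid, priorMean, priorSD);
    posterior  = likelihood .* prior;
    posterior  = posterior / trapz(mmGrid, posterior);    % normalize

    pDemControl = trapz(mmGrid(mmGrid > 0), posterior(mmGrid > 0));
    fprintf('P(Democratic Senate control in November) ~ %.2f\n', pDemControl);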

Separate from the overall party-control prediction, I also calculate individual November Democratic win probabilities. These are given in the second column of Senate_stateprobs.csv, and reflect random drift only. They are also listed at The Upshot.
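
In the same hedged spirit, a drift-only state probability can be sketched by widening each race's snapshot uncertainty by the drift SD and taking a normal tail probability (placeholder numbers again; the production script may differ in its details):

    % Sketch of drift-only November win probabilities for individual races.
    % Placeholder margins/SEMs; driftSD is the +/-4.5% figure derived above.
    margins = [ 2.0  1.0 -2.0];          % current median margins, D minus R (%)
    sems    = [ 3.0  3.0  3.0];          % SEMs of those medians
    driftSD = 4.5;
    normcdf_ = @(z) 0.5*(1 + erf(z/sqrt(2)));
    pNov = normcdf_(margins ./ sqrt(sems.^2 + driftSD^2));
    disp(pNov)                           % P(Democratic win in November), per race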

Tags: 2016 Election · Senate

77 Comments so far

  • Veronica

The website Daily Kos puts the Democrats’ chances of taking the Senate at 46%, whereas here at PEC it’s 71%. What should I look for in judging the reliability of these numbers?

  • AySz88

    FYI, apparently that IPSOS state-by-state polling is available in a JSON: http://jsonviewer.stack.hu/#http://www.reuters.com/statesofthenation/projection

    See response.states[n].responses[m].cnt, where it looks like m=1 is Trump and m=2 is Clinton.

    I don’t know if it’s worth grabbing if Pollster et al aren’t doing it, but it can be done. (In MATLAB, grab the free JSONlab toolbox.)
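
    A rough MATLAB sketch of what that would look like, assuming JSONlab’s loadjson is on the path and that the field names above are still current:

        % Hedged sketch: pull the Ipsos states-of-the-nation JSON into MATLAB.
        % Requires the free JSONlab toolbox (loadjson); field names follow the
        % structure described above and may have changed since this was posted.
        url  = 'http://www.reuters.com/statesofthenation/projection';
        raw  = urlread(url);             % returns the JSON text as a string
        data = loadjson(raw);
        % Depending on how JSONlab maps the arrays, indexing is cell- or struct-style:
        %   trumpCnt   = data.response.states{1}.responses{1}.cnt;
        %   clintonCnt = data.response.states{1}.responses{2}.cnt;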

  • Matt McIrvin

    Was there a poll showing Vermont going to a tossup? I can’t find any evidence of it elsewhere.

  • A New Jersey Farmer

    Washington Post released some interesting state polls today. I’m sure the polliscenti will be chewing over the Ohio, Georgia and Texas results.

  • Paul Ruston

Would it be possible to color code (Red or Blue) the state abbreviations in Senate races to show which party currently holds the seat? I suspect most of your readers already know which party currently holds the seat in competitive races, but color coding would still make potential seat changes easier to see.

    State Margin Power
    NH Hassan +2.0% 100.0
    NV Heck +1.0% 50.6
    AZ McCain +2.0% 14.8
    NC Burr +2.0% 7.6
    MO Blunt +4.0% 5.5
    AK Murkowski +12.0% 5.4
    PA McGinty +4.5% 3.5
    WI Feingold +7.0% 3.0
    LA TBD +8.0% 2.1
    IA Grassley +9.0% 2.0
    FL Rubio +4.0% 1.8
    IL Duckworth +6.0% 1.6
    IN Bayh +14.0% 1.1
    OH Portman +8.0% 0.7
    CO Bennet +15.0% 0.2

  • Amitabh Lath

    Discussion below about comparisons between PEC, Upshot, 538, and DailyKos brings home the difference between polling and other sciences.

Usually in a science paper you expect the author to grapple with both the statistical and systematic uncertainty. It’s her data, she knows it best! Reducing systematic uncertainty is where the creativity and flair of experimental design come in. There is no machine or algorithm to do systematics.

    Polling is strange. The data collectors give you just their statistical uncertainty, and leave it to people like Sam, Nate Silver, Drew Linzer et al to estimate the systematic. And they have to do it without any access to the raw data!

    An obvious systematic is bias in sampling. Silver tackles it the way a bright undergraduate would, honing coefficients for each pollster, then other terms for the economy, etc. The more knobs to turn the merrier. Sam’s way is that of an experienced experimentalist, the fewest assumptions possible. Sample enough pollsters and they will cover both sides of the true value. Sam takes the median to reduce the effect of outliers.

    It’s not surprising that Silver’s estimate gives a lower probability, all those coefficients are extra nuisance parameters with their own uncertainties that have to be integrated over.

    But it’s important to remember that given the overall fudginess here, 70% and 90% are not really that different.

    • Olav Grinde

      Use of the median reduces the effect of outliers far more than taking the mean would. In fact, this is a key reason why PEC’s Meta-Margin and probabilities have been so stable.

    • Amitabh Lath

      Overall Silver’s method has a lot more outputs as well (think of his pollster rankings for example). Personally I think that’s asking too much of meager data of unknown quality. Sam asks only one question, and his elegant construct marginalizes a lot of the uncertainties.

    • Matt McIrvin

      If there’s one thing that drives me nuts, it’s people talking about “statistical ties” if most polls are within the stated MOE of one poll, even when they’ve got many polls. You can basically ignore the reported MOEs in that case! The systematics will be the only errors that really matter, and they have nothing to do with the MOE.

    • Matt McIrvin

      …Also, some of the worst polls have gigantic sample sizes, so they can report a tiny sampling MOE.

    • Matt McIrvin

      (The “statistical tie” fetish isn’t a sin of Nate Silver, though; Andrew Tanenbaum at electoral-vote.com does it, and, of course, a lot of mainstream media pundits do too.)

    • Anthony

Matt McIrvin, that is exactly why it grates on me to no end to watch pundits talk about polls. Every day it’s cherry-pick this poll, cherry-pick that poll to make the race seem more “horseracy” than it really is.

    • Michael Coppola

      Not to defend the media. They absolutely do cherry-pick and mis-report polls. But the recent national polls really have shown a shift.

    • AySz88

      Amitabh – I am not sure your explanation for the greater uncertainty on 538 is quite complete. From a basic information theory standpoint, there’s no reason why adding additional data necessarily increases uncertainty. And their graphics seem to show that “polls plus” is actually less uncertain than “polls only”. (Plus, others that combine data, like Linzer, seem to be doing just fine.)

      Rather, I’d like to cast a bit of skepticism at the state-to-state covariance matrix that they sometimes mention but haven’t publicized or explained in detail. There seems to be an awful lot of propagating information from one state (or national polls) into other states, using things like regional and demographic information to explain election-to-election changes (and by extension, intra-cycle “trend” over time).

      (As an aside: I feel like there needs to be some sort of factor analysis / PCA to really justify doing that. In particular, I’d like to see some analysis of the statistical significance of those factors. It shouldn’t just be a selection of demographic categories kinda seem to explain prior elections’ shifts in state votes – if they picked demographics and regions by hand (or using “conventional wisdom”), that’s just begging to get called out as an overfit.)

      Anyway… Sam (ironically?) did some of the work justifying high covariance a few weeks ago, posting about how the state numbers this year are still very correlated with 2012.

My hypothesis is: this does something a little counterintuitive in Silver’s model. The more the, uh, “cake gets baked” in the relative positioning between states, the less independence there is between states (and with the national polls). That decreases uncertainty for each state’s individual vote estimate, but increases covariance between the states (and national polls). But higher covariance increases the impact of any potential changes over time (because movements are less likely to “cancel out” in different states). That will increase the impact of random drift-over-time for the final EV projection. On net, the increased variance from drift over time overwhelms the improved “snapshot” individual-state estimates.

      Olav – one disagreement I’ve noticed is that they actively don’t *want* to reduce the effect of outliers (at least, not more than taking the mean would). I don’t know if Silver was directing this criticism at PEC per se, but he did present an explicit argument for using the mean over the median. IIRC, there were accusations of pollsters “herding” and he pointed out the theoretical loss of information (which I guess is true – the SE of the median is about 25% larger than that of the mean – but that assumes a normal distribution).

      Defensible, if maybe not wise.
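
      To put a number on that 25% figure, here is a quick Monte Carlo check (illustrative only; with just a handful of polls per race the penalty is smaller than this asymptotic value):

          % For normal data, the sampling SD of the median is about sqrt(pi/2) ~ 1.25
          % times that of the mean; a quick simulation with many "polls" per trial.
          nTrials = 20000;  nPolls = 101;
          x = randn(nTrials, nPolls);          % simulated poll errors, SD = 1
          sdMean   = std(mean(x, 2));          % spread of the per-trial mean
          sdMedian = std(median(x, 2));        % spread of the per-trial median
          fprintf('SE(median)/SE(mean) = %.2f (asymptotic sqrt(pi/2) ~ 1.25)\n', ...
                  sdMedian/sdMean);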

  • Markyd

    Sam, one of my favorite things about presidential elections is following your analysis. Always straightforward and insightful. (I’m not sucking up!) I noticed that your model correlates very closely to the Upshot — both have Clinton at or near 90% probability, and both have similar results in the state-by-state contests. The other two main forecasters that state odds in % — FiveThirtyEight and DKos — correlate closely with each OTHER but both generally show about a 15 percentage point spread from yours and Upshot’s (with Clinton in the mid-70s). Is this strictly a difference in the methodologies used to reach the overall probability? Would you expect all of the models to begin converging by, say, early October?

    • Sam Wang

      I like flattery as much as the next guy. More, probably. Anyway, thanks.

      I have been avoiding the task of comparing models, in part because I do not think I know enough about the other models. Witness Silver’s attempt to take me down in 2014: I made a mistake, but he was unable to detect it. It’s always a challenge to critique other people’s work competently. I will do so, but I could use some comments.

      I can think of two basic reasons for the discrepancy: (1) not having an optimal way to deal with uncertainties (for instance double-counting uncertainty by treating states as independent, or introducing extra uncertainty by using regression to correct pollster bias); and (2) using a long baseline of elections that fluctuated a lot (including 1952-1992), when the current state of polarization (1996-2016) is much less variable. Matt McIrvin and other commenters have brought up related points too. My guess is that Josh Katz at The Upshot has managed to avoid one or both of these problems.

      I know many of you read FiveThirtyEight and are familiar with whatever they have disclosed about their model for 2016. Let me know other major points that come to mind.

    • GM

      It’s interesting to think of the two models philosophically (taking recent election trends as indicative vs. the long approach). Sometimes, obviously, Things Change, otherwise we wouldn’t have the post-1992 shift to ideological polarization, and if this is indeed a year where Things Are Changing, a broader view should be more accurate.

I must admit I don’t see any evidence of that – the electorate seems at least as polarized as in the previous recent elections – and so I’d guess that Sam’s model of taking recent trends and extrapolating them is going to prove more accurate. But it could very well be that other factors at play (i.e., the ‘priors’ that Linzer et al are so fond of) exert strong enough influences that the election becomes close despite the political polarization of the electorate. Time will tell.

    • Olav Grinde

      I can think of one more, albeit cynical, reason for FiveThirtyEight’s lower Clinton probabilities: They have an inherent interest in giving the impression of a closer horse-race. That generates more traffic!

      Furthermore, Nate Silver is creating a brand. A one-sided, predictable race simply isn’t as interesting.

However, what really bothers me is that Mr. Silver’s methodology is inherently opaque. It can neither be replicated nor comprehensively critiqued by any of his peers – quite simply because he is never willing to tell us the recipe of his “special sauce”.

To me that’s a problem. If you have a finger on the scale, you should be honest enough to tell us why and precisely how. He does not.

  • Keith Romig

Not totally relevant to this discussion, but several people on this thread, including (I believe) Sam, have predicted that as the election season proceeds, some disaffected Republicans among the undecided are likely to break toward Trump. If the HuffPost Pollster feed is to be believed, it may already be happening. The margin there has closed from just over eight points to just over six, not through any change in Mrs. Clinton’s toplines but through an increase of a couple of points in Trump’s numbers.

  • Greg

    Dr. Wang,

    I noted the margin for the senate race in Indiana is still listed at Bayh +20%. It looks like there’s a more recent poll on HuffPost from Monmouth that has him +7%. Is it not updating or is it including the older poll that has him +21%?

    • Sam Wang

      Sorry…there is a kludge in the code that deals with the case where there are only 2 polls available. It is fixed now, and should be okay going forward. The median, as you would expect, is now correct at Bayh +14%.

  • Kevin King

    I was listening to FiveThirtyEight’s election podcast, and they said their Senate model is coming out soon. Looking forward to seeing how theirs differs from yours. I’m sure it will be a lot more cautious, if that is the word.

Incidentally, I’m pretty sure that Dr. Wang doesn’t care, but Mr. Silver made a backhanded slap at PEC, as well as the NY Times, during the podcast by commenting that he doesn’t believe a 90%-95% win probability for Clinton is justified. He mentioned that he thinks this election has more inherent variability than the last 20 years. I think he must read this site, because some of what he said mirrors the discussions here. His model is showing a steady deterioration in Clinton’s position over the latter part of August. But he did note there has been a scarcity of solid state polls lately. I do notice that Obama’s approvals on this site have gone down. I guess we’ll see if the Meta-Margin goes down soon.

    • Sam Wang

      If one goes back to 1952, it is possible to find election year variability that, if it occurred today, would give a Clinton win probability as low as 87%. However, those were years where there was also a lot of variability in the interval of 91-180 days before the election. Such variability has not been apparent this year. Again, see this:
      I agree that there are more undecideds than usual this year. The extra undecideds are likely to be mostly Republican. They are currently showing up as 7-8% in national surveys. To my thinking, the question is what fraction of them (a) come back to Trump in the remaining 70 days, (b) vote third-party, (c) switch to Clinton, or (d) stay home. If they go down the (b)/(c)/(d) path at all, the remainder would not seem to be quite enough to elect Trump. Anyway, I agree that this will all become clearer in the coming 30 days.

    • Amitabh Lath

      Speaking to poll variability, DailyKos elections (Drew Linzer I presume?) is worth looking at. Currently P(Clinton)=0.77 with 298 EV. In 2012 Linzer did not publish a probability but his EV count did not vary by more than a couple from June onward; one would be hard pressed to identify the effect of the first Obama-Romney debate. So is 2016 significantly more jittery than 2012?

    • Matt McIrvin

      Silver’s now-cast claims that Trump would have a 24.4% chance of winning an election held today, even discounting any movement in the race between now and November. Does that make any sense at all? It doesn’t to me.

    • Matt McIrvin

      As far as I can tell, the variability for 2016 is higher than 2012 but it’s proportionate with Clinton’s larger lead. That is, the fluctuations manifest in the form of higher highs for Clinton, not lower lows.

    • Kevin King

      It is curious, Matt. Considering where the polls are today, unless there is some massive, uniform error, it would seem Clinton would be almost certain to win. I don’t think FiveThirtyEight’s models are open source, but without ascribing motives to Nate Silver, I would be curious as to why Trump has such a big chance in the Nowcast, given the current polls. I guess the model assumes a large error. All of the FiveThirtyEight models seem very responsive to current events.

      I also wonder if the FiveThirtyEight model is too complicated. One of the things that draws me here is that the method does not have many moving parts, hence there is less chance for hidden biases or unrecognized inaccurate correlations to affect the results. To me, using medians rather than averages is a simple and brilliant way to handle the differences in the various polls.

    • Matt McIrvin

      I think 538 just over-hedges, going all the way back to 2008. Their assumptions about uncertainty work pretty well for midterms but not so much for presidential elections.

      The best argument in their favor is probably that the higher third-party/undecided contingent this year, the craziness of Trump and the long-term demonization of Clinton all mean that this is a fundamentally different election from the ones that poll aggregators have been modeling since about 2004, and that the uncertainty just has to be manually adjusted upward to take that into account. Which is more or less a long-winded way of saying “everything’s changed, nobody knows anything!” But there were good reasons to believe that in 2004 and 2008 too. (Maybe not so much 2012.)

    • Sam Wang

      I believe that both the long-term demonization of Hillary Clinton and the Trump phenomenon are examples of polarization, a dominant theme of 1996-2012. That hypothesis would be consistent with ever more stable Presidential races. I agree that one could imagine a move over the coming two months, but empirically, no such phenomenon has yet emerged.

      For some reason this comes to mind:

  • Anthony

Not directly related to the Senate, but I found this single tweet very interesting as a summary of this race compared to what Sam considers the relevant priors (1996-2012).

    https://twitter.com/DrewLinzer/status/770765663317078016

This single graph shows the massive difference in undecideds compared to 2008 and 2012 – and the fact that the number is increasing rather than decreasing. Sam, how do you factor this in, considering your 95% probability? You are way smarter than me, but the fact that undecideds are a huge part of the electorate makes it seem that this race is a lot less certain than 95%.

    • Sam Wang

      I have also noted this phenomenon. For one discussion, see this comment. Basically, the difference here, which is about 8 percentage points, is almost entirely composed of disaffected Republicans. For the Presidential race, over three-fourths of them would have to come back to Trump to flip the election. That is possible, but unlikely.

      The bigger question to me is for downticket. If they turn out and vote third party, that might be okay for the GOP’s downticket chances. If they stay home…that’s a real problem for them.

    • Jay Sheckley

      I went to the link and into Linzer’s twitter, where he responds that that graph’s based on national polls. We know from PEC charts here [and Sam's methods] that national stats matter less in electoral votes than they might seem to.
      If they vote for a 3rd party presidential candidate, aren’t they likelier than usual to vote 3rd party downticket too?
      http://f.tqn.com/y/politicalhumor/1/S/1/l/6/Trump-Tail.jpg

  • MarkS

    I also applaud the change to a no-pundit prior.

    Question: where is N defined, re “In each state, take the 3 most recent pollsters or N days, whichever gives more data” ? I couldn’t find it.

  • Josh T.

    Ironically, two days ago my brother said, “I find the 538 podcasts more entertaining than PEC– what would you tell someone to convince them that PEC is more accurate?” So I said that this cycle Sam predicted Trump’s nomination quite early, and that Nate missed it entirely. Then I added that Nate HAS to be entertaining and therefore makes the election seem like more of a horse race than it really is… and that Sam’s premise is that the polls tell the whole story and that punditry just introduces more noise (ha! Unintentional pun off Nate’s book title)… Anyway then he went to your site and saw this whole debate about the Senate forecast… So I’m glad that “polls only” won!

  • Amitabh Lath

    A thought about the magnitude of the coattail: This is based on ’08 and ’12 elections where the lead was due to increased participation from the Democratic base.

    Is this the case in 2016? If Clinton’s lead is due to traditionally R-leaning suburban white college educated women (or whatever) then wouldn’t ticket-splitting be more likely? The probability of coattail would probably stay at 70%-ish, but the magnitude would be lower than the 4%ish calculated.

  • dk

    I am confused. You currently give Clinton a win probability over 90%. Shouldn’t that be your prior for the Senate model? Why set your prior at 71% instead?

    • Sam Wang

      That is a good point. My original motivation was to give maximum weight to the tail outcome of a Trump win, a cautious move. Maybe too cautious, but my wrong prediction in August 2014 makes me want to be careful.

      If I use P(Clinton)=0.95, then the Senate prior becomes D+5.1%, corresponding to 52 or 53 D+I seats and a prior probability of 70% for Democratic control. The combined Bayesian probability becomes 82%, a 10% jump over the initial posting of 72%.

    • dk

      I would vote for internal consistency. You have a poll-based result in the Clinton-Trump contest that you have decided to ignore (adjust?, hedge?) out of caution. That’s what your competitors do.

    • Lorem

      I feel like this choice is arguably about how one interprets the nature of the coattail effect.

      We might interpret the ~75% prior as the natural set point of the presidential campaign (driven by e.g. long-term attitudes towards the candidates, their platforms, party incumbency + economy, etc.), and the current 94% prediction as a “deviation” from that prior (due to shorter term effects of gaffes, etc.)

      Then, we might theorize that the set point affects the senate race more than the actual current headline numbers. This might be reasonable because the effect on the race is one of movement in a particular direction rather than just general strength of a particular party, and so, one might suppose it’s driven by more fundamental factors, which the set point might better capture.

      In addition, the headline probability measures the likelihood of the presidential outcome after accounting for electoral college rules. This doesn’t feel like it should be directly relevant to the senate – I would assume senate numbers are instead affected by national and/or state-specific presidential preferences. To be fair, I’m not sure the prior captures these any better than the headline number does, but it might.

      With that being said, using the headline number has the virtue of simplicity, which is a rather crucial virtue in these matters. So, I am fairly uncertain about which choice is better overall.

  • Paul

    Hooray for the updated model.

  • newalgier

Thank you. I think it’s a good decision. Is another way to look at this that the Senate vote is somewhat correlated with presidential voting in presidential years? Not highly, but something like 0.3? Hey, wait, I can do this myself!

    In 2012, the correlation between Democratic margins for the presidential vote and the senate election (n = 14 without incumbents) was 0.84. With incumbents, the correlation was 0.46.

    So in a presidential year, I would expect that a reasonable prior for non-incumbent seats is simply the expected presidential meta-margin for that state. Where there’s an incumbent, the dependence would be smaller.

  • Amitabh Lath

Is the coattail effect the same across all states, or is it weighted by each state’s polls? Is the effect the same for PA as it is for IA?

  • GM

    Huzzah! Thanks, Sam. In my very inexpert opinion, this was the right thing to do.

  • Sam Wang

    (All comments below this point were made before the switch from the pundit-based prior to the polls-only prior. Read them accordingly…)

  • Olav Grinde

    I have a question: In the event of a Trump victory, but assuming polls are accurate for the Senate races, what is the likelihood of Republican control of the Senate?

    You currently give 73 % odds of 50+ Democrats in the new Senate, while the histogram indicates roughly a 33 % chance of a 50-50 split. I assume that means a 40 % probability of a 51+ Democratic result. (Correct me if I am wrong.)

    Given a Trump upset, does this mean the Republicans then have a 60 % chance of retaining control of the Senate (50+ R)?

  • Mike

    The main difference between the PEC Senate prediction (51 dems) and Upshot’s (50 dems) seems to be Indiana. On the NY Times summary page, PEC gives dems in Indiana a 97% chance, while Upshot has 68%. That’s a pretty huge difference. Any ideas for what is driving this discrepancy?

  • Paul

    I’ll echo others unhappy with using expert opinion as a prior.

Prognostication certainly has its value, but I’ve always appreciated PEC precisely because it provides an "observation only" perspective. Seeing how that does or does not deviate from expert opinion is illuminating.

I’d much rather see the model use some representation of the coattail effect, either using priors about the presidential race as priors for Senate skew, or using the current presidential snapshot as an adjustment to the Senate snapshot.

    • Tony Asdourian

      I completely agree with Paul. I understand there are competing values– the simplicity of only using polls vs. the need to be as accurate as possible– but to be honest, I think that your “brand” (not that you want one!) revolves around trying to predict using objective data only. Using pundits starts to blur what you are doing with the methodology of 538 and others. I really admire the fact that you were working hard to be “polls only”. I sort of took it as a prior (hah!) that you were the guy who worked hard to avoid all that other stuff.

    • Sam Wang

      This is an excellent point. Also, I know a reader trend when I see one.

  • Forrest

    Hi Sam,

    Is the prior going to be applied just before the election as well?

Put another way, if the experts and the polls diverge the day before the election (they presently are in alignment, I understand), will PEC’s model be pushed toward the expert opinion?

    As polling information becomes more prevalent, it would seem that some sort of relative downweighting about the strength of the prior should be included, but I don’t see a particularly elegant way of doing that.

    Hopefully the prior and the polling snapshot won’t ever be too divergent and this won’t matter, but I’d hate to see the desire to have an accurate ‘forecast’ actually corrupting the accuracy of the ‘prediction’ on election day.

    • Sam Wang

The prior will have an SD of 7%, which sets a broad range of expectations. The predicted distribution is the Bayesian combination of this with a polls-plus-drift distribution. When the election is far off, drift is large, and so the prior can have a strong effect. When the election is near, there is not much time for drift, and the prior can’t affect the prediction much. On Election Eve, the prior will have nearly no effect.

  • Ilan

Although I don’t think there is any data yet to support it, I wonder if it is worth investigating something like Google Correlate, as was done for the primaries.

    It will be interesting to see how much, if at all, this year differs from the previous trend with split tickets.

  • Ketan

I think the histograms in the sidebar indicate “if the election were held today” probabilities.

    It might be more clear to give the November probabilities instead. (I imagine the peak would be lower, and the results more spread out.)

    And it would naturally sharpen as we got closer to November. (If I understand your system correctly.)

  • 538 Refugee

Maybe the polling data just doesn’t justify trying to put a Meta-Margin tag on it like the presidential race, since it is all about individual regional races. Just go with the likely number of seats and be done with it. Maybe link to something akin to ‘power of the vote’ so people can see the margins of the contested races, with a few lines about things like the differences between presidential and midterm elections. Just give the facts as you know them. This reminds me of feeling a need to call Florida when the data just didn’t support a call. Make us flip our own coin. ;)

  • Amitabh Lath

The prior holds up if 2016 is like 2008 and 2012, in that the Democrat wins the presidential election. Currently that is looking likely, but we’ve all seen 2-sigma “certainties” disappear. So the prior should be weighted by 1-P(Trump). Maybe you do this already?

    • Sam Wang

      One way of incorporating this thought goes as follows:

      Take the Presidential prior (not the snapshot), and calculate P(Clinton win) and P(Trump win). Then calculate a weighted sum of drift-favoring-Democrats and drift-favoring-Republicans, in each case using +3.8% (the average of 2008 and 2012) compared with August polls. Use this weighted sum as the Senate prior.

    • Sam Wang

      The average Presidential Meta-Margin for January through August is Clinton +3.8%. Our prior assumes SD_MM=6%, which gives P(Clinton win)=0.71 and P(Trump win)=0.29. The weighted average is therefore 0.71*3.8 – 0.29*5.5 = D+1.1%. The 5.5 arises from the fact that in the President Trump scenario, Vice-President Mike Pence would break ties.

      For the same period in August, the average Senate Meta-Margin was D+1.8%. The Senate prior would be 1.1% above this, or D+2.9%. Using an overall SD on the prior of 7.0%, the prior probability of a Democratic-controlled Senate is 65%.

    • Eric

      I like this approach, and Sam’s proposed implementation sounds reasonable. I am not a fan of an “expert prior” at all.

    • Amitabh Lath

      Does the +3.8% effectively assume the uncertains break Dem? Or equivalently some number of Republican LVs stay home? I guess I worry about such a large value without a concrete mechanism to pin it to.

    • Sam Wang

      Could be done with +3.8 +/- 3.8%, which is like saying that 1 out of 6 times, the President’s winning party will not improve between now and Election Day. Given the data currently available to me, that is a guess.

    • Amitabh Lath

      Yes, good, the 100% uncertainty works. There definitely IS an effect because Clinton is unmistakably ahead, and it would be wrong to ignore it altogether.

But I worry that a lot of the pro-D bias dialed in, whether by experts like Sabato or by news articles discussing fundraising, advertising, staffing, or debate-prep asymmetries, is selective reporting or irrelevant to the election.

      Not only is it difficult to put numbers to this type of reportage, it is also difficult to not be swayed into big pro-Clinton priors.

  • Maredudd Dyfed

    My one note here would be that, considering the recent activity of the Senate, it might be more accurate to refer to your probability above as Dem+Ind probability of “advantage” or “effective majority” rather than “control.” Practically speaking (personal or partisan opinion of this practice notwithstanding) literal control of the Senate requires a 60 vote supermajority to enact legislation or confirm nominees; while 50 votes plus the VP tiebreaker is traditionally regarded as control, it is plain that in our hyperpartisan environment, this cannot be regarded as active control of usual Senate activity in a meaningful sense.

    • Bernd

      Two words: nuclear option. If the party having the Senate majority wants to abolish the 60 votes requirement for invoking cloture, they may simply do so. Whether this is a smart political move is a different question, but with the possibility of several Supreme Court vacancies during the next term, it is quite likely to happen.

    • Froggy

      The need for a supermajority to overcome a filibuster to vote to confirm nominees for executive branch positions and for judicial positions other than the Supreme Court was done away with in November 2013. If the Democrats take control of the White House and the Senate this election, it is beyond belief that they would let a Republican minority block them from filling a Supreme Court vacancy, so the filibuster is effectively dead for all nominees.

    • Stuart Levine

I disagree with “Froggy.” Remember, in 2018 there will be at least 25 D senators up for re-election. It is all too easy to imagine that D control of the Senate lasts only 2 years. It’s quite possible that the D’s see themselves headed back into the Senate minority, and thus want to preserve a supermajority filibuster rule.

    • pechmerle

      Apart from the transitory nature of which party holds a majority in the Senate, never forget that the interest of the majority caucus as a group and the interest of the individual senator always diverge. An individual senator’s vote is always worth about 20% (math?) more to him or her with the 60% non-speaking filibuster rule in place. Few senators will willingly give up that extra weight to their vote.

      Also an aside on “many” Supreme Court vacancies to be filled during the next presidential term: That overstates the case. Turning political-science pundit-y for a moment, presumably at 83 (currently) Ruth Bader Ginsburg would step aside so that a younger progressive justice could take her place. Powerful centrist Anthony Kennedy at 80 (currently) might well continue for quite some time, barring health issues of which I’m currently not aware. All of the others are young enough (assuming no sudden health changes) to serve well past the four-year term of the next president. So this is another reason for a senator not to be too eager to give up that extra leverage that the filibuster rule gives his/her vote.

      The replacement for Scalia is of course a vital choice in the immediate term. Will enough Democratic senators give up some individual voting leverage to support the nuclear option to make that happen? Not so clear.

      Also, if the Senate majority does change at this election, then the Republicans might well let Garland be confirmed in the lame-duck session (already some murmurings to that effect), and he is emphatically not a progressive. Closer to a Kennedy center-right position than to any of the four current liberals on the Court.

  • Richard Vance

    Dr Wang,

    You provided the answer for the prior in the dialogue. It is different in presidential and mid-term elections. Use that history as the prior.

    In presidential years a bias towards a winning presidential party is called the coat tail effect. The presidential meta-margin drives the size of the effect.

In mid-terms, Senate elections are driven more by incumbency than by any other factor. Incumbents rarely lose, so open seats are the only seats that provide a prior, and historically they move away from the president’s party.

    In all I am uncomfortable with estimating based upon experts. The experts said the Titanic was unsinkable. There is an expert confirmation bias clique.

  • Dennis

    80% for likely and 60% for lean seem way too low – for a first guess I would think of them more like standard deviations and say 95% and 68%. Really, the scientific way to assign these probabilities would be to look at the historical track record of the predictor. Really, I doubt there is such thing as “expert opinion” separate from polling – my bet is that 90% or more of the prediction is based on polls, with past history as the next most important indicator and other information only a small part.

    • Sam Wang

      You are correct on the first point. But for setting a prior, it is better to leave things uncertain.

Your second point is not correct at the start of the campaign, before polls are available. At that point it involves assessments of candidate strength. I do agree that the ratings change to follow polls, but this is just to seed the calculation at the start. Alternatives include (a) analyzing Senate data back to 2004-2006, when polls became dense enough to be useful, and extracting some general rule; or (b) dropping the expert prior.

      The basic issue here is that in 2014, random drift in late August slightly favored Democrats to retain Senate control. This year, it is unlikely that I will be dinged because the bias acts to enhance an already-existing lead. I guess I could just assume random drift and act like I didn’t learn anything two years ago…