Princeton Election Consortium

A first draft of electoral history. Since 2004

Just how steep is that climb in 2014, anyway?

October 24th, 2013, 10:52pm by Sam Wang


With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.

-John von Neumann

If we want to forecast House control in 2014 without drilling into individual Congressional districts, we need to know two things:

  1. What national popular vote margin is needed to flip House control; and
  2. What the likely range for the national popular vote margin will be.

For #1, last week here at PEC I estimated that a margin of D+4% to D+5% (i.e., Democrats win the popular vote by 4-5%) would be necessary. My estimate is not far from those of other analysts.

However, a notable outlier is Alan Abramowitz at the Crystal Ball, who claims D+13% is needed. This appears to be an error of overfitting, which I have previously mentioned in relation to The Monkey Cage* and FiveThirtyEight. That is not bad company…but seeing as how this problem is a recurring one, I would like to get into the details. It might reduce the possibility of similar future missteps.

First, let me show you the nature of the problem:

This is a graph (previously explained here) of House elections from 1946-2012, with the Democratic-Republican seat margin plotted as a function of the Democratic popular-vote margin. The shaded gray zone indicates the maximum region spanned by all the data, not including the Great Gerrymander of 2012. The X’s indicate the PEC analysis (red, with error bar) and the Abramowitz prediction (green).

How could Abramowitz have come up with a prediction so far from post-WWII norms? He reports that his result comes from a best-fit equation of the form

PCRHS = 127.0 - (0.54 * CRHS) - (1.35 * PRPM) + (1.73 * RGBM)

This is a fit to predict PCRHS, the predicted change in Republican House seats, based on 17 midterm elections. The three predictor variables are CRHS = current Republican House seats, PRPM = previous Republican presidential margin, and RGBM = Republican generic ballot margin; counting the constant term, the fit has four free parameters.
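To make the structure concrete, here is a minimal sketch (in Python) of how the point estimate is computed, and of how the equation can be inverted to ask what generic-ballot margin would produce a given seat change. The function names and the inversion are mine, not Abramowitz’s; only the coefficients come from his equation.

```python
def pcrhs(CRHS, PRPM, RGBM):
    """Point estimate from the published equation: predicted change in
    Republican House seats, given the three predictors."""
    return 127.0 - 0.54 * CRHS - 1.35 * PRPM + 1.73 * RGBM

def rgbm_needed(target_seat_change, CRHS, PRPM):
    """Invert the equation: the Republican generic-ballot margin at which
    the predicted seat change equals a given target."""
    return (target_seat_change - 127.0 + 0.54 * CRHS + 1.35 * PRPM) / 1.73
```

For instance, plugging in CRHS = 234, a previous Republican presidential margin of roughly -4%, and a target of a 17-seat Republican loss (illustrative numbers of mine, not his) gives a generic-ballot margin of about R-13, consistent with his D+13% headline figure.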

All those coefficients give the appearance of precision. However, there are no uncertainties (error bars) given. And as a general rule, adding more free parameters captures more of the variation…but also increases the uncertainty of the prediction. I don’t know what his error bar is, but I imagine it’s at least 6%. In other words, I think this elephantine fit is waving its trunk a bit.

(Update, to reflect comments: We have to know what the error bars on the coefficients are in order to estimate the uncertainty in the “D+13%” figure. For example, if the coefficient on the most sensitive parameter, CRHS (0.54), has an uncertainty of +/-0.07, that contributes an error of +/-0.07*234 = +/-16 seats. If you look at Abramowitz’s table, that corresponds to a +/-9% error in his estimate of the necessary generic House ballot margin. And “D+13+/-9%” is not what readers think when they see “D+13%.”)
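To spell out that arithmetic (the +/-0.07 is hypothetical, and dividing the seat error by the 1.73 seats-per-point coefficient is my shorthand for reading the answer off Abramowitz’s table):

```python
coef_uncertainty = 0.07   # hypothetical +/- on the 0.54 CRHS coefficient
CRHS = 234                # current Republican House seats, as above

seat_error = coef_uncertainty * CRHS    # about 16 seats
ballot_error = seat_error / 1.73        # about 9 points of generic-ballot margin
print(f"+/-{seat_error:.0f} seats  ->  +/-{ballot_error:.0f}% on the needed margin")
```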

Uncertainties also get worse when a model is driven out of the range of data used to generate the fit. The parameter CRHS is in a highly unnatural place this year relative to PRPM, due to the Great Gerrymander of 2012.

I think this goes to show the difficulty of using linear regression, which sounds simple but has hidden problems. My view is that for this kind of data, fits are of limited use. I never do anything more complicated than single-variable regression if I can help it. Even then, I only do a linear fit if I have a clear and fairly simple idea of the reason for the relationship. And, of course, error bars are a must.
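For what it’s worth, here is the kind of single-variable fit with error bars that I mean; the function name is mine, and x and y would be something like the popular-vote margins and seat margins in the graph above.

```python
import numpy as np

def linear_fit_with_errors(x, y):
    """One-variable least-squares fit, returning slope and intercept
    together with their standard errors (from the parameter covariance)."""
    (slope, intercept), cov = np.polyfit(x, y, deg=1, cov=True)
    slope_err, intercept_err = np.sqrt(np.diag(cov))
    return slope, slope_err, intercept, intercept_err
```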

Long story short: Democrats face an uphill climb in 2014…but it’s not the north face of the Eiger.

*In comments, Sides points towards this reply from last year regarding the uncertainties arising from multiparameter models.

Tags: 2014 Election · House

18 Comments so far

  • Olav Grinde

    This is all very encouraging.

    Sam, I wonder whether you expect the upcoming state-level elections to provide additional data points. For instance, the Virginia elections on November 5th for governor, lieutenant governor, attorney general, and (perhaps first and foremost) their representatives to the House of Delegates.

    The Republican Party currently controls 65 of 100 seats (it was 67, but two seats are now empty). If I understand correctly, the gerrymandering of Virginia districts most likely places a Democratic takeover of the House of Delegates out of reach. But it should be interesting to see the size of the Democratic gains in two weeks — and your analysis of this election.

    By the way, the Virginia Senate, which is not up for re-election in 2013, is split 50-50 (or rather, 20-20), which is why the tie-breaking lieutenant governor’s race is deemed crucial.

  • Amitabh Lath

    From the Abramowitz article:

    “The model was highly accurate in predicting Republican seat gains or losses. For the 17 midterm elections since World War II, it explained 94% of the variation in Republican seat change with a standard error of 9.8 seats.”

    (No, it’s not a real error analysis. If he were a student in a first-year lab course, he would get an F on this module.)

    Given his model’s prediction of +1D, a Dem takeover would be not much more than a 1.5-sigma event. If you add the uncertainty in the generic polls in quadrature, it’s an even smaller deviation.
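    (Back-of-envelope version of that point, assuming Democrats need a net gain of roughly 17 seats to flip the House; the 17 is my assumption, the +1D and the 9.8 come from the quoted passage and the model’s prediction:)

    ```python
    from scipy.stats import norm

    seats_needed   = 17    # assumed net Democratic gain required for a takeover
    predicted_gain = 1     # the model's point prediction
    model_se       = 9.8   # the quoted standard error, in seats

    z = (seats_needed - predicted_gain) / model_se   # about 1.6 sigma
    print(f"{z:.1f} sigma, P(takeover) ~ {1 - norm.cdf(z):.0%}")
    ```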

    But the biggest problem is this: “I’ve post-dicted all elections since WWII, so my model is sound.” Reminds me of an XKCD cartoon.
    http://xkcd.com/1122/

    Seriously, why focus on the generic poll to the exclusion of state and district level polls when they are available?

    • Sam Wang

      The general program in this area for political scientists is:
      (1) take past data, fit it to a many-parameter model;
      (2) test it with post-dictions (as opposed to predictions);
      (3) extrapolate to a future case.
      I would not ding him so much on that, since it’s a commonplace in fields like political science…and neuroscience, for that matter. I think the real difficulty is the combination of overfitting and not having a crisp idea about mechanism.

      Speaking of uncertainty in the generic polls, I have an analysis of that coming up. I think all of this stuff needs to be laid out clearly before one can get to work.

      In regard to state/district polls, the problem there is missing data, especially in cases where the general-election challenger has not been identified. A generic-preference poll is not such a bad way to go. However, for what you want, Charlie Cook and Larry Sabato are good places to look.

    • Amitabh Lath

      Post-diction is fine, but should not be used as motivation for the model. And from the Abramowitz article, I don’t see any discussion of mechanism other than “it gets elections right going back to WW2.” Perilously close to using correlation to imply causation, the original sin of statistics.

      But even worse, he assumes there is NO correlation between his variables. I would assume all three to be correlated. CRHS and RGBM in particular I would expect to be highly correlated, and the presidential margin maybe less so, but still significantly.

      So even if you give credence to his “model”, his error estimate of 9.8 seats is junk.

    • Sam Wang

      Whether there’s a problem of the type you assert depends on the details of the fit. He is likely to have performed multiple regression, which I believe does not have a problem when the input parameters are correlated. If it’s done in Excel, there’s plenty of output to drill into the questions you and I are raising. Mainly I want to know what the error bars are on the coefficients, which would then allow an estimate on the uncertainty in the “D+13%” estimate.

      For example, if the coefficient on CRHS (0.54) has an uncertainty of +/-0.07, then that contributes an error of +/-0.07*234 = +/-16 seats. If you look at Abramowitz’s table, that corresponds to a +/-9% error in his estimate of the necessary generic House ballot margin. And “D+13+/-9%” is consistent with my current estimate, which is “D+4.5+/-0.5%” for the moment.

      Your concern maps to mine in the following way: CRHS and PRPM are usually correlated…but with the post-2010 redistricting, they have come apart. That creates a hazardous situation for a predictive model.

      Anyway, I think we agree that more details would be clarifying.
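      (For concreteness, the detail I mean is the standard error on each coefficient, which any least-squares fit can report. A generic sketch, not Abramowitz’s actual code; X would hold the 17 elections’ predictor values plus a column of ones, and y the seat changes:)

      ```python
      import numpy as np

      def ols_with_se(X, y):
          """Ordinary least squares: coefficient estimates plus their standard
          errors. X must include a column of ones for the constant term."""
          X, y = np.asarray(X, float), np.asarray(y, float)
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)
          resid = y - X @ beta
          sigma2 = resid @ resid / (X.shape[0] - X.shape[1])   # residual variance
          cov = sigma2 * np.linalg.inv(X.T @ X)                # parameter covariance
          return beta, np.sqrt(np.diag(cov))
      ```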

  • John Sides

    Sam: We fit a multilevel model to district-level results from 1952-2010. We have four variables measured at the election year level and two at the district level. That is an extremely sparse model specification, hardly one that’s designed to chase idiosyncratic noise in the data.

    In any case, we hashed all this out in 2012. Here was our response:

    http://themonkeycage.org/2012/09/24/do-democrats-have-a-75-chance-of-taking-back-the-house/

    And here is more about the performance of our model vs. others:

    http://themonkeycage.org/2012/11/14/how-did-our-house-prediction-do/

    • Sam Wang

      John, that misses the central point I am trying to make. The sparseness does not matter so much as the sensitivity to the fit parameters. As stated in my reply to Amit Lath, it is necessary to report uncertainties on fit parameters, then see how that translates to prediction uncertainty.

      In the current case (Abramowitz/Crystal Ball), I believe the error bar may end up too large for the estimate to be useful. I would be glad to be corrected on this front by a quantitative statement.

  • Amitabh Lath

    If no errors are quoted, one may assume +/-5 in the least significant digit.

    So the constant term becomes 127.0 +/- 0.5, and the other three coefficients get assigned an error of +/-0.05.

    But now the question is: do you add all these errors in quadrature (as if they are all uncorrelated), linearly (totally correlated), or something in between?

    • Sam Wang

      Speaking as a fellow physics-type, I agree with your assumption…but I do not think that convention is necessarily being followed. It is better to simply find out what error bar came out of the model.

      How to add the errors is a good question, but that might be a secondary issue. Think of this: typical parameter values are in the range of CRHS = 200, PRPM = 5 (assuming units of %), and RGBM = 5 (again assuming %). At those values, the CRHS-associated error will dominate. That is why I focused on it. I’d have to learn more, but based on what I know, the CRHS parameter might be killing the validity of the model.
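      (Numerically, taking your assumed +/-0.5 and +/-0.05 uncertainties together with those typical values, the CRHS term swamps the others however you combine them:)

      ```python
      import numpy as np

      # Assumed (not reported) coefficient uncertainties: constant, CRHS, PRPM, RGBM.
      coef_err = np.array([0.5, 0.05, 0.05, 0.05])
      values   = np.array([1.0, 200.0, 5.0, 5.0])   # the constant multiplies 1

      terms = coef_err * values                     # [0.5, 10.0, 0.25, 0.25] seats
      print(np.sqrt(np.sum(terms**2)))              # ~10.0 seats if uncorrelated (quadrature)
      print(np.sum(terms))                          # 11.0 seats if fully correlated (linear)
      ```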

    • Some Body

      Can I get an explanation here? I don’t see what you mean by error in CRHS. At present, there are exactly 232 Republican-held seats in the House (and three vacancies), and there doesn’t seem to be much room for error in determining that number.

      What am I missing?

    • Sam Wang

      All the coefficients have uncertainties. The uncertainty on the CRHS coefficient, when multiplied by CRHS, contributes to the uncertainty in the fitted estimate.

    • Some Body

      I see. So it’s the 0.54 that “carries” the error, not CRHS itself (and that’s the term that contributes most to the overall error, because CRHS is a much larger number than the other two). All clear now. Thanks!

  • John Sides

    “it is necessary to report uncertainties on fit parameters, then see how that translates to prediction uncertainty.”

    You mean like we did here?

    http://themonkeycage.org/2012/09/19/how-certain-can-we-be-that-democrats-will-gain-1-seat-in-the-house/

    • Sam Wang

      Yes, like that, plus your preceding essay. The magnitude of uncertainty in multiparameter models is underappreciated, and it helps to spell these things out.

  • Jim Bristow

    Statistical arguments aside, it’s surprising that Mr. Abramowitz’s conclusions don’t include at least some comment on the fairness of a system in which one side needs to win 13% more of the total vote to gain a majority. One has to hope that is not the case.

  • bks

    Getting steeper:

    Now Democrats are shaking their heads over signs that much of any advantage they might have gained has been effectively neutralized. Their concerns stretch beyond the current HealthCare.gov website problems and reflect fears that other political mines in the implementation of the ACA could make things even worse for their party. They are working against the political clock, as the shelf life of the government-shutdown problems for the GOP runs out (assuming no additional shutdown/debt-default scares, which is hardly a safe assumption).

    http://www.nationaljournal.com/off-to-the-races/will-aca-problems-hurt-democrats-as-much-as-the-shutdown-hurt-the-gop-20131104

    –bks

  • Chad Brick

    My model is pretty simple, but I’ve concluded that Democrats need to win the two-party vote approximately 53-47 in order to take the House.

    http://sustainablestate.blogspot.jp/2013/11/on-gerrymandering.html

    You have to be careful blending pre-1994 and post-1994 data. A sea change happened that year. Not only did the gerrymandering advantage Democrats had been carrying for years disappear, but the sensitivity of seats won to changes in the vote dropped by more than half. Since 1994, a simple linear model results in about 3.7 seats switching for each percent of the vote gained. Republicans currently have about 20 seats more than they should, given the results of the last election, implying Democrats will have to win by more than five percent in order to capture a majority of seats. As far as I can tell, this is the worst gerrymandering we have ever faced.
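    (The rough arithmetic behind that last step, using the numbers stated above:)

    ```python
    seats_per_point = 3.7        # seats switching per 1% of the two-party vote, post-1994
    republican_excess = 20       # seats beyond what the last election's vote would imply
    print(republican_excess / seats_per_point)   # ~5.4%, hence "more than five percent"
    ```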

    I do find it ironic that the founders originally intended for the House to be chosen by the people and the Senate by the states. Due to the 17th amendment and gerrymandering, we now have the reverse!
