Princeton Election Consortium

A first draft of electoral history. Since 2004

The mailbag

In my evaluation of how the predictions turned out, Reader AA offered the following:

It is meaningless to compare your personal prediction to the output of other websites’ models. I believe 538 was reporting the mean of all simulations? What was the mean of your EV distribution? Alternatively you could compare the mode and median of your two models.

This is a brave sally, but is not quite right. My reply is rather technical, but perhaps a few will be interested.

First, let us consider the idea that one should compare “means vs. means.” Depending on the nature of the calculation, the mean of a probability distribution is not an appropriate parameter to report. EV outcomes are discrete, and when the distribution is spiky, the mean may not be close to any likely outcome.

Models that assign intermediate win probabilities to individual states yield a distribution that is sufficiently smoothed that the mean and median are indistinguishable. This smoothing can come from using fewer polls (and therefore more uncertainty) or from blurring out the polling data. In contrast, a pure poll-based distribution leaves fewer states uncertain, and its sharpness allows the mean and median to differ. Because I regard the median as the predictive quantity, I made a “median vs. median” comparison.
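To make the distinction concrete, here is a minimal R sketch (not the actual PEC code; the five states below are invented) that builds the exact EV distribution from per-state win probabilities and reads off its mean, median, and mode:

# Exact distribution of EV totals from independent state outcomes.
ev_dist <- function(probs, evs, total = 538) {
  d <- c(1, rep(0, total))                  # d[k+1] = P(EV total = k)
  for (i in seq_along(probs)) {
    win <- c(rep(0, evs[i]), probs[i] * d)[1:(total + 1)]
    d   <- (1 - probs[i]) * d + win         # lose: stay; win: shift by evs[i]
  }
  d
}

probs <- c(0.99, 0.95, 0.55, 0.60, 0.05)    # sharp: mostly near 0 or 1
evs   <- c(55, 31, 11, 21, 27)
d <- ev_dist(probs, evs)
c(mean   = sum((0:538) * d),
  median = min(which(cumsum(d) >= 0.5)) - 1,
  mode   = which.max(d) - 1)                # spiky, so all three can differ

Blur the probabilities toward 0.5, as assumption-heavy models effectively do, and the three statistics converge.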

But if you really want it:

Mean vs. mean: The PEC probability distribution’s last-day mean was 352.8 EV. By this metric, we were closer to the final outcome (365 EV) than FiveThirtyEight’s 348.5 EV.

Median vs. median: PEC was 352 EV (single-day snapshot) or 364 EV (final prediction); FiveThirtyEight was 348.5 EV.

Mode vs. mode: This is the game of guessing every state. At 353 EV, we tied.

So that’s two out of three, with the third one a tie.

Aaron appears to think that the specific prediction I made lacked a rationale. This is not true. For a final prediction, I could reasonably have chosen any large spike that fell within the 68% CI. As I mentioned to another commenter, for most of October the median spent most of its time on one of three values – 353, 364, and 367 EV – all quite close to the final outcome. I was highly likely to choose one of these values. In the end, I followed two lines of reasoning.

1) The EV estimator was fluctuating more than I would like. I wanted to resolve this by integrating over a longer period; the problem was how to choose that period. To do so, I examined the SD of the daily medians from day X to November 4th, where X varied from mid-September to late October. I found that this SD was fairly constant, but increased for values of X earlier than approximately October 4th. Therefore it was reasonable to use all data from that period, during which the median of the daily medians was 364 EV. (A sketch of this window test appears after point 2.)

2) The other argument is the one I gave at the time. The question was whether polls were accurate indicators of actual Election Day behavior. Since several states, notably Missouri and Indiana, were basically tied, the median was not stable. Assuming a cell-phone correction of 1% was sufficient to bring the daily snapshot back toward where it had been for the past month. I should also point out that this correction is considerably smaller than the 2.8% claimed by Silver. Also, the final prediction met the aforementioned criterion of still being inside the 68% confidence interval of the daily snapshot.
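Here is roughly what the window test in point 1 looks like in R (the daily medians below are invented for illustration):

# For each start day X, compute the SD of the daily EV medians from X
# through Election Day; integrate over the longest window before SD grows.
medians <- c(367, 364, 353, 364, 364, 367, 353, 364, 364, 364)  # hypothetical
sd_from <- sapply(1:(length(medians) - 1),
                  function(x) sd(medians[x:length(medians)]))
round(sd_from, 1)     # roughly flat: the whole window is usable
median(medians)       # the "median of medians" prediction

And a rough illustration of point 2 (the margins, standard errors, and normal-CDF step are all invented simplifications, not the actual meta-analysis):

margin <- c(MO = -0.2, IN = 0.3, NC = 0.4)  # hypothetical margins (%)
sem    <- c(MO = 1.0, IN = 1.2, NC = 0.9)   # hypothetical standard errors
pnorm(margin / sem)            # snapshot win probabilities, all near 0.5
pnorm((margin + 1) / sem)      # after a +1% cell-phone correction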

That’s the entire analysis. The corresponding assumptions in the FiveThirtyEight model include: 1) assignment of weights to pollsters, 2) correction based on national polls, 3) demographically based assignment of undecided voters, 4) an inverse Bradley effect based on primary-season voting patterns, 5) an overall drift in sentiment between poll day and Election Day, and 6) a variable half-life of polls, all topped off with 7) Monte Carlo simulations that only approximated the exact distribution. The summed effect of all these assumptions was a difference of 3.5 EV from our final snapshot of last-week polls. A further cumulative effect of the assumptions is that simulation is no longer even necessary: with many uncertain states, the same answer can be obtained by taking a simple probability-weighted average of EV.
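The last point is just linearity of expectation: the mean needs no simulation because it equals sum(p_i * EV_i) exactly. A quick check in R, with invented states:

# By linearity of expectation, the mean EV requires no Monte Carlo.
set.seed(1)
probs <- runif(20)                          # 20 hypothetical uncertain states
evs   <- sample(3:35, 20, replace = TRUE)
exact <- sum(probs * evs)
sims  <- replicate(1e4, sum((runif(20) < probs) * evs))
c(exact = exact, simulated = mean(sims))    # agree to within sampling error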

It’s a shame to see an otherwise very engaging web resource marred so. But this intricacy was very much a part of the attraction. As Observer writes:

Your methods don’t generate a lot of debate, ergo far lower traffic and far less discussion….A simple clear picture wouldn’t have helped newspaper sales, and it apparently doesn’t drive traffic here either. Sad, but that’s how it is.

FiveThirtyEight does have other merits that make the site quite appealing. Silver and Quinn are fun and brash. They did original reporting, unusual for bloggers. Conventional wisdom is settling in that the Model (I think they call it that) did well. Ironically, it’s the weakest aspect of the site. Luckily for them it was an easy year for analysis.

35 Comments

  • Observer

    Sam, you are of course correct in almost every respect. (I’ll get to the — quite minor — quibble in a moment.) Your estimate for EVs on election day was closer. Your popular vote estimate was — slightly — closer. You both called the states the same way.

    I also thought at the time, in the last few days before the election, that Silver’s elaborate modeling was understating the visible momentum in the polling toward Obama. Silver’s persistent belief that the election would regress toward multi-cycle averages for the two parties by state (and his weighting the polls accordingly) seemed to me to stop being meaningful (at the latest) when there was essentially ‘no time left on the clock.’ That (and thinking hard about the free analysis you and Silver provided on these sites) got me to predict 379 (it’s there on the prediction thread). If Missouri had come in for me (it might yet, but probably not), that would have been closer than Silver’s 348.5, and yours! (But foolishly my number was predicated on MO + GA and not IN. Can’t begin to tell you how happily surprised I was that my redneck brother’s state of IN went blue.)

    The point you have best been making is the efficiency of your methods. You and Silver were only a hair’s-breadth apart on the popular vote, and you were closer on the EVs — all with straightforward poll-averaging. No muss or fuss, no endless arguments over assumptions, no (credible) claims of thumb on the scales. The numbers, as you have crunched them, apparently don’t lie.

    Oh, the one quibble: your methods don’t generate a lot of debate, ergo far lower traffic and far less discussion.

    Of course that’s because little is needed to get an accurate result. But then the social aspect of the site is considerably damped down. I did eventually get tired of, and cut way down on, reading the interminable mindless ranting posts over there, though.

  • Sam Wang

    If I follow your last point, the very complexity of his model generates discussion (and enthusiasm). That’s an interesting problem – it implies that there’s no reward for being simple in the analysis.

    Through this election season I made a commitment at the start to stick with the simpler methods. It was partly because the calculation seemed most useful as a pure snapshot, but also partly because unsupported assumptions led to problems in 2004.

    It’s too bad that we don’t have that kind of back-and-forth here. But thanks for coming all the way over to this hidden post!

  • Observer

    “If I follow your last point, the very complexity of his model generates discussion (and enthusiasm). ”

    That is pretty much right. In the earlier months, complexity (and uncertainty, via lots of time left on the clock) did drive considerable debate and discussion on his site, some of it quite sophisticated statistical commentary. As the clock wound down and the picture got clearer, the ratio of ranting to discussion soared, peaking after the ‘Palin uptick’ for the Repub ticket.

    ‘no reward for being simple in the analysis’ — that’s another version of the ‘media obsession with close horse race narrative’ — regardless of the real state of the contest. A simple clear picture wouldn’t have helped newspaper sales, and it apparently doesn’t drive traffic here either. Sad, but that’s how it is.

    One thing that was genuinely very useful over there was the reports from the road his team posted. Their observations of the exceedingly feeble McCain effort on the ground in many states gave me better confidence that the polling results would hold up through election day.

    I noticed your earlier comments that ‘putting your thumb on the scale’ in 2004 just got you a bitten thumb. And I’m impressed that the pure simplicity approach came through with accurate results again this year.

    I’ll definitely be back here next election cycle.
    And I hope you’ll cook up something to offer for the mid-term Congressional elections.

  • Sam Wang

    Observer – It’s ironic that people who started as poll aggregators made one of their best contributions by doing some old-fashioned, original, on-the-road reporting.

    It might be enjoyable to come back to this in two years. But I would need to gin up some drama. I don’t know, challenge the competition to a $10k bet?

  • semiotic_guerilla

    I’ve been lurking here since September, just as on 538 and pollster.com. Numbers don’t lie and you won this thing, but from my point of view (someone from Eastern Europe who researches and writes about political communication and electoral politics) the real value of sites like yours, Nate’s, or pollster.com is in-depth analysis of all those numbers and outcomes in terms of what they actually mean.

    I love the Zen-style simplicity of your model, but what really gave me a lot of food for thought was your article about the economy of the poll as news – or rather, as Boorstin would say, as pseudo-event.

    I started to love the numbers again (after some bad experience with statistics, or rather with a teacher of statistics, at my alma mater) thanks to you and Nate, though I work rather on iconic-symbolic aspects of communication, visual and verbal rhetoric, and similar stuff. What is cool in your model (from my point of view) is that it allows one to recognize connections between numbers and context much better than national trackers, Nate’s supertracker, or the poll averages and regressions of pollster.com. For example, the juxtaposition of your histogram with a timeline of spending on negative advertising by both candidates is really illuminating.

    Thanks a lot for your great job.

    One thing that makes me wonder is why exit polls in the US are so much off the mark. I mean, in my country polls are often very inaccurate (though we don’t have fifty states, so it should be easier :) but exit polls are always within the MOE of the real outcome. Why is that so?

  • semiotic_guerilla

    One more thing. I think another great value of your model – and something that makes it less newsworthy than reporting single outliers showing the race tightening to fit the horse-race narrative – is that it gave a very stable picture of the race. It’s no news – you show trench warfare like during WWI and the media want Desert Storm. And although the general outcome (Obama wins) was very stable even during the McCain/Palin convention bounce, your histogram still offers a much better picture of the interaction between campaign context and public opinion. That’s the beauty and biggest strength (for me) of your approach. Once more, thanks for the great job, and I really envy that you guys in the US have not only such accurate polls but also amazing and insightful aggregators and projection models.

    Oh and sorry for all my spelling errors.

  • Scott

    What bugs me is that the last couple of times Silver has been on MSNBC (most recently Matthews last night, talking about the remaining Senate races), he has been billed as the guy who beat out the rest of the poll aggregators by being more accurate. Of course, Nate doesn’t correct them when they say that; how’s that for accuracy?

  • mddemocrat

    Although this faux competition may actually provide the, er, “debate and discussion” it appears the public craves, I truly enjoy the insight and the comments of both sites. Nate and you are both doing great work, and both add to the database for everyone.

    If some ballyhoo will accrue some spotlight to both sites, all the better!

    Thanks for your great work!

  • Vicki Vance

    I liked your site the best by far during the election, Sam. The simplicity was reassuring. I think a lot of people thought the same. I still check in every day. I tried 538 a couple of times, but it just didn’t do it for me. Anyhow, you won whether you get the credit or not. Also, you have your other life – this is just a sidelight for you. By the way – I am buying your book for my daughter for Christmas.

  • Michael

    I felt that Silver’s assumptions and simulations were the least interesting part of 538, and that Sean Quinn’s reporting and Brett Marty’s photography were the best things over there. Not that I’m suggesting you hire a reporter and a photographer for the next cycle, Sam.

    Then again, considering Nate’s background, what else would we expect from him? Running simulations is what baseball stats geeks do. It’s a reasonable way of attempting to translate statistical analysis of individual player performances into predictions of team level outcomes. It’s also fun. His mistake, if there was one, lies in drawing an analogy between a protracted contest like a baseball season and a discrete event like an election.

    As far as I know, Nate’s only substantive criticism of Sam’s simple meta-analysis was that it addressed “a largely meaningless question” — namely, what would be the likely outcome if the election were held today. If you want to predict the future, as Silver claimed he was trying to do, then you’re going to have to make some assumptions. If your assumptions turn out to be more correct than other folks’ assumptions, then you stand a better chance of predicting the outcome of an election that’s a couple of months away. But the closer you get to election day, the less value there is in any tweaking you might want to do, since there’s really no future left to predict. When the election is tomorrow, the answer to the question “what would happen if the election were held today?” is pretty much exactly what you’re looking for.

  • gprimos1

    Dr Wang,

    Thanks once again for your update. I would like to comment quickly on the confidence intervals and predictions early in time. I was not so interested in the final EV point estimator as I was in the 95% CI. As I mentioned before, your prediction had a MUCH smaller CI compared to Nate’s. Even the CI YOU provided may have been too wide, considering how close the final EV came to the median prediction. What is the probability of seeing a result within that range of the mean? I guesstimate that it would be somewhere near 1/10, itself a relatively improbable occurrence given your model’s variance.

    Now I take the issue of CIs to the time dimension. The one advantage that one could argue for Nate’s model is that it attempts to model the uncertainty in the polls very early in the season compared to Election Day. Ideally, this would be able to account for all likely shifts in opinion due to the usual gaffes, debates and minor events (although black swans are always lurking around the corner, but let’s ignore them for the moment). That being the case, Nate’s CI should theoretically have been more likely to contain the final EV for a longer amount of time than your model, and as far as I can recall it did, though he did not provide an EV-estimator-over-time graph, only a national vote prediction. All this is no criticism of your model, since you were only trying to predict the result if the election were held on any given day, not Election Day. Anyway, as time went on, Nate’s CI just did not shrink as much as it needed to bring it in line with the final Election Day results. Even a 2% chance of McCain winning was absurd given the final results.

    I would like to see a model that combines the accuracy of the simple model with some attempt to model the variance versus Election Day. It seems like just adding one additional factor might have gone a long way toward that end (that is, the mean historical difference between polls and final results). As we approached Election Day, it would converge to the simple model, but it would be more likely to contain the final EV in the CI earlier in time.
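    A minimal R sketch of that idea (every number below is invented): inflate the snapshot’s EV uncertainty by a historical drift term that shrinks to zero on Election Day.

    snapshot_sd <- 15                  # EV SD of the pure polling snapshot
    drift <- 3                         # assumed historical drift per sqrt(day)
    ci_sd <- function(days_left) sqrt(snapshot_sd^2 + drift^2 * days_left)
    round(ci_sd(c(60, 30, 7, 0)))      # 28, 22, 17, 15: converges to snapshot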

  • hugoboom

    I agree with Michael in general. The reporting over at 538 was interesting and in some cases they were covering topics that were not receiving attention elsewhere.

    But I also liked the pollster handicapping. The ability to look at the leans and the biases and then go to Pollster and construct my own model using their java app.

    So I viewed the three sites as complementary in the strictest sense. And although I did not look elsewhere for my polling news … I would have been missing something if I did not have all three options.

    Thanks.

  • Sam Wang

    Michael and gprimos1 – FiveThirtyEight’s time course pretty much went up and down in parallel with my index. In fact, it even crossed below 270 EV once during the peak of Palinmania, which was probably because of the national-poll correction. This was more or less an error on his part, having to do with Palin’s average national support being concentrated in red states. He even discovered this fact, but did not act upon it.

    If you look at the assumptions, there was no way the two trackers could do anything else. The one here responded more slowly during times when there were few state polls. But basically, that’s it – these are tracking indexes. To call one a “prediction” of any different accuracy than a same-day snapshot is basically baloney. But for some reason, people believed it.

    Hugoboom – I agree about the interesting aspects of FiveThirtyEight. The photography and on-the-road reporting were great. The pollster accuracy was interesting, though that information was available elsewhere.

  • gprimos1

    Dr Wang,

    Just to make clear: when you say that Nate’s index went up and down with yours, my impression is that you are referring to the EV central tendency. I think any decent model relying on public information would basically agree on that for any given point in time. The real “predictive” question is the variance of the EV estimator versus Election Day. I would think a predictive model would have much larger variance early on, which would shrink as we approach Election Day. How would one make any valid estimate of the variance (EV current versus Election Day) without some kind of predictive assumptions?

  • Alexander Yuan

    I agree with gprimos1 here. It’s probably fairer to 538 to evaluate them on their projections from throughout the relevant period, since that’s what they actually tried to do. Their goal for the vast majority of the time wasn’t to snapshot the race, so I think it’s not unexpected that they would do worse scrambling to change their model in the final few days.

    It’s worth noting that their projection bounced around a lot over time, so perhaps they didn’t achieve their goal very well (for example, they had a convention bounce correction but removed it because of…popular demand, of all reasons).

    But since your model’s goal and their model’s goal are different, I don’t really think there’s a way to compare the two.

    @gprimos1 – “Even a 2% chance of McCain winning was absurd given the final results.” If you’re already given the final results, anything other than 0% would be absurd, no? ;)

  • George Smiley

    Sure, your predictions were slightly more accurate, and your model was quite a bit simpler. Those are things of real merit.

    Nevertheless, there were compelling reasons to read 538. They covered a lot of downticket races that you did not, and they made a lot of raw data easily accessible in tabular form, which you did not. They also covered a lot of the mechanics of the campaigns that you ignored. These mechanics may not be useful as predictors, but they are useful if we are interested in thinking about what happened — in formulating post-hoc explanations (or hypotheses).

    Frankly, your constant comparisons to 538 — and the general tone here — are surprisingly whiny. Get over yourself (this may be an impossibility, as I’ve found Princeton faculty in general to be even more arrogant than Harvard faculty).

  • Michael

    To call one a “prediction” of any different accuracy than a same-day snapshot is basically baloney.

    Right. That’s why I said predicting the future was what “Silver claimed he was trying to do.” I didn’t mean to suggest that I believed he was predicting anything. If you’re really going to get into the prediction business, you can’t go changing your predictions every day. Can you?

  • Sam Wang

    George Smiley, you left a fake address. Normally I don’t like that sort of thing. But you have made telling points.

    I agree about all the positives that you cite at FiveThirtyEight. I think that there are many resources that are quite useful. They’ve mainstreamed this rather odd hobby, which is quite an achievement. It’s also true that I would never do all of the tabulation of data, mainly because I consider it the exact opposite of what ought to be done.

    Part of what underlies my sniping is that there are many hobbyists with all kinds of electoral models. In reaction to all of them, I’ve advocated leaving out unwarranted assumptions. For me this is a hobby that got out of hand. It’s not particularly hard to do this kind of analysis well. Anyone can do it, as long as a proper aesthetic simplicity is followed.

    Now one hobbyist has gotten quite famous, and has fans who think that’s the way to build a model. What came out wasn’t false, but it wouldn’t be a good way to analyze a closer election accurately. I also think some wrong lessons are learned if one holds up subpar work for admiration.

    I was just corresponding with Andrea Moro, another hobbyist. He thinks it’s unnecessary to do this any more now that the whole thing has mainstreamed. Maybe he’s right.

    If I come off as whiny, my apologies. I am only now starting to grasp that many readers of this and other sites don’t really distinguish between the flavors of models. There are more substantive things that can be said. In the future I’ll try to stick with those. Thanks for your feedback, even though it came with a low blow or two.

  • Aaron Andalman

    Here are my thoughts:

    1) In general, my feeling is that looking carefully at which model performed more-close-to-perfect in this election is not the essential question. We are working with an n of 1 by definition, at least until 2010. Rather, the key is that your model performed just as well with far, far less complexity. This, I believe, is what makes your model superior – not that your median was 4 EV higher than his in this particular instance. (Although it might be possible to estimate the 95% confidence intervals of the medians by feeding the models random subsets of the available pre-election polls.)

    2) I didn’t intend to imply that your guess lacked rationale. I was rather reacting to the use of your rational guess in comparing the validity of two models. In the end, I still believe that your selection of 364 does not significantly inform our understanding of which model performed better.

    3) Less importantly, how do you know 538’s mean and median are equal? 538’s model is impossible to fully understand, but I would assume his Monte Carlo simulation results in a peaky distribution just like yours. And I don’t remember seeing Nate report anything more than the mean and the common outcomes (338, 353, 364, and a few others I can’t remember).

    (Finally, unrelated to my original post. How did you come up with the polynomial expansion trick? It is very clever and I’m curious.)

    Thanks for the exchange.

  • Sam Wang

    Aaron Andalman – The same method worked well in 2004. Offline, I found that the approach to House and Senate races was dead-on in 2006. So I came into 2008 with a lot of confidence that a simple approach could do quite well.

    In regard to the median and mean being the same: when many state probabilities are intermediate in this problem, the overall distribution starts to take on the properties of a normal distribution. That’s always the case for sums of many independent random variables – it’s called the Central Limit Theorem.
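    For instance (a toy example, not the actual calculation), thirty hypothetical toss-up states worth 10 EV each give a nearly normal distribution whose mean and median coincide:

    d <- dbinom(0:30, 30, 0.5)     # 30 toss-ups at 10 EV apiece
    c(mean   = 10 * sum((0:30) * d),
      median = 10 * (min(which(cumsum(d) >= 0.5)) - 1))   # both 150 EV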

    The polynomial trick – Lee, a reader from 2004, sent that one in. Before that, for about a month I went through all 2^N battleground-state permutations. I am embarrassed about that.
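    For the curious, here is a toy R version of both approaches (three invented states), showing that the polynomial product reproduces the brute-force enumeration:

    probs <- c(0.6, 0.5, 0.7); evs <- c(11, 21, 27)   # invented states
    # Brute force: all 2^N win/lose patterns (hopeless beyond a dozen states).
    grid   <- as.matrix(expand.grid(rep(list(0:1), length(probs))))
    ev_tot <- as.vector(grid %*% evs)
    p_tot  <- apply(grid, 1, function(w) prod(ifelse(w == 1, probs, 1 - probs)))
    brute  <- tapply(p_tot, ev_tot, sum)
    # Polynomial trick: expand prod_i (1 - p_i + p_i * x^ev_i); the
    # coefficient of x^k is P(total = k EV). One pass through the states.
    poly <- 1
    for (i in seq_along(probs))
      poly <- c((1 - probs[i]) * poly, rep(0, evs[i])) +
              c(rep(0, evs[i]), probs[i] * poly)
    all.equal(as.numeric(brute), poly[as.numeric(names(brute)) + 1])  # TRUE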

  • Observer

    “I don’t know, challenge the competition to a $10k bet?”

    Very good, Sam. And very bad.

    You might challenge three or four of the aggregator sites to a poker-type bet — each puts in, say, $5K. Site with the final EV number or closest takes all. That would drive traffic, and media attention. All sites in the pool would actually win.

    But – it would just add to the already noxious excessive focus on the horse race aspect!

  • Mike

    Dr. Wang-

    First off, congratulations on a great job this election season. I loved reading this site in 2004 and 2008. I also very much enjoyed reading 538 as well, and thought there was a lot of value in reading both. I do have to agree slightly with “George” though. While it is both instructive and interesting to read about what you think is right about your methodology and what you think is wrong about Nate’s, your tone increasingly comes across as rather condescending toward Nate and bordering on disrespectful. I think Nate has added a lot to the discussion and deserves to be commended for that even if you outperformed him. Regardless, I hope both sites are still around four years from now. Knowing Nate’s work with PECOTA

  • Mike

    Sorry for the double post…

    Knowing Nate’s work with PECOTA, he’ll try to improve on his model, and perhaps everyone will “benefit” from even more accurate projection models in the future.

  • Vicki Vance

    Whatever. I don’t think anyone needs to worry about Nate. When he said this site was “largely meaningless”, he was just plain wrong….. and he is getting too much attention and is full of himself. Personally, I think all this talk about him is getting tiresome. I used to look at Sam’s site every day and think “if the election was today, Obama would win :):)” – then one day the election WAS today and Obama DID win, almost exactly the way Sam’s model said. It gave me a great deal of comfort during the election without all the craziness and worry associated with guessing what would happen in the end. LOVED THIS SITE!! Sam – you’re the best.

  • Observer

    Sam, it is true that you have made some positive comments about FiveThirtyEight along the way. Also true that a good part of the traffic over there was driven by debate over assumptions that didn’t get him as close to the final result as your lean, elegant approach anyway. Also true that Nate is a bit full of himself now (not too much, I think, given his real achievement — which was that “[t]hey’ve mainstreamed this rather odd hobby”).

    But I also did pick up a whiff of jealousy here, of the ‘I’m more accurate [true], why isn’t the world paying more attention to me’ variety. I sympathize with you on that, but being a celebrity statistician is hard . . . .

  • William

    I actually think that his popularity is less due to controversy over the complexity of his model, and more due to the fact that FiveThirtyEight.com was not originally its own website, but a Daily Kos diary, and thus he already had an established base when he spun it off into his own website.

  • William

    Also, I think it might be instructive to take his assumptions one by one and feed each one separately (not all at once) into your model. Personally, I think that accounting for “house effects” might have helped with the volatility of your model.

  • Sam Wang

    Mike – I’ve been reflecting on your point regarding Silver’s positive contributions. I have remarked upon them repeatedly – perhaps that’s not clear.

    Regarding PECOTA, however it was applied, it’s like other approaches that hobbyists have taken. It didn’t improve electoral prediction and would not have helped in 2004 or 2006. Empirically there’s no evidence that it’s necessary. I have a lot of trouble identifying a reason to cling to it. Why not lose it – the rest of that site should be plenty.

  • Brad Stiritz

    Hi Sam,

    I never picked up on this rivalry vs 538, but reading about it now makes me feel a little sad.. What, so the other guy was more popular? Come on man, do you really give a crap? You have the superior model, you have a great job at a great institution, you just published a book, etc etc. Not to mention, of course, that your man won the election (and isn’t that kind of the greater good here after all?) Dude, you should be on top of the world.. :) ~brad

  • Sam Wang

    Brad – Thanks for pointing all these things out. Perhaps the best thing at this point is to watch the last numbers come in and call it a day!

  • JJtw

    Nate’s initial success was largely built on being very accurate in the primaries. His extremely accurate predictions of the Indiana and North Carolina Democratic primary results were contrary to the conventional wisdom; they shocked everyone in the mainstream and made many geeks (too lazy or busy to actually do the analysis themselves) happy and/or jealous. In retrospect, the utility of his approach was obvious, but it took him to put out a testable hypothesis. For that he deserves a lot of credit. This cemented his credibility, and he built from there.

    However, I contend that his prognostications for the general election were far less insightful (mostly navel-gazing, IMO) than his primary modeling, though I agree with many commenters here who commend Sean and Brett’s original reporting. I’d also like to give Nate some credit for the off-the-cuff mini-analyses he’d do with the news item of the day: for example, his highlighting of the Harvard study about the Bradley effect, his quick comparison of landline-only vs. cell-phone-including pollsters, his analysis of support for Illinois Senate successors, etc. It is nice to have a media darling who is willing to amplify existing analyses, or do quickie analyses himself, to challenge statistically flawed conventional wisdom.

    Ultimately, I have to agree with Sam that his modeling mainly added drama and noise, as long as we are talking only about the general election. His model was really way too complicated. The reason his primary model was so good was that it was simple.*

    – J.J.

    * What follows is my quick guess at what Nate did for the primaries, attached as a rather excessively long postscript. Despite its length, it is a rather simple idea, taking advantage of the fact that many very accurate polls of the primary race had already been taken IN PREVIOUS ELECTIONS.

    ===

    The following is my vague recollection of what he did, though I’m sure the details differed a bit.

    Rather than predicting a single discrete event like a national election, he decided to test the assumption that demographic variables alone are capable of predicting the vote outcome, as long as one can measure the influence of those demographic variables directly. Fortunately, he was able to get real results from the earlier primaries.

    More or less what he did, as I understand it now more than 6 months later, is construct a data frame containing one response variable (the vote share) and a crapload (that’s a technical term) of potential demographic predictors associated with each outcome. For all counties in each state (or congressional districts in each state, I forget which), he had data on the outcome and the demography for every election up to and including Super Tuesday. He then used some standard statistical methods to figure out the best predictors (AIC maybe?).

    The hardest technical part would be figuring out the informative variables. But once you did that, it would be gravy. The basic procedure can be sketched out in just a few lines of code, at least the way I’m thinking about it.

    # The real hard work is compiling the data.
    library(MASS)  # for stepAIC
    finishedPrimaries <- read.table('primaries.txt', header = TRUE)
    ncinDemog <- read.table('ncinDemog.txt', header = TRUE)

    # This is too simple to pick the predictors, but this is the basic idea.
    model <- lm(voteshare ~ race * age * education * income,
                data = finishedPrimaries)
    AIC.model <- stepAIC(model)

    # County-level predictions for NC/IN from the AIC-selected model.
    NatePrediction <- predict(AIC.model, newdata = ncinDemog)

    # Sum over the counties, weighting by turnout, to predict the state
    # ('turnout' here is an assumed column of ncinDemog).
    statePrediction <- weighted.mean(NatePrediction, ncinDemog$turnout)

  • Sam Wang

    JJtw – I wasn’t paying close attention to primary-season polls. If I understand correctly, he used demographics to fill in for sparse polls? That would be a justifiable use of his modeling approaches.

    The general-election model mainly served to add noise to the rather abundant polling data. A possible exception was North Dakota, where the sparse data made it hard to make an accurate prediction.

    Finally, yes, his off-the-cuff analyses were always interesting to read. The press has low standards in this regard. For example, the incessant commentary on the Bradley effect drove me up the wall. However, I found a fair fraction of Silver’s comments to be unbelievable as well: the ostensible cell-phone bias among pollsters, the reverse Bradley effect, and I forget what else. A major exception was his observation that McCain’s post-convention (and post-Palin) bounce was concentrated in frontier states such as Montana and North Dakota. This was quite good, though he never connected the dots to see that it undercut a step of his own model, the national-poll adjustment.

  • Hans

    Dr. Wang:

    In regard to comment 19 and your reply to it, I do think the principal objection to your methods that could be raised is simply that you only have two test cases (2004 and 2008).

    But it seems like that could easily be remedied. Is there any data source available that would allow you to look at older elections and reconstruct what the meta-analysis would have looked like? It would seem that 1996 (which was also a landslide in the electoral college) and 2000 (which was even tighter than 2004) would both be very useful test cases for your methods.

    It would seem like even a fairly small amount of legwork by your readers/research assistants could construct an appropriate data set if one is not available: simply go to the library and record, from the major newspapers of the appropriate timeframe, the reported poll numbers. If I still lived in Indiana (and not Ontario) I would gladly participate in such a project.

    It seems like it shouldn’t be necessary to wait until 2010 to collect more evidence.

  • Hans

    It also would seem that I use the phrase “it would seem” pathologically frequently. Forgive me for sounding like an idiot.

  • Sam Wang

    Hans – Obsessive, repeated polling in many states is a relatively recent phenomenon. That’s why this hobby didn’t exist in, say, 1996. In this context, I thought I was providing a lot of information. On the up side, polls in 1996 (or better yet, 1992) might be few enough to be dug out without that much time. It would be interesting to have someone do that.

    I developed an interest in this topic in 2000 when I read Ryan Lizza’s polling resource over at The New Republic. I noticed that the entire election hinged on the Florida outcome. In a sense, it was rewarding to be proved right.

    You should explore the backfiles of electoral-vote.com. When it comes to historical records of this kind of aggregation, Andy Tanenbaum is the dude.
