Site improvements (and Bayesian jealousy)

September 26, 2012 by Sam Wang

Some improvements from Andrew Ferguson: The update date/time above now links to the history graph, and the Obama/Romney numbers link to the map. In the map itself, clicking generates a popup window with lots of little features, especially if you start right-clicking on particular states. Finally, over to the right, under The Power Of Your Vote, the “Obama/Romney +X%” values now link directly to Pollster.com, so you can easily compare the median of the last 3 days’ (or 7 days’) polls with the original data source.

Short-term predictions are coming soon.

In other Presidential-race news, check out this interesting and aesthetically lovely site, Votamatic. Prof. Drew Linzer at Emory starts from fundamentals-based information to calculate a range of likely possibilities (called a Bayesian prior, where other political science models basically stop). Then he uses all available national and state polls to sharpen the picture, not blur it. I suspect it might weight the prior more than I would (I’m still reading). But generally, it is a logical way to combine general start-of-season knowledge with up-to-the-moment polling. My hat’s off.

At his site is a graphical interface that allows you to sort all the state margins in order. Notice that North Carolina and Indiana have a substantial gap between them. For this reason, breaking past Obama 347 / Romney 191 EV is unlikely.
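For the curious, the core of the prior-plus-polls idea fits in a few lines. Here is a cartoon of it, boiled down to a single national number; the figures and the helper function are illustrative assumptions, not Linzer's actual model.

```python
# Cartoon of a fundamentals-prior-plus-polls update for one national number.
# All figures and the helper are illustrative assumptions, not Linzer's model.

def normal_update(prior_mean, prior_sd, poll_mean, poll_se):
    """Precision-weighted combination of a normal prior and a normal poll estimate."""
    w_prior = 1.0 / prior_sd ** 2
    w_polls = 1.0 / poll_se ** 2
    post_mean = (w_prior * prior_mean + w_polls * poll_mean) / (w_prior + w_polls)
    post_sd = (w_prior + w_polls) ** -0.5
    return post_mean, post_sd

# Hypothetical inputs: fundamentals say Obama 52.4 +/- 2.0 (two-party %);
# the current poll average says 52.8 +/- 0.5.
mean, sd = normal_update(52.4, 2.0, 52.8, 0.5)
print(round(mean, 2), round(sd, 2))   # ~52.78 +/- 0.49 -- the polls dominate
```

With this much polling the prior barely moves the answer; its influence is mostly confined to the early season and to sparsely polled states.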

36 Comments

Xtalographer says:

Wow, great website! Thanks for the link.

Not a skeptic, but ... says:

Time to put the 47% gaffe as a separate marker on the graphs?

Matt McIrvin says:

Man, I have no idea what is going on in New Hampshire, and I am literally now sitting three miles from New Hampshire.

Tapen Sinha says:

A Bayesian prior is a great concept – in theory. In reality, it is another matter altogether. Either you need a huge amount of micro data, or you ASSUME some prior: you can start with a no-information uniform prior, or you can impose something more specific. That’s the tricky part. I have seen those models up close and personal in central banks and in investment houses. They are mostly ad hoc beyond a certain point, and the results you get are, for the most part, not usable in real life. The solutions are typically knife-edge: a small change, and your model explodes.
My two centavos’ worth.
Tapen

Sam Wang says:

All the more reason to assume a broad prior. I guess in this lingo, I use an “indifferent prior.” I think the priors are mainly useful for predicting exact margins in poorly-polled states.

Drew says:

Tapen: Yes, the prior is arbitrary to a certain extent. One way to think about my model is just as a fancy way to update whatever prior you’ve happened to select. Obviously if you want to get better forecasts, it’s preferable to pick an intelligent prior (I use the Abramowitz model, which has a good track record), but the priors don’t get all that much weight anyway, and the model is fairly robust to the choice.

Joel says:

Linzer’s projection trajectories are *very* smooth. Is that weird?

Drew says:

I actually think it’s a good thing. The forecasts are meant to update in a gradual manner during the campaign, before converging (hopefully) on the actual outcome. The daily ups-and-downs in the polls – which you can see on my site too – shouldn’t have too drastic an effect on the forecast, especially early in the campaign.

OwlOfMinerva says:

Pretty graphics, but ultimately it’s one of those three-factor poli-sci regressions (“Time for Change”) with some sensitivity to polling. Those models are great for predicting the past. His prior (Obama with 52.4%) is intuitively reasonable, which is why his results seem so reasonable. But it is still completely ad hoc.
By late September there is no good reason to be multiplying the abundant data by any sort of prior.

Drew says:

At this point in the campaign the prior is really only doing two things. One is, it’s helping to peg the level of national swing, to rein in/shrink the forecasts from being too sensitive to daily polling. The other thing it’s doing is calibrating the trendlines in under-polled states. So for example, Arkansas hadn’t been polled until just recently, but I already had a pretty good idea from the prior that when it was eventually polled, it would come in at around 39% for Obama (as it did). You can see on my site that I estimate trendlines for states like Alaska and Delaware even though there aren’t any polls.
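To illustrate the idea with a toy calculation (not the actual Votamatic code): take a state's 2008 result and shift it by the national swing implied by the fundamentals prior. The state shares and the uniform-swing assumption below are approximations for the sketch.

```python
# Toy version of a prior-based guess for an unpolled state (not Votamatic code).
# Assumption: uniform swing from 2008, anchored to a fundamentals prior of 52.4%.
obama_2008 = {"AR": 38.9, "AK": 37.9, "DE": 61.9}   # approximate 2008 Obama shares
national_2008 = 52.9                  # Obama's approximate 2008 national share
prior_national_2012 = 52.4            # fundamentals-based prior for 2012

swing = prior_national_2012 - national_2008
for state, share in obama_2008.items():
    print(state, round(share + swing, 1))
# AR lands in the high 30s before a single 2012 poll is taken -- roughly the
# ~39% figure mentioned above.
```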

MarkS says:

“By late September there is no good reason to be multiplying the abundant data by any sort of prior.”
This statement betrays a fundamental misunderstanding. Data is ALWAYS multiplied by a prior, either explicitly (Drew) or implicitly (Sam). When there is a lot of data, different priors should give the same final result. But Drew’s carefully thought-through prior is likely to give better results when data is sparse (for, say, a particular state).

OwlofMinerva says:

“This statement betrays a fundamental misunderstanding. Data is ALWAYS multiplied by a prior,”
No, you are simply nitpicking the semantics.

OwlofMinerva says:

Drew, are you saying that the “Time for Change” model no longer has much influence on your forecasts? What would be the effect if you tweaked the Time for Change parameters to give Obama, say, a 47.6% prior instead of 52.4%?
“too sensitive” is a tradeoff. Your model appears to have very little sensitivity at all to movements in the polls.
The sparsely polled states aren’t polled because they aren’t particularly interesting. Just shift the national polling by some delta for each state… which appears to be pretty much what you’ve done, no?

MarkS says:

It is much more than semantics. Before any poll is taken, Sam would say that Obama’s chance of winning is 50%. Drew would say that it is 52.4%. These are both priors, and they are both ad hoc. Bayesian analysis recognizes that there is no way to escape making ad hoc assumptions before data is collected, and therefore it’s best to specify these assumptions explicitly.

OwlofMinerva says:

Mark S: “When there is a lot of data, different priors should give the same final result”
In fact, if anything this is a “fundamental misunderstanding”, but I understand what you are trying to say, so I won’t accuse you of not understanding basic statistics in ALLCAPS.
At any rate, to clarify, what I was attempting to say is that I see no good reason in late September to be filtering the polling data through an econometric/polisci prior.

OwlofMinerva says:

Yes, thank you again for the basic statistics lesson.
So here is a basic statistics lesson back at you: The prior is defined not only by its mean, but also by its uncertainty (plus higher moments like skewness, fat tails, etc.).

Sam Wang says:

My first thought is that one would probably want to set the minimum width of the prior to reflect the SD of national opinion swing during a re-election race. I took this as 2.2%. I’d set it somewhat wider to be conservative, i.e. not overly restrictive.
My second thought is that there is no need for nastiness or name-calling when having a discussion on how to set a Bayesian prior.

Drew says:

Owl: The Time for Change model is still having some effect, yes, in adjusting the overall swing of the states. For example, Obama had been running slightly behind his long-term forecast for much of the summer; now he’s slightly ahead of it. So the prior corrects for that in aggregate, but pretty soon, as we get closer to Election Day, it will stop even doing that. I don’t know exactly what would happen if I fed in a prior of 47.6% instead of 52.4%, but it would certainly pull down the forecasts somewhat. Again, though, that effect will wear off before too long.
As for the sensitivity to the polls, the EV and state forecasts have shifted up within the last week in response to the poll movement. Aside from that, public opinion has been very stable this year anyway. You’re welcome to compare to how the model performed in 2008: http://bit.ly/Ne486l

OwlofMinerva says:

Opinion will drift between today and Nov. 6. IMO there is a significant difference between (1) modeling the variance and kurtosis of this drift, and (2) modeling a non-zero mean, which indicates that somehow you have information about which way the polls will move.
It’s not that I believe the polls are an “efficient market” – but if you are going to forecast a direction, you need to make a case based on leading indicators. Whatever their merits, IMO models like “Time for Change” are not leading indicators five weeks before an election.

MarkS says:

If there is “no good reason” to filter polling data through an economic/polsci prior, then there is also no good reason to filter it through a flat prior. Every choice of prior, including the flat one, reflects an underlying assumption.
Note also that Sam uses an infinitely sharp time-dependent prior on polling data, dropping polls completely after they become roughly one week old. What is the fundamental justification for this prior? Answer: there is none. It’s just a choice. The point is that some particular choice must always be made.
If I had to bet on whose model would give more accurate results, I would bet on Drew’s, because it uses more of the available data in a sensible way.
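For concreteness, here is roughly what that sharp time-window rule looks like in code: my paraphrase of the recipe described above (last 7 days of polls, or the last 3 polls when the window is thin, then the median), not the site's actual script.

```python
# Paraphrase of the sharp time-window rule, not the site's actual script.
from datetime import date, timedelta
from statistics import median

def state_margin(polls, today, window_days=7, min_polls=3):
    """polls: list of (end_date, Obama-minus-Romney margin) for one state."""
    cutoff = today - timedelta(days=window_days)
    recent = [m for d, m in polls if d >= cutoff]
    if len(recent) < min_polls:                           # too few recent polls:
        recent = [m for d, m in sorted(polls)[-min_polls:]]  # fall back to last 3
    return median(recent)

polls = [(date(2012, 9, 24), 4.0), (date(2012, 9, 23), 6.0),
         (date(2012, 9, 21), 5.0), (date(2012, 9, 12), 1.0)]
print(state_margin(polls, date(2012, 9, 26)))   # 5.0 -- the stale Sept 12 poll drops out
```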

Drew says:

It’s debatable. Andrew Gelman wrote about this earlier today. http://andrewgelman.com/2012/09/a-non-random-walk-down-campaign-street/

OwlofMinerva says:

Thanks for the link. As I indicated, I do not subscribe to the view that polling is an “efficient market” and that the polls therefore follow a random walk. In fact, I do believe that there are “fundamentals” and that the campaign is largely about bringing out those fundamentals. I have been saying all year that the fundamentals were “Obama by four”, and therefore my intuitive view of the race is quite in line with your model.
What I am questioning is the ability of political scientists to quantify these fundamentals using simple regression models based on extremely limited historical data. I don’t really disagree with your projections so much as the justification for them. IMO the fundamentals of this race have a lot to do with Mitt Romney and growing class tension.

Drew says:

Oh, well then yes, I COMPLETELY agree. I use the Abramowitz model because it’s simple, has (I think) the best track record, and, basically, my model has to start somewhere.

Pat says:

I was looking at the 2 history graphs (EV estimator and Meta-Margin) and it’s interesting to note how they evolve in a similar fashion (which is of course logical). However, we can also note differences.
For example, the meta-margin is now at the same level as it was in the first half of August (about 5%), yet the EV estimator is now much higher than it was in early August.
One graph that I would love to see would be a parametric plot of the EV estimator vs. the Meta-Margin. It would probably look a little messy if we plotted a solid line with time as the parameter; otherwise, a scatter plot would be enough.
Sam, is it possible to see such a graph? If not, I would love to make it myself. Among all the detailed files you provide in your “for geeks” link, I couldn’t find the date-by-date history of the meta-margin and EV estimator. Did I miss something, or is it not available?

Sam Wang says:

That is a good thing to do – in fact I do it as part of making predictions. Go get EV_estimate_history.csv, linked at the Geek’s Guide. Column 2 and the last column contain what you want.
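A sketch of the recipe is below. The column assignments (column 2 for the EV estimator, last column for the Meta-Margin) are assumptions based on the description above, and any header line is simply skipped; adjust as needed.

```python
# Sketch of the Meta-Margin vs. EV estimator scatter plot, with a linear fit.
import csv
import numpy as np
import matplotlib.pyplot as plt

ev, mm = [], []
with open("EV_estimate_history.csv") as f:
    for row in csv.reader(f):
        try:
            ev.append(float(row[1]))     # column 2: median EV estimator (assumed)
            mm.append(float(row[-1]))    # last column: Meta-Margin (assumed)
        except (ValueError, IndexError):
            continue                     # skip any header or malformed line

slope, intercept = np.polyfit(mm, ev, 1) # least-squares line: EV = slope*MM + intercept
print(f"{slope:.1f} EV per point of Meta-Margin; {intercept:.0f} EV at MM = 0")

plt.scatter(mm, ev, s=10)
xs = sorted(mm)
plt.plot(xs, [slope * x + intercept for x in xs])
plt.xlabel("Meta-Margin (%)")
plt.ylabel("Median EV estimator")
plt.show()
```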

Pat says:

It gives an interesting plot indeed! When fitting the scatter plot with a line, the fit goes almost exactly through the point (270 EV; 0% meta-margin) – which is in fact not too surprising – with a slope of about 13 extra EV for each percentage point of Meta-Margin.
There are some deviations from this trend of 13 additional EV above 270 for each 1% of meta-margin. Yesterday’s deviation is particularly striking: at a 4.66% meta-margin, the trend would give an estimator of about 330 EV, but instead it was 347.
It would definitely be interesting to see what would happen if the meta-margin kept increasing, and whether we would observe a plateau reflecting the fact that the next few states (AZ, MO, IN) are harder to reach.

Ebenezer Scrooge says:

I can think of one reason why Obama might break through 347–admittedly only by one vote. AFAIK, you view Nebraska as a unitary state, but it isn’t. The Omaha area is one potential EV for Obama, as it was in 2008. The Obama campaign worked hard for that EV in 2008, although I have no idea if they’re doing so now.

steve in Colorado says:

They remapped that one Nebraska district in 2010 because, according to Nate Silver, they didn’t want Obama to win it again in 2012.

Amitabh Lath says:

Linzer has error bars!
But look at the top left plots. The mean values (dark lines) are basically touching the top (bottom) of his error band for Obama (Romney).
Can that be right? Can all his uncertainty be on one side of the expected line? The state error bars are all symmetric. I smell a bug.
And also, on the Median EV estimator, the entire 95% CL (gray) has been above 270 for a while now. However, the extrapolation to Nov. 6 still has the lower edge of its error bars (yellow bands) going below 270. How come?

Drew says:

Amitabh: Glad you like the error bars! Yes, the EV distribution has gotten very left-skewed in the past few days. You can see it in Sam’s histogram too. The reason is the big gap between NC and IN that Sam highlighted in his post.
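If you want to see the mechanism, here is a toy version with made-up win probabilities (not the actual model output): convolving independent state outcomes, with one 15-EV state playing the role of NC just below a big gap, gives exactly this kind of left skew.

```python
# Toy EV distribution from made-up win probabilities (not the model's numbers).
import numpy as np

states = [(227, 0.99),   # safe-state bloc, lumped together for the toy
          (29, 0.90), (18, 0.85), (29, 0.80), (13, 0.80),
          (10, 0.75), (6, 0.70), (15, 0.55)]   # last entry plays the role of NC

dist = np.array([1.0])                  # P(exactly k EV so far), starting at k = 0
for ev, p in states:
    new = np.zeros(len(dist) + ev)
    new[:len(dist)] += dist * (1 - p)   # lose the state: EV count unchanged
    new[ev:] += dist * p                # win the state: shift up by its EV
    dist = new

evs = np.arange(len(dist))
print("mode:", int(evs[dist.argmax()]), "EV;  mean:", round(float((evs * dist).sum())))
# The mode sits at the 347 maximum, the mean well below it: a left-skewed
# distribution, because nothing reachable lies beyond the gap.
```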

Jacob Hartog says:

I’m trying to think about how much a) using a Bayesian fundamentals-based prior combined with polling data differs from b) a 538-style OLS model that is estimated off of both fundamentals and polling data simultaneously. I mean, epistemologically (a) makes more sense, but in practical terms I’m not sure how much difference it makes.
My limited experience with Bayesian models is that they tend to produce estimates that are smaller in magnitude but lower in uncertainty than classical estimates, because the presence of the prior tends to take extreme values of data with a grain of (Gaussian distributed) salt. Since a similar grain of salt is added by taking the median of state polls rather than their average, it seems like you would only want to do one or the other (median-filtering or Bayesian estimates), and not both. Thoughts?

wheelers cat says:

I’m gradually becoming convinced this election season that Bayesian models are losing their efficacy because of the increased speed of memetic transmission and memetic evolution, due to social media and internet access.
For example, the 47% gaffe seems to have extended Obama’s convention bounce.
If you follow Twitter, there is a Romney gaffe trending every day at this point. Nate Silver and others made the point that a priori 90% of gaffes have no effect, but I think that has changed.
If you look at RAND, it actually seems gaffes have the potential to change people’s minds… that is, the minds of people with internet access, like all the respondents in the RAND survey.
And… I confess… I’m an otaku of robust statistics. Asymmetry and randomness are two of my favorite things.

THE says:

Bayesian Nonparametric Estimation of the Median; Part I: Computation of the Estimates
Ann. Statist. Volume 13, Number 4 (1985), 1432-1444.

Sam Wang says:

That is a most gnomic utterance, THE.
Jacob, my take in English: Median-based statistics confer resistance to outliers, but at the cost of slightly reduced precision in estimating the true average. You are correct that having a Bayesian prior could take the place of the median. The challenge is choosing the prior wisely.
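A quick illustration of that trade-off, with hypothetical poll margins:

```python
# Hypothetical poll margins for one state, plus one badly off outlier.
from statistics import mean, median

margins = [4, 5, 5, 6, 4]
with_outlier = margins + [-8]

print(mean(margins), median(margins))            # 4.8    5
print(mean(with_outlier), median(with_outlier))  # ~2.67  4.5
# One outlier drags the mean by two points; the median barely moves.
```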
My take is that a Bayesian approach adds little value to the Presidential race at this point. An indifferent prior is fine. However, Reverend Bayes’s followers could be of tremendous help in House races, at present a challenging missing-data problem.

Drew says:

Thanks to Sam for the link, and to everyone in the comments. Maybe I can try to go back and reply to some of these questions.

JaredL says:

I’ve thought about doing a Bayesian model with a prior based on past elections. It seems like if you went over past gubernatorial, presidential and senate races you could get some decent estimates for the mean and standard deviation of the vote there, taking into account incumbency and shifts over time. There’d be some issues like how far back to go and, most importantly, it would take a ton of work, but it seems like a way to come up with priors that aren’t completely arbitrary.
Any thoughts on this approach? I’ve never built a presidential-election model so I’m curious what more experienced geeks think.
