I hear that the Princeton Election Consortium calculation has come under criticism for being statistically overconfident. I think there is confusion here, which requires a little explanation – and an appreciation for what I’ve learned since I started doing this in 2004. Basically, after 2012, our predictive calculation started to build in Election Day uncertainty. By conflating a 2010 snapshot with the 2012/2014 predictive model, Nate Silver has made a factual error.
The key difference is between the snapshot and the prediction. Our snapshots are precise because they give a picture of conditions today. Our November prediction builds in the possibility of change over the coming seven weeks. Thus the November prediction above (today at 70%) will usually be less certain than the snapshot (today at 80%) until we get close to Election Day. As a reminder, the predictive model is documented and is open-source.
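The snapshot-versus-prediction distinction can be seen in a toy calculation. The sketch below is not the Princeton Election Consortium model; it is a minimal illustration under assumed numbers (a +1.0-point margin, ±1.2 points of polling error, ±1.5 points of possible drift before Election Day) chosen to show how adding drift uncertainty widens the distribution and pulls a probability toward 50%.

```python
from statistics import NormalDist

# Hypothetical numbers for illustration only: the snapshot says the
# decisive margin is +1.0 point with +/-1.2 points of polling error.
snapshot_margin = 1.0
polling_sd = 1.2

# Snapshot probability: chance the margin is above zero today.
snapshot_prob = 1 - NormalDist(snapshot_margin, polling_sd).cdf(0)

# Prediction: add assumed drift uncertainty (+/-1.5 points) for
# opinion movement between now and Election Day. Independent errors
# add in quadrature, widening the distribution.
drift_sd = 1.5
total_sd = (polling_sd**2 + drift_sd**2) ** 0.5
prediction_prob = 1 - NormalDist(snapshot_margin, total_sd).cdf(0)

print(round(snapshot_prob, 2))    # about 0.80: the snapshot
print(round(prediction_prob, 2))  # about 0.70: the prediction
```

With these assumed inputs, the snapshot comes out near 80% and the prediction near 70%: the same ordering as the numbers above, and the reason a prediction that allows for future change is always the less confident of the two.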
I explained this in 2012. As an example, when our current prediction method is applied to past Presidential races, it gives a cliffhanger in 2004, and clear Obama wins in 2008 and 2012. A polls-only approach suggests that this year, Senate control is also a cliffhanger, with a slight advantage for Democrats+Independents.
I’m sure there are more points I have missed. Have at it in comments. Please be nice about everyone, including any rival sites. Nonsubstantive and rude comments will be moderated.
(Note: while we were down, my response was here.)