Feeding Karl Rove a bug
Today’s PEC news clips: USA Today, the Philadelphia Inquirer, the LA Times, Atlantic Monthly, and the Daily Princetonian.
Mother Nature is our best teacher and the only one who is always right.
– Viktor Hamburger, biologist
In yesterday’s Los Angeles Times, several prognosticators (including me) commented on why we were right or wrong. On the wrong side are Colorado researchers Bickers and Berry, who thought that Romney would get 330 electoral votes. Now, they offer a convoluted explanation culminating in something about Hurricane Sandy. Their wrong answer reveals a persistent false idea that is circulating in some circles.
All this election season, I have pointed out that a transparent statistical snapshot of state polls gives the clearest look at the ups and downs of the Presidential race. Nonetheless, pundits, especially on the right, asserted that pollsters as a whole were biased – even attempting to “unskew” the bias. However, the bias in state polls turned out to be close to zero – certainly less than 1.0%, based on Florida. For pollsters in 2012, the state-level problem was, for practical purposes, a solved problem.
However, the same cannot be said of national polls. In final surveys, the national average* Presidential margin was Obama +0.31 +/- 0.37% (mean +/- SEM). The actual margin is currently Obama +2.74%. Therefore on average, national polls were biased by 2.4 +/- 0.4% toward Mitt Romney.
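For concreteness, here is a minimal sketch of that arithmetic in Python. The list of poll margins is made up for illustration; only the +2.74% actual margin and the form of the calculation come from the text above.

```python
import numpy as np

# Hypothetical final national poll margins (Obama minus Romney, in points).
# These are placeholders, not the actual survey list behind the +0.31 figure.
final_polls = np.array([1.0, 0.0, -1.0, 1.0, 3.0, 0.5, -1.0, 0.0, 1.0, 0.5])

mean_margin = final_polls.mean()
sem = final_polls.std(ddof=1) / np.sqrt(len(final_polls))
actual_margin = 2.74                    # national popular-vote count at the time of the post

bias = mean_margin - actual_margin      # negative => polls leaned toward Romney
print(f"poll average: {mean_margin:+.2f} +/- {sem:.2f} (SEM)")
print(f"estimated bias: {bias:+.2f} points")
# Subtracting this bias from earlier national poll averages gives the
# "unskewed" series described in the next paragraph.
```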
With this number in hand, it is now possible to perform the ultimate unskewing – bringing polls into alignment with reality by adjusting them toward Obama. Here, then, is the “first draft of electoral history” at the national level.
From this graph, three facts are evident:
So Bickers and Berry’s claim that “the president clearly benefited from the ‘October surprise’ of Superstorm Sandy” is unsupported by data. I am developing doubts about their analytical neutrality. A second piece of evidence is that most econometric models pointed in the opposite direction to theirs. A prominent example is the prior that informed Drew Linzer’s analysis at Votamatic.com.
I do want to correct one error in the LA Times piece – the cartoon. It is not true that “Democratic” math somehow beat “Republican” math. Instead, what we saw in this race was a triumph for a profession whose job it is to measure public opinion: pollsters. As a community, they did very well at surveying state opinion – as they have since 2000 in my memory, and perhaps earlier.
There is one remaining finding that puzzles me. Although post-debate-#1 national poll averages did not move much in October, state polls did – toward President Obama. There is some unexplained discrepancy. Were swing-state voters more malleable by messaging and advertisements? Were they more attentive to the race? Did these swings occur at the national level too, but polls failed to pick it up? Hmmm.
*This time I’m using the average. The median Election Eve poll margin was Obama +0.0 +/- 0.95% – even worse performance.
As Silver noted today, the swing state voters were indeed more attentive.
I am glad to note (once again) that this election (1) proved the quants right and (2) ensured that Romney will not be able to take credit (or blame – nah) for what happens in the next four years.
Do I feel sorry for Eric Hartsburg? Nope. R&R? Nope. They all deserved what they got.
http://disappearingromney.com/
Tapen
Question on Median vs. Mean: isn’t +0.0 +/- 0.95% technically more accurate than +0.31 +/- 0.37% when accounting for uncertainty?
Also, I’d be interested in a correlation of National Polls vs. Meta-Margin. Was the difference in movement described in the last week present throughout the campaign?
Also, given that the national polls were off by 2.4% in the end, while the state polls were not, what does that say about the quantitative possibility of aggregators failing in the future?
Medians are useful when you think there are outliers. The penalty is that you lose a bit of precision in the form of increased SEM. In any event, the absolute error, not the Z-score, is the guide for future pollster actions.
In regard to national vs. state polls, this discrepancy is nothing new. I wrote about it before the election – it’s been like that since 2000. There is no change this year. I don’t see a reason to doubt the future accuracy of aggregation based on this information.
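(A small numerical illustration of the median-vs-mean trade-off described above, with made-up poll margins; the bootstrap is just one convenient way to estimate the median’s SEM.)

```python
import numpy as np

rng = np.random.default_rng(0)
polls = rng.normal(loc=0.3, scale=2.0, size=20)    # hypothetical poll margins

sem_mean = polls.std(ddof=1) / np.sqrt(len(polls))

# Bootstrap standard error of the median (no simple closed form).
boot_medians = [np.median(rng.choice(polls, size=len(polls), replace=True))
                for _ in range(5000)]
sem_median = np.std(boot_medians, ddof=1)

print(f"mean   {polls.mean():+.2f} +/- {sem_mean:.2f}")
print(f"median {np.median(polls):+.2f} +/- {sem_median:.2f}")
# For roughly normal data the median's SEM runs about 1.25x the mean's SEM:
# the precision penalty paid for robustness against outliers.
```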
So basically, if Obama had more coffee before the first debate, the unskewed polls suggest that he would have gone on to win the popular vote by a six point margin.
Honestly, I think Mr. Obama was simply floored by Romney’s run to the middle. What does one say when the other guy is suddenly trampling all your talking points?
He should have attacked the flip-flop at that point. Oh, well.
He should also have done a better job explaining why his “716 billion dollar Medicare cut” is not a big deal at all, and why Mitt Romney’s 20% across the board tax cut is. Instead, he kept repeating rhetoric that made it sound like he was arguing that Romney was going to give tax breaks to the wealthy, which is not what Romney’s plan would have done. If Obama had simply watched Bill Clinton’s DNC speech and repeated all of its talking points, he would have won the debate and the popular vote by six points instead of three.
This is just speculation, but I believe that 3% give or take was Obama’s natural meta-margin, and that the 47% comments accelerated and artificially extended the DNC bounce. The MM was ripe to drop given where the race had previously been throughout the late summer and early fall – if that video had never come to light but the debate had gone on just the same, the drop wouldn’t have been as precipitous.
Weren’t the polls already correcting before the first debate? I thought Obama’s convention bounce was fading and the debate accelerated it.
No, I don’t think that is true. See older posts on the subject.
Apologies – It appears that the post by Professor Wang that I was responding to is no longer posted in the comments here.
It is not all that obvious to me what Obama could’ve done differently in the first debate. When your opponent shamelessly disregards social norms it is very difficult to respond. You can’t just enter into tit-for-tat with a persuasive liar. Obama was doubly constrained because he could not risk being seen as condescending or an angry black man.
It is true that he was somewhat condescending in the subsequent debates, but the first debate helped lay the foundation that Romney had it coming. And, of course, Romney played right into Obama’s counterattack with his Benghazi taunts.
POTUS was caught out in the first debate by the simple expedient of Dr Jekyll assuming the persona of Mr Hyde for the evening. Having learnt how Romney’s flip-flopping extended to his own personality, Obama’s team made damned sure that the same mistake was not repeated.
Is it valid to assume that the final 2.4% skew in national polling was consistent throughout the earlier time period? I realize the popular vote meta-margin would suggest this, but are other explanations possible?
ps–fantastic site–I got addicted to PEC in August and got my wife hooked in October. Even my kids were asking to see the funny looking red-and-blue map
Good point, though we only get one true anchoring point. The various indices appeared to be well-behaved into the home stretch. For example, the Meta-Margin basically oscillated around Obama +3.0%.
A fancier approach would involve drilling into registered vs. likely voters, that kind of thing.
Please take this in the spirit of healthy academic argumentation. The tables are easily turned on your argument. Win-probabilities do not get cited for legitimacy, and whether the win-probability is accurate at 55% or at 90%, EVs are assigned the same way if it is a win. By that argument, win probabilities are not of interest in any governing scenario.
Vote margins are actually counted, and past and projected vote margins are cited for many purposes by analysts and strategists who work with campaigns.
I’m open to hearing one, but I have yet to hear a good argument as to why win-probabilities should be considered more important than vote margins in forecasting. I may be incorrect, but no other forecaster seems to suggest that they should be.
Win-probabilities may be the primary focus of your model, but there does not appear to be any compelling reason why all other forecasters should share the same primary focus.
Bickers and Berry. The poster boys for why Nate Silver needs to keep his economic finger off of the scale in his model. The economic factors may drop out by the end, but they may actually be misleading while they are still in there. Likely voters are probably already paying attention to this stuff, so it is likely double counting.
I wouldn’t call it a finger on the scale, neither in the case of 538, nor in the case of Votamatic. You may question the usefulness of all this data (though in sparsely polled races, I think everybody agrees they are useful), or the weight it should be given, once there is sufficient polling data. But to imply that these data are inserted selectively to attain the desired final result (which is what a finger on the scale means) is both unfair and inaccurate.
I agree with Some Body. The question is not simply whether you use economic variables, but how you use them. B&B used them extremely poorly. 538 and Linzer used them extremely well.
As I recall, Bickers and Berry claimed that their model was 100% accurate backdating for X number of elections, but it still missed spectacularly this cycle. I can’t imagine that Nate’s weighting is somehow much better, since he would be using the same historical data to build his model. Given that, why do you think “though in sparsely polled races, I think everybody agrees they are useful”? This metric has been discredited this cycle. I don’t want to quibble about WHY Nate threw the weight in. Obviously he decided, it would seem incorrectly, that it was a needed balance. To me, that is a finger on the scale to meet expectations, which would skew the projections it was included in.

There is one lesson I carry to this day from PSYCH 101. Humans have an innate need to know things. When information is lacking, we aren’t averse to filling in the gaps in our thinking. This is the danger when you start throwing weights in, and probably one of the reasons Sam likes to stick with the numbers the pollsters feed him. That was over 35 years ago, though. Is there new thinking on that subject now, or am I filling in the gaps of things forgotten over 35 years? 😉
@Refugee — I’m afraid you have a serious fallacy in your argument. One clearly overfitted model using economic factors (B&B) hardly discredits economic factors in general as predictors. There’s lots of literature showing economic factors do have predictive power for election results (not, of course, as much power as a good number of recent polls), and people like Silver and Linzer, who used such factors carefully, taking the necessary precautions to avoid overfitting and related problems, were by no means putting their fingers on the scale.
Oh, and one more point (not too relevant to 538; more so to Votamatic): if you’re doing political science, your job is not only to predict the outcome, but also to explain it. If Obama leads the polls in the final week of the campaign, that’s a very strong predictor of his eventual victory, but it is absolutely worthless as an explanation. A well-designed economic model may be a mediocre predictor, or even a lousy one, but it does offer an explanation, which is then confirmed or refuted (and most likely – partially confirmed and partially refuted) by the eventual results.
I disagree that economic fundamentals shouldn’t be included: it is Drew Linzer whose electoral forecast was the most accurate over the long term, and his model uses the Time for Change model to derive its prior distribution. I think the issue with Berry/Bickers is that their model is poorly specified.
The B&B model only went back to 1980, which is a remarkably small sample size of election results. (Some models go back to the early 20th century, most to the immediate postwar era.) It was grossly overfitted.
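To make the overfitting point concrete, here is a toy sketch: with only eight “elections” and six noise predictors, ordinary least squares fits the past almost perfectly and still says nothing about a new case. The numbers are random; this is not the B&B specification.

```python
import numpy as np

rng = np.random.default_rng(1)

# Eight "past elections" and six economic predictors that are pure noise:
# nearly as many parameters as data points.
X_train = rng.normal(size=(8, 6))
y_train = rng.normal(loc=51, scale=3, size=8)      # incumbent-party vote share

A = np.column_stack([np.ones(8), X_train])          # add an intercept
coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)

in_sample = A @ coef
print("in-sample RMSE :", np.sqrt(np.mean((in_sample - y_train) ** 2)))

# A genuinely new "election": same coefficients, fresh data.
x_new = rng.normal(size=6)
y_new = rng.normal(loc=51, scale=3)
prediction = coef[0] + x_new @ coef[1:]
print("out-of-sample error:", abs(prediction - y_new))
```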
One last PS on the finger: http://www.acthomas.ca/comment/2012/11/538s-uncertainty-estimates-are-as-good-as-they-get.html
You can argue with Thomas, of course, and the Senate results had more problems, but a finger on the scale? Come on!
And restated with a bit less jargon by De Long here: http://delong.typepad.com/sdj/2012/11/538s-state-presidential-vote-estimates-are-as-good-as-they-can-get.html
I’m much less statistically knowledgeable than many others here. Can Some Body, or anybody, help explain Thomas and DeLong:
“Consider instead a related question: how close were the vote shares in each state to the prediction, as a function of the margin of error? The simplest way to check this is to calculate a p-value…”
Does the p-value reflect how well a state vote-share prediction captured the actual result within the prediction’s expected margin of error, using a normal distribution across that margin?
“Media coverage suggested that Nate Silver’s intervals were too conservative; if this were the case, we would expect a higher concentration of p-values around 50%. (If too anti-conservative, the p-values would be more extreme, towards 0 or 1.) …. The values [for Votamatic] are pushed towards zero and one, so the confidence intervals are far too tight: the Votamatic predictions turned out to be too overly precise.”
Does this mean that the margins of error given by Votamatic were too small/narrow?
I’m not that knowledgeable myself, but De Long, at least, was very clear and not too technical: Silver had close to 80% of the actual state margins within the 80% confidence interval of the predicted ones. I.e., the uncertainty for presidential state forecasts was accurately stated.
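Here is a sketch of the calibration check Thomas and De Long describe, using hypothetical forecasts: for each state, ask where the actual margin falls in the forecast’s assumed normal error distribution. Well-calibrated intervals give roughly uniform p-values; intervals that are too tight pile them up near 0 and 1.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical forecasts: predicted margin, stated standard error, actual margin.
predicted = np.array([ 3.0, -1.5,  7.0,  0.5, -4.0])
stderr    = np.array([ 2.0,  2.5,  3.0,  2.0,  2.5])
actual    = np.array([ 5.4, -0.9,  6.2,  3.1, -5.3])

# p-value: probability mass below the actual outcome under the forecast.
p = norm.cdf((actual - predicted) / stderr)
print("p-values:", np.round(p, 2))

# Coverage of the central 80% interval (De Long's check on the 538 numbers).
coverage_80 = np.mean((p > 0.10) & (p < 0.90))
print("fraction of states inside the 80% interval:", coverage_80)
```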
That’s not bad, but state-by-state vote margins are not of interest in any governing scenario. They don’t get cited for legitimacy, and the EV are assigned the same way whether the win is 1% or 10%. However, it is good that he finally did his confidence margins well in one instance.
My own view is that this is not an important prediction compared with win probabilities. If you look at Nate’s summed win probabilities, they should have added up to 50.5 but were substantially less. A failure there, though in a direction that people presumably won’t ding him on.
Parsimony.
Thanks all. I appreciate your comments. If a model fitted 100% to elections going back only to 1980 rests on too small a sample, may I point out that poll aggregation has an even shorter track record? Nate eventually drops the weight, so his end result isn’t affected. I don’t know Votamatic’s mechanism, so I can’t comment on that. My point stands, though. Once you have polling, you have real data to work from, and those polled are already affected by the economic factors. I also don’t believe that 538’s and Votamatic’s models will be substantially different from one accurate back to 1980. They just weight it less instead of 100%, and every election some historical predictor bites the dust.
http://www.explainxkcd.com/wiki/index.php?title=1122:_Electoral_Precedent
We are now told that the Republicans totally blew their get out the vote effort. We don’t know how that might have changed things. Could they have flipped Florida? Closed the margin in other states? I have to believe NO model accounted for this yet it undoubtedly had an impact.
I am puzzled why you say state polls were right on. Isn’t it the case that there was a systemic pro-Romney bias for poll medians across the swing states, but that it just did not happen to affect outcomes?
I have two questions here: One about the possible role of turnout/GOTV and the other about what you said about the margin.
Let’s start with GOTV. Since the polls we’re talking about are (almost) all likely voter polls, they include a component of assessing who’s going to turn out and who’s not (duh). Now, this seems to complicate the “unskewing” process a bit, because your chart does not set changes in opinion apart from changes in the eventual makeup of the electorate. Actually, I’m not sure how to start tackling this methodologically (eventually showing up at the polling place is not the same as intention to vote or likelihood of voting, so not sure it can in fact simply be projected back on all previous margins).
And about the margin – here I’m simply having difficulty understanding how it works. You said the state polls missed by less than 1%, based on Florida (where Obama leads by 0.9%). Of course, they missed the margin by significantly more than that in some other states, including closely contested ones (NH, IA, WI, CO). Are you saying the difference in margin only counts if it flips the state? If so, I don’t really understand why it should be so.
Dr. Wang, along with badni and Some Body, I do not understand how you can say “the bias in state polls turned out to be close to 0”. From your Power of Your Vote list there are some big misses: NM, MI, NH, IA, and NJ.
This is not the right way to think about whether there is overall bias. The point is the mean overall bias among pollsters. In 50 out of 51 races, polls correctly predicted the direction of the outcome. The remaining race, Florida, was a polling tie with an SEM of about 0.8%, which is within the error bar. This performance matches 2008 and 2004.
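A minimal sketch of the distinction in the reply above, with made-up state numbers: the mean signed error measures bias, while calling the winner is a separate tally.

```python
import numpy as np

# Hypothetical final poll medians and actual margins (Dem minus Rep, points)
# for a few states -- not the real 2012 values.
poll_median = np.array([ 2.0,  0.0,  5.5, -7.0,  1.5, 12.0])
actual      = np.array([ 3.0,  0.9,  6.8, -6.5,  2.0, 13.5])

signed_error = poll_median - actual            # negative => leaned Republican
bias = signed_error.mean()
sem  = signed_error.std(ddof=1) / np.sqrt(len(signed_error))

correct_calls = np.sum(np.sign(poll_median) == np.sign(actual))
print(f"mean bias: {bias:+.2f} +/- {sem:.2f} points")
print(f"winner called correctly in {correct_calls} of {len(actual)} states")
# The state with a polled tie (0.0) counts as an uncalled race,
# much like Florida in the post.
```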
But then why not say the national polls were perfectly correct (getting, in the aggregate, the winner correctly, while missing the popular vote margin)? I really, sincerely, don’t understand.
Linzer writes: “Interestingly, in most of the battleground states, Obama did indeed outperform the polls; suggesting that a subset of the surveys in those states were tilted in Romney’s favor, just as I’d suspected. Across all 50 states, however, the polls were extremely accurate. The average difference between the actual state vote outcomes and the final predictions of my model was a miniscule 0.03% towards Obama.”
Regarding his suspicion that certain polls were tilted in Romney’s favor in battleground states, with the result that Obama outperformed the aggregated polls, Linzer links to this post on his site:
http://votamatic.org/another-look-at-survey-bias/
One small PS: Does the national poll trendline account for the pollsters (notably including Gallup) that stopped polling because of Sandy? If not, the one-point bump may be an artefact.
“The actual margin is currently Obama +2.74%.”
Dr Wang, as your phrasing implies, they are still counting. How many provisional votes were cast, I wonder? To my astonishment I cannot find this number anywhere!
Does anyone here have the numbers, preferably a state-by-state list?
I am very curious about the final tally, about the number of provisional votes, and how the provisional votes finally break.
If this break should be strongly skewed towards President Obama (i.e., significantly more so than the current national split), does this not hint at systematic pressure to have Democratic-leaning demographics cast provisional votes? Or that far more Democrats than Republicans have been “removed” from the voter rolls?
Would that be unreasonable to infer, if there is a strong imbalance in the break? And if so, should this be a cause for concern?
I have found this spreadsheet, which is apparently updated more regularly than major news networks, tracking the SoS websites.
I know this is not exactly what you are looking for, but it gives a good idea of where most votes are missing compared to 2008:
https://docs.google.com/spreadsheet/ccc?key=0At91c3wX1Wu5dFp2dUlkNWlJeGN5NFUxa0F3cXpoLXc#gid=0
Thank you, Dr. Wang. Love the site.
But could you say more about R&R’s dubious analytical neutrality? Are they self-deceiving? Partisans cloaked in “expert” clothes? Something else?
It has seemed clear from the start that they have “hack”-ish tendencies. (Nate pointed this out, brusquely, after R&R’s first prediction in August.) I’d love to know what you think the story is.
Thank you again for the terrific service you provide here.
I think the Hurricane Sandy excuse tells you most of what you need to know.
Post-election, when interviewed on Colorado Public Radio for the program “Colorado Matters”, one of the R&R duo tried to explain their wrong prognosis by saying President Obama practically defied gravity by getting re-elected in spite of worsening economic conditions. That told me he was a partisan hack, since the economy is no longer worsening but has shown considerable improvement in the last couple of years – a fact the esteemed professor completely overlooked and misstated. It also sounded like he had skin in the election outcome and was rooting for a Romney win so that his model would not fail as miserably as it did. He sounded really disappointed in the election outcome and could only offer unscientific analogies for the failure of his model.
Thanks for the analysis, Sam. It’s neat to see the errors leading to various results. I wish more folks would be as honest as you are about your methods and results.
One question: Is it worth determining the bias of each polling house and adjusting accordingly or will that introduce errors that are greater than the potential gain in accuracy?
My feeling is that such corrections could help with state-by-state margins, that kind of thing. But it didn’t affect the EV or MM calculation – unless one is concerned with the confidence interval, a legitimate concern.
However, corrections also invite a political hit of the type we saw this season. As a practical matter, I think there is a strong role for no-corrections calculations.
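For readers curious what such a correction might look like, here is one simple house-effect sketch: each pollster’s average deviation from the overall average is estimated and subtracted. The pollster names and numbers are invented; this is not the PEC pipeline, which (as noted above) runs without such corrections.

```python
import numpy as np

# Hypothetical margins (Dem minus Rep) from repeated polls of the same race.
polls = {
    "Pollster A": [ 1.0,  2.0,  1.5],
    "Pollster B": [-2.0, -1.0, -1.5],
    "Pollster C": [ 0.5,  1.0,  0.0],
}

overall = np.mean([m for margins in polls.values() for m in margins])

# House effect: a pollster's mean deviation from the overall average.
house = {name: np.mean(margins) - overall for name, margins in polls.items()}

# Margins with the house effect removed.
for name, margins in polls.items():
    adjusted = [round(m - house[name], 2) for m in margins]
    print(f"{name}: house effect {house[name]:+.2f}, adjusted {adjusted}")
```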
Some Body: measurements of a system (polls in this case) can miss by large margins (have large error) but still have no bias. Bias occurs when you consistently miss to one side.
Actually, I am surprised that there was so *little* bias. Everyone seems surprised by the youth and minority turnout (and depressed caucasian).
Which means everyone’s likely voter screen was wrong (too tight).
So everyone should have had a bias, +R.
So the fact that they did not means there was some other (+D) bias that canceled it out. Two wrongs made a left.
😀
I know the difference between error and bias. The typical error for non-competitive (and sparsely polled) states was to underestimate the margin of victory for the winner (Sam showed it in an earlier post). So, if you include all the uncompetitive states, you get some error, little bias. But when you look at the group of the closely contested (and densely polled) states, there’s a good argument to claim there was some Republican bias, not just error (maybe if you take Rasmussen, Gravis and the rest of the “we’re gonna have an electorate like 2004” crowd out, it would disappear; this is what Linzer suggests as an explanation in his most recent post).
Anyway, I’m still trying to understand Sam’s point in his reply. Maybe it’s just the point at which the quant reasoning goes beyond my humanities-trained understanding 😉
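One way to see the point in the preceding comment is a quick subgroup calculation: overall bias can be near zero while the battleground subset still tilts one way. The signed errors below are invented for illustration.

```python
import numpy as np

# Hypothetical signed poll errors (poll median minus actual margin, points).
# Negative values mean the polls understated the Democratic margin.
battleground = np.array([-1.5, -2.0, -1.0, -2.5, -1.8])      # close, densely polled
safe_states  = np.array([ 1.0,  2.0,  1.5,  2.5,  0.5,
                           1.8,  2.2,  0.8,  1.2,  1.5])      # blowouts, sparsely polled

all_states = np.concatenate([battleground, safe_states])

print(f"overall bias:      {all_states.mean():+.2f} points")
print(f"battleground bias: {battleground.mean():+.2f} points")
print(f"safe-state bias:   {safe_states.mean():+.2f} points")
# A small overall average can hide a Republican-leaning miss in the close
# states offset by the opposite miss in the uncompetitive ones.
```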
Thank you again, Sam and Andrew, for all your work. I am completely unskewed till 2014. And a final reminder. Contact your US Senator about reforming the filibuster. Do. It. Now.
And let’s all call attention to the need to preserve section 5 of the Voting Rights Act, although under this court I’m not very optimistic.
Looks like turnout was slightly *UP* vs. 2008 in most of the swing states, while turnout was mostly down significantly vs. 2008 elsewhere, especially in the Northeast (Sandy effect?)
The result is that swing state voters — roughly 25% of the population — may have cast 30% of the votes.
Do the national tracking polls apply the same likely voter screen nationwide for all surveyed voters, or do they employ different screens for voters in different states? Is it possible that the “likely voters” identified in the national polls over-sampled the non-competitive states and/or under-sampled the swing states? Could this explain some of the bias in the national polls?
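The arithmetic behind that question can be sketched directly: weight hypothetical bloc margins by the actual turnout shares versus the shares a likely-voter screen implies. All numbers here are invented; with only a modest difference between bloc margins, the resulting shift is small.

```python
# Dem-minus-Rep margin in each bloc of states (hypothetical, in points).
swing_margin, other_margin = 3.5, 2.5

# Turnout shares: actual vs. what a national likely-voter screen might assume.
actual_share  = {"swing": 0.30, "other": 0.70}
sampled_share = {"swing": 0.25, "other": 0.75}

def national_margin(shares):
    return shares["swing"] * swing_margin + shares["other"] * other_margin

shift = national_margin(actual_share) - national_margin(sampled_share)
print(f"with actual turnout shares : {national_margin(actual_share):+.2f}")
print(f"with sampled shares        : {national_margin(sampled_share):+.2f}")
print(f"shift from mis-weighting   : {shift:+.2f}")
```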
Bickers and Berry are tenured full professors, so they can get away with spouting whatever nonsense they want. But Drew Linzer is still just a lowly assistant professor, so he’d better get things right if he wants to keep his job and get promoted some day.
I’m kidding, of course. Well, mostly. Or at least a little.
Here is a link for California’s 3.3 million uncounted ballots:
http://www.sos.ca.gov/elections/2012-elections/nov-general/pdf/unprocessed-ballots-report.pdf
Along with WA and OR, there are millions of votes left to count in non-battleground blue states. Obama’s margin of victory will be well over 3%, and the gap between battleground and non-battleground turnout will narrow.
It is best to wait until all votes are counted before determining whose poll/model did best – or at least to project the uncounted states.
That website is dated Nov 9, but even now on Nov 12 there are still 200,000 votes to be counted here in San Diego County and a House seat hanging on them. (Peters’ (D) lead over Bilbray (R) is gradually increasing.) Arizona also has a large number of votes still to be counted, and they also are leaning Democratic. It would be fun if Arizona ended up looking kind of purple.
Dr. Wang, i am not sure people who believe in using economic variables to explain election results are worthy of your energy.
One forecast that has gotten very little attention is the U of illinois project
http://electionanalytics.cs.illinois.edu/
They in fact had even higher probabilities than you did. I think they might have been too confident (but then again they were correct). i would be interested in comparing your approaches.
Sam, is it fair to say that the U of Illinois “electionanalytics” group has borrowed substantially from your methods without attribution?
I find it a little hard to believe — having looked at their baseline academic paper from 2009, in which you are nowhere mentioned — that by that year they were unaware of your work and the methods you developed.
About the best I can speculate for them is that you haven’t published academic papers on your work and so they didn’t see a need to cite you. That’s not how I would have played it if I were they, though.
It could be. My 2004 calculation was fairly viral among political and social scientists. That would be a shame. Citation is so cheap.
Dr. Wang,
The ideas that you are both promulgating are
1) state polls are enough
2) use Bayesian analysis.
(and my belief is that the ideas are principally yours, but do ideas really belong to anyone?)
There seem to be some differences between the two of you, as your results are different. It would be interesting to examine these. Just from an information-optimization perspective, i wonder what minimum information set we require to make the best conjecture about an election’s result.
The marketing nerds at Netflix must be closely monitoring my Internet habits, as they recently recommended that I watch “Magic Town,” a 1947 film from Frank Capra, the populist director who brought you “It’s a Wonderful Life.”
“Magic Town” centers on the American public’s then-nascent fascination with the marketing and predictive powers of public polling. Because I do whatever my computer tells me to do, I watched “Magic Town” (available instantly!), and I highly recommend it to anyone, even scientists, in need of a pleasant diversion after this grueling and seemingly interminable election season.
Synopsis: A Midwestern town goes to hell in a hand basket after its inhabitants are declared by a Gallup-like entity to comprise the perfect microcosm of U.S. opinion and beliefs. In a word: Ohio. Fun! Adding to the fun, in a meta sort of way: The movie co-stars the former Mrs. Ronald Reagan. Enjoy.
http://www.imdb.com/title/tt0039595/
Food for thought — there are fifteen states that were either part of the confederacy or claimed by it. In the other 35 states, Obama won the two party vote 55.3% to 44.7% and had an electoral vote margin of 290 to 58.
michael,
andrew sullivan pointed this out and george will called it empirically false.
http://andrewsullivan.thedailybeast.com/2012/10/the-gops-geography-and-the-confederacy.html
how much of Romney’s vote came from simple racism is difficult to discern.
First, I wasn’t really trying to argue overt racism as the explanation. Second, Will didn’t refute Sullivan’s point; he just changed the subject. But the point that Will misses is that the Democratic Party doesn’t have a white voter problem; it has a southern white voter problem. And that’s much less of a problem.
isn’t covert more problematic than overt? do you think there is more racism in the former confederate states or in the former union states (and post-bellum states)?
FYI, on your ActBlue page I think you mistakenly recommended donating to John Hernandez in the 21st District. You claimed that the district went for Obama over McCain by 11% in 2008, but it appears to have been by only 5%
http://racesandredistricting.blogspot.com/2011/12/analysis-of-californias-congressional.html
And the other California (Jose) Hernandez, in the 10th district, only saw Obama win in 2008 by 4%
Are the numbers in my link above incorrect, or what? John Hernandez got shellacked this year, by the way.
Your Senate recommendations were really good, and I donated to them. But I ended up 0-for-3 by following your House recommendations.
Being 0-for-3 isn’t necessarily a bad thing. Win or lose, you push the needle, assuming the money isn’t wasted.
Those who donated to races with the closest final outcome, win or lose, made the wisest choices.
Generally, the Senate races were picked correctly with the exception of Nevada, which deserved more emphasis.
In regard to House races, this was harder, which is why the emphasis on specific races varied from week to week. In retrospect, the DCCC (517 contributions, $31k) was a better investment than CA-21 (233 contributions, $5k). However, the efficiency of the investment was still higher than that enjoyed by donors to Karl Rove’s Crossroads GPS (though I did not track those). Generally, I stand by my recommendations. I’m not always right, but still, there are Senators Heitkamp and Tester.
The process deserves a more systematic approach. Perhaps an automated algorithm for Senate races and a general recommendation for DCCC/RNCC for House races.
…. but i bet it’s a lot
In all this data, is there any indication what effect the billions spent during the campaign had, one way or the other?
Two unrelated points:
First, there are possibly millions of yet-uncounted votes from strong Obama areas, notably the west-coast states (which are heavily or entirely vote-by-mail). It is commonly said that, in the end, the popular-vote margin may well exceed 3%, which would materially raise the error in the national-level polls.
Second, EL is entirely correct: “Contact your US Senator about reforming the filibuster. Do. It. Now.” But make it “Senators”, plural; and, if you gave any money through Dr Wang’s ActBlue page to any senatorial campaigns outside your state, contact them too (and say why).
I’m a firm believer in Science and in the professionalism of the pollsters (I wrote software for survey research in the 90’s). However I did have two bad moments: on the penultimate day Sam announced that he’d been double-counting incoming data, and then on election day, electoral-vote.com crashed.
–bks
The GOP is saying that they can raise revenue without raising taxes. I think they’re using the same math they used for their polling data.
it’s the same math Reagan used for lowering taxes and raising revenues, the laffer curve (or is it the laughter curve).
a bunch of rich people benefit so they make sure the narrative is propagated.
Thanks so much Sam.
Wow.
I must say that I was appalled that your site did not get more national attention and respect for the meta-analysis. Even a priori it makes sense that an analysis of all local polls (sampling politics at local levels, where comparatively local politics might wildly vary) is the best overall measure of collective politics. And certainly the empirical results confirm or disconfirm even an a priori intuition about stats. I’d say that Prof. Wang’s hypothesis has been very strongly confirmed, and deserves much wider credence than the press has provided thus far.
If nothing else, it is a perfect example of Occam’s razor. Something that predicted the outcome just as well, with fewer assumptions.
The scary part in all of this is that the Romney campaign was unskewing *internal* polls.
Then there is Joe Scarborough, barking away, while the conservative establishment quietly tries to hide how flat his punditry was when he called Silver and other aggregators a “joke”.
Really, it’s time to propagate a quantitative model of punditry, in which one’s level of accomplishment as a pundit is inversely proportional to the factual support for the positions claimed.
A pundit with a 100% track record would be hounded out of the profession and need to seek asylum in science. Wait, that already happens…
Perspective: All us Dems are jacked with Sam’s site b/c it accurately predicted what we instinctively felt – Obama was ahead all the way. The silly media wouldn’t report that; they had to keep people tuning in by reporting the pre-election fallacy of a “statistical tie”. It will happen – sooner than we’d like to think – that the GOP again leads. Will Sam’s site get as many of us looking on a daily basis then?
Fewer Democratic readers, I am sure. But the calculation will still be done the same way.
If that happens, I will stick my head in the sand — but will peek at Sam now and then to see if I dare pull it out.
Well, I want to think that Sam is wrong, and Democrats will continue to follow his calculations and recommendations. This information can still inform where to put resources if Democrats are swimming upstream (as was the case in 2010, and could be the case in 2014).
We should always stay involved, even if we think our preferred candidate may lose. If we drop our support, our preferred candidate is then more likely to lose.
I’ll certainly be looking here, 538, and Votamatic. If they’re all showing the R ahead, I won’t have a chilled bottle of champagne in the fridge that Tuesday night. But at least I won’t be “shell shocked”!
I was following this election pretty closely, and I cannot remember one day throughout the campaign that Mr. Romney held leads in enough states to put him over the 270 mark. So I find the claim that he could have won had it not been for Sandy funny. He was never winning.
Who knows what brings voters to the polls? If Obama had done better in the first debate, and if his predicted margin of victory had been high, perhaps fewer Democrats would have voted. The fact that many thought the race was close and Republican confidence seemed high ensured that many of my friends voted – some early. They were scared Romney would be elected in spite of my telling them of Dr Wang’s numbers.
Thanks again. I’ll be back next time no matter the predictions.
Well, I’m hoping we’ve won over some of the Fake-Based voters to the Fact-Based notion that not everyone was screwing around with skewing.
As the last bite of the meta-margin goes down the gullet [ http://www.facebook.com/photo.php?fbid=4951909600171&set=a.1154730673071.23076.1371822976&type=1&theater ] we bid fond farewell to this election season… What’s the chance of winning the house in 2014?? Where’s the chart?? Lemme at it!
Ms. Jay.
The cake is *not* a lie.
Ms Sheckley,
I too wonder about the party breakdown of the House seats, as well as Senate seats, that are up in 2014.
I predict the chances of Democratic gains will depend on whether the party is effective in the next two years, whether they have the spine to stand up and pass good legislation on a number of fronts. Especially taxation of the 10% and “offshore” corporations, and immigration.
Furthermore, they must publicly highlight GOP obstruction — especially with regards to continued tax breaks for the middle class and immigration.
oh my goodness what kind of nerds are you guys?
that’s a riff on the famous gamer-culture meme from Portal, “the cake is a lie”.
A difference in this election campaign from 2008 is gamer-culture tagging. I have hardly seen any FTWs, Epic Fails, or ‘all your base are belong to us’ tagging.
Are gamers growing up? Or is gamer culture just submerging?
I think gaming is the dark matter of the social universe.
Classic line, “I am developing doubts about their analytical neutrality.” I am one of many who appreciate your integrity, in addition to your expertise.
“There is one remaining finding that puzzles me. Although post-debate-#1 national poll averages did not move much in October, state polls did – toward President Obama. There is some unexplained discrepancy. Were swing-state voters more malleable by messaging and advertisements? Were they more attentive to the race? Did these swings occur at the national level too, but polls failed to pick it up? Hmmm.”
Just a guess: low information voters show greater tendency to lean Republican than Democratic? As limited as the info is in swing state political ads, at least it’s more info than is normally omni-present (nationally).
Those of us in the gambling fraternity are pleased that you didn’t get more attention, and that Morning Joe and others raised doubts even though their arguments had no foundation or indeed logic to them. I think that Sandy might have tipped the balance in Florida, but that is more of a “vibration” thing than a scientific observation. For example, in Iowa you had aggregated polls, but when the Register publishes a Selzer poll at +5% for Obama, there is certainty that there is no bias towards Obama in the other polls. As it turns out, the polls and aggregates seemed to be slightly to actually biased towards Romney. There does need to be some explanation as to why the count in Ohio was closer than the polls suggested, though, as in every other “battleground” state I think I am correct in stating that Obama performed better than the aggregated forecast. When the final votes are tallied I expect a margin a tad over 3% nationally.
When will the final tally be completed?
Are provisional votes being counted in all states?
I have been reading all throughout the election cycle about how biased Rasmussen and other pollsters were towards the Republicans. Is there any final analysis somewhere of individual polling companies, and just how biased they were towards the Republicans (or towards the Democrats)?
Again, thank you for an incredible site! It kept me sane this past election amidst all the chaos on the news.
@Albert, this thread by nate silver is one list that i saw, others have done this also.
http://fivethirtyeight.blogs.nytimes.com/2012/11/10/which-polls-fared-best-and-worst-in-the-2012-presidential-race/?gwh=53E1175455007C789CC4AB7BC0D9E1A3
What’s the basis for concluding that the pollsters’ error on election day would be the same as their error a month or two months before?
Albert try this:
http://fivethirtyeight.blogs.nytimes.com/2012/11/10/which-polls-fared-best-and-worst-in-the-2012-presidential-race/#more-37396
If it weren’t for B&B I would never have found PEC, as I was frantically surfing the web 2 months ago after hearing B&B tout their ‘retroactive prediction’ record and their belief that Romney would be the winner with 330 electoral votes.
So glad that I found sanity, math, science, and PEC. Thanks again, Dr. Wang.
I quickly scanned the comments and didn’t see (but perhaps missed) this suggestion as to why the state polls did not track the national polls. The experience of living in a swing state was incredible. All that money bought a lot of advertising. A friend who moved from California to Colorado during the campaign was incredulous at the difference in advertising.
Bickers and Berry are colleagues of mine in a different department. They are not boogeymen. They are not big-time quant guys; they just believed in the notion that historical economic data always predicted the outcomes. Maybe when all voters were angry white men, that was the case.
Sam,
Appreciate your good work. But after your performance in 2004, I would have expected you to be a little more understanding of those who got it wrong this year.
I have the zeal of the convert – but yes, I do require high-level leaders to look at good data with a cold eye. I am basically a hobbyist. What is the excuse of a hektamillion-dollar apparatus that can afford the best talent? Should such people be trusted in evaluating data for complex problems of governance?
Great to see Colorado as the tipping-point state again as it becomes more blue every year. Good news for 2016, since no Republican has won statewide for eight years now. The new marijuana law is not going to make it redder. Thanks for all the great work, Dr. Wang!
The correct Hamburger quote is:
“Our real teacher has been and still is the embryo, who is, incidentally, the only teacher who is always right.”
As with the MLK monument–the original quote was more eloquent than the paraphrase.
I enjoy your site.
Thanks Dr Wang I love that quote!