Princeton Election Consortium

A first draft of electoral history. Since 2004

A Reply To Nate Silver – With Factchecking

October 6th, 2014, 8:19am by Sam Wang


In response to the “Twitter-crit” post below, Nate Silver wrote a longer piece for Political Wire. There’s also this interesting analysis by Daniel Altman at The Daily Beast.

This Monday morning, I replied in Political Wire. Check it out.

I thank PEC readers Bum, bks, Froggy, Kevin, AySz88, Hugh J Martin, Art Brown, Alan Koczela, Lojo, 538 Refugee, Philip Diehl, Amit Lath, and A New Jersey Farmer for advance comments on the essay.

Tags: 2014 Election · Senate

73 Comments so far ↓

  • Rich Seiter

    The link to https://politicalwire.com/archives/2014/10/02/nate_silver_rebuts_sam_wang.html is broken. I went to the Internet Archive to look for a saved copy and was intrigued to find out that page disappeared shortly after the election. See http://web.archive.org/web/20141108150947/http://politicalwire.com/archives/2014/10/02/nate_silver_rebuts_sam_wang.html?
    and compare to the earlier archived copies.

    Any idea why that would be?

  • RCA

    I just want to point out that Nate Silver claims, and the media still repeats this false claim, that he predicted who would win all 50 states in 2012. That is a lie: he incorrectly predicted that Romney would win Florida. Silver changed his mind on the MORNING of election day 2012 and SWITCHED his position, but THAT IS NOT A PREDICTION. That is switching your position after getting inside information from contacts in Florida. Once it is already election day and you are getting inside information from the state in question, YOU CANNOT LEGITIMATELY CLAIM YOU HAVE PREDICTED ANYTHING. Yet the media to this day claims he predicted Florida. “Pre” means “pre” – before! I live in Florida and I predicted Obama would win Florida by 0.75%, which is about what he won it by. I do not pretend to run fake useless computer models, and I do not pretend that averaging good polls and bad polls will give the right answer. It sometimes won’t. Nate is lucky the media wants to believe his exaggeration of his accuracy. His accuracy is OK, but not that great. Remember, in 2012 everyone with a brain knew who was going to win 41 states. Only nine were in question. So he got 8 out of 9 right. That’s OK. It isn’t 9 of 9.

    Also, Nate and many others say it is likelier that the Repubs will grab the Senate tomorrow. Actually the opposite is true. Democrats will win NC, NH, AK, and Kansas. Polls are garbage in Alaska: Begich is simply qualified and the Repub isn’t, he is popular and the Repub isn’t, and he has a GREAT HQ and ground game and the Repub doesn’t.

    I count Orman as a Democrat; he in effect is. So all Democrats must do is win 1 state out of the remaining 6: IA, AR, CO, GA, KY, and LA. And Mary Landrieu is AHEAD in LA, not behind as the media says. She is ahead by 7%. Look it up. Whether there is a runoff or not, she will likely win. If she gets to 50% tomorrow, she wins. If she doesn’t, you hit the reset and a month from now we find out. None of the votes carry over, and it is smug and stupid of Nate and Real Clear Politics to say Landrieu is behind when the polls say she is 7% ahead.

  • atothec

    New polls today have GA Perdue only +1 and AR Pryor +2. Repubs have pulled out of MI and Dems are going *into* SD(!).

    The GOP ‘surge’ is effectively over. I think Sam’s model is on the right track.

    • atothec

      Also, the SD race has Independent Pressler only 3 points behind Rep. Rounds in the most recent poll. With Weiland getting $1 million for his campaign (which will go a loooong way in SD), there’s a good chance Pressler or Weiland will win.

    • atothec

      Whoa, make that TWO million dollars in SD. The DSCC just added another million on top of the Mayday PAC. This election is getting crazier every day!

    • Steve

      Perdue is getting hit hard with the Pillowtex bankruptcy court documents. They make him look more ‘vulture capitalist’ than ‘effective manager’.

  • Art Brown

    Here’s an estimate of the magnitude of the Orman effect, from the Huffington Post model:

    Huffington Post’s Democratic control probability prediction is currently at 46%, under their assumption that Orman’s caucus probability is 50/50 if he decides the majority.

    That 46% number pops out if one uses, for the Kansas Democratic win probability, Huffpost’s Orman victory probability of 51.2% divided by 2, or 25.6% (with the Republican probability correspondingly set to 1-0.256).

    Alternatively, if one doesn’t divide by 2 (corresponding to PEC’s baseline of Orman always caucusing Democratic), one gets a Democratic control probability of 52% from the Huffpost numbers.

    • xian

      i don’t see PEC as assuming where Orman will caucus, but simply showing the scenario in which the Dems + Inds could have control if they do choose to caucus together.

    • Art Brown

      I was attempting to calculate Huffington Post’s prediction under the same scenario, to see how much difference between these two polls-only models remained after accounting for the “Orman effect”. (PEC’s D+I probability was 64% at the time.)

  • Dylan

    In 2012, 538 made the following state-by-state election predictions:
    CO – 79% chance of Obama win
    IA – 87% Obama
    VA – 79% Obama
    NC – 74% Romney
    FL – 50.3% Obama
    If we accept these probabilities, the chance that each of these five states would eventually go to their favored outcome (which they did) is only about 20%. It seems like Nate is being dishonest by claiming that he “called” all 50 states when his model seems to make that a fairly unlikely outcome.
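Dylan's arithmetic can be checked in a couple of lines. This sketch treats the five state outcomes as statistically independent, which is a strong assumption; correlated polling error would change the number:

```python
import math

# 538's quoted 2012 probabilities for the favored candidate in each state
favored = {"CO": 0.79, "IA": 0.87, "VA": 0.79, "NC": 0.74, "FL": 0.503}

# Joint probability that every favorite wins, treating the five
# outcomes as independent (a strong assumption)
p_all = math.prod(favored.values())
print(round(p_all, 3))   # 0.202
```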

    • Sam Wang

      Well, let’s be charitable. In my view, the real issue is that his approach essentially ends up modeling the noise – and therefore adding irrelevant noise to the prediction. Therefore all the probabilities are underconfident. I pointed this out a few years ago, and Andrew Gelman wrote about it too.

      In that light, the real problem is that he should figure out how to convert margins to probabilities correctly. That’s why I have been going off about sigma_systematic, which I add in *after* aggregating all the states. If I put it into each individual race, it makes the final projection underconfident.

      I would say that if we allow for the problems in his math, he called 49.5 races correctly, the Florida probability being too close to 50% to get full credit. This can all be done far more rigorously – see my reply to Forrest Collman regarding the Brier score.

      Incidentally, if anyone calculates past Brier scores for us in 2010 and 2012, I’d be curious to know the result.

    • Mitch

      Dylan -

      To make your calculation you have to assume that the probabilities are entirely uncorrelated. That’s not a good assumption for this sort of thing. If the undecideds break in a particular direction, or there’s a bias in the polls or something like that, there’s a good chance that it goes in the same direction in all the states.

      … which is not to say that I am endorsing Silver’s calculations or claims.

    • xian

      i recall PEC, 538, and others fiddling with their florida calls down to the last day, so i’m not sure we can credit anyone with nailing that one.

      florida is still white in the final PEC snapshot from 2012, for example.

    • Sam Wang

      “Wang & Ferguson crushed both Silver and Intrade” – aw, shucks.

  • Mitch

    It’s easy to forget, but one reason Silver lost his position in the pantheon when he went to ESPN was that the site was a dud when it came out of the gate. One quite critical problem was that he hired Roger Pielke, Jr. to cover climate – and his first article was one that 538 felt the need to commission a rebuttal to. This was on the heels of Silver’s wrongheaded chapter about climate change in his book. (See http://www.huffingtonpost.com/michael-e-mann/nate-silver-climate-change_b_1909482.html) No one who wanted the public to have a clear understanding of what climate scientists are saying would hire Pielke, and no one who isn’t an expert should just go off half-cocked and write what he did in his book.

    I think that this debut soured many of us on him, and his behavior lately hasn’t helped matters. I think Silver let his success and the public accolades go to his head, and he decided that he knew more about just about everything than everyone else.

  • bks

    Okay, everyone be on your best behavior. Paul Krugman reads this site:

    PK: I read all of the analysts: I read [molecular biology professor and political analyst] Sam Wang, I read “The Upshot”

    http://dailyprincetonian.com/news/2014/10/qa-paul-krugman/
    –bks

  • Forrest Collman

    I have to say, one argument Silver made that frustrates me is the claim that the best quantitative way to assess how good a model is would be to check whether the races called at, say, 60% probability are indeed correct 60% of the time.

    Consider the stupid model of calling all races with 50% probability. Look, the model is perfect, no matter what happens!

    If someone dared write a model that had only 100% or 0% probabilities and got them all correct, that would indeed be the “best” model.

    The proper way to quantify this is simply to calculate what the prognosticator thought was the probability of the end result. In the case of a single race this is sort of stupid, because reasonable people would always lag behind someone claiming 100% for one candidate or the other.

    In the case of multiple races, this is more interesting. What we should do is just multiply up the probability of each race independently, or maybe sum up the log probabilities, and whoever gets the highest score clearly wins.

    So in a simple case, PEC predicts 80% candidate A over B, and 55% candidate C over D. It turns out A wins and D wins. That’s a score of .8 * (1-.55) = .36, or log(.8) + log(1-.55) = -.44. Let’s say 538 predicts 60% for A over B, and 51% for D over C. That would be .306, or -.51 in log space.

    So, in this hypothetical, 538 went 2 for 2, and PEC went 1 for 2 using the stupid horse-race criterion of rounding 51% to 100%, but the actual result was more probable in the PEC model than in the 538 model.

    Using such a metric, we can actually quantify how much PEC’s “overconfidence” (which Nate seems to think exists) hurts vs. helps. If Silver sticks to his larger-variance predictions, his model might look “better calibrated” in the sense that he gets the right result the proper percentage of the time, but I don’t think that’s the proper way to evaluate how “good” his model is.
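Forrest's scoring scheme can be put in a few lines; the numbers below are his hypothetical PEC and 538 probabilities, not real forecasts:

```python
import math

def log_score(probs, outcomes):
    """Sum of log10 probabilities assigned to what actually happened.
    probs[i] = forecast P(first candidate wins race i); outcomes[i] = 1
    if that candidate won. Higher (closer to zero) is better."""
    return sum(math.log10(p if won else 1 - p)
               for p, won in zip(probs, outcomes))

outcomes = [1, 0]        # A beat B; C lost to D
pec = [0.80, 0.55]       # 1-for-2 by the "round to 100%" criterion
f538 = [0.60, 0.49]      # 2-for-2 by the same criterion

print(round(log_score(pec, outcomes), 2))    # -0.44
print(round(log_score(f538, outcomes), 2))   # -0.51
```

Despite going 1-for-2 on the horse-race count, the hypothetical PEC assigns the observed outcome the higher (less negative) score.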

    • Sam Wang

      Yes, that was a bit silly. It only addresses calibration, but not quality.

      From reader comments, I have learned of a better measure, the Brier score, defined as the expectation value B = <(p_i - o_i)^2>, where p_i is the estimated probability and o_i is the binary outcome. The lower the score, the better the forecast.

      Imagine two weather forecasters. Forecaster A says there’s a 50% chance of rain on each day. Forecaster B says the probability is 80% on day 1, and 20% on day 2. Now imagine that it rains on day 1, but not day 2. Forecaster A’s Brier score is B=0.25, and Forecaster B’s Brier score is B=0.04.

      In this respect, I should strive for the maximum possible certainty…but no more. In other words, squeeze all the predictive juice out of the lemon. Silver leaves juice in the lemon.
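Sam's two-forecaster weather example can be sketched in a few lines (a minimal illustration, not PEC's actual code):

```python
def brier(probs, outcomes):
    """Mean squared difference between the forecast probability and the
    binary outcome (1 = event happened). Lower is better."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

rained = [1, 0]                              # rain on day 1 only
print(round(brier([0.5, 0.5], rained), 2))   # Forecaster A: 0.25
print(round(brier([0.8, 0.2], rained), 2))   # Forecaster B: 0.04
```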

  • Nicolas

    Profoundly grateful for the dialogue that has been generated. Very educational. Thank you for your efforts, and for continuing to engage in a civil manner under what must be difficult circumstances. Sensationalism feels so counterproductive, though I suppose it serves some necessary function. I really appreciate the honesty and transparency I find at PEC.

  • DrJim

    Sam and Nate are both behaving consistently with their backgrounds. Sam is engaging in an academic discussion and responding to critique point by point without getting personal. Nate is sitting at a poker table talking trash with his buddies.

  • Steve Scarborough

    Hi Sam. I was one who suggested not responding to Mr. Silver. But, given that you did, I like that it was fairly short and took the high ground so to speak. I say well done.

    I myself have done presidential election modeling going way back. When Mr. Silver’s 538 site was first created, prior to its move to the NY Times, I eagerly followed it each day. As a statistician, I was delighted to find someone else who had reservations about Rasmussen.

    But, since 538 is now at ESPN, it is as though everything changed. Every now and then I wander over to it and see what is going on. But, I do like to pay attention to what Drew Linzer and Simon Jackman have to say as well. Your site is my favorite, to be honest.

  • Chuck

    Nice analogy about who the real players are and the place of the analyzers. I’m a New England Patriots fan, and the season was all but over last week according to all the announcers and journalists, but not to the coaches and players (sorry, Cincinnati). The polls, though, are completely independent of the final results. Polls could motivate people one way or another, GOTV efforts especially.

  • Canadian fan

    I liked Sam’s response to Nate very much. I honestly don’t know why Nate keeps referring to Nevada – the race both analysts got wrong. Unless Nate’s point came down to: you were 99% wrong, and I was only 97% wrong. Could it really come down to that? I like the transparency of Sam’s model. Unlike 538, there are no mysteries kept from view. It’s laudably transparent. Sam even instructs us how we can come to the same conclusions. He takes the mystery out of it. Nate does not. From that standpoint alone, it doesn’t seem like a fair discussion. But what I am impressed with the most – and not too many commentators have mentioned this – is that Sam’s writing is really beautiful, elegant, and articulate. And civil. Twitter is the natural domain of politicians – who are convinced arguments can be made and lost in a few words. But it is not the venue of the professional. Nate’s better than that.
    I just want to add that I am absolutely thrilled by the movement we are now seeing in the polls. There is real movement in Iowa, Colorado, and Kentucky – a truly remarkable development. The Kentucky poll was conducted by SurveyUSA – a non-partisan polling outfit that Nate gives an “A” ranking. The polls in Iowa have shown a narrowing race. In that light, the Magellan poll can be easily dismissed as an outlier, as the firm is openly commissioned by Republicans. It fails, though, to distract from what the movement in the polls is showing us.

  • HenryK

    I suppose I’m the heretic here — I enjoy following both PEC and 538 and/but I haven’t enjoyed the tone of these discussions.

    I’m also a detail-oriented reader with more experience in statistical modeling than the average bear. The questions Mr. Silver asks seem fair enough: why has PEC begun to produce such different results from other “polls only” models like DKos? (No client of mine would buy that a forecast showing a 60% chance of an event occurring and another showing a 35% chance are “close enough”. Money is made along the margin and that’s a big margin.) Is it incorrect for him to assert you’re essentially going as far back as June in looking at polls?

    Why not be clearer about the effect Orman has on the race? (You could list three scenarios: GOP win, Dem win, “Orman decides”. Then readers could choose their own adventure, so to speak.)

    Why has PEC’s forecast been turbulent lately (going from ~50% to 65% Dem in 36 hours or so) after having been so steady in the past? I get that you’ve switched to a “short term” forecast, but it seems like it should have been phased in gradually — starting weeks or months ago? — instead of turning a corner all at once.

    Believe me, I have just as many questions for Mr. Silver. His form has been poor. But you’ve used that as an excuse to throw a lot of penalty flags when you should be answering some of the more substantive queries about your model. Bravo to PEC for providing its source code, but transparency also involves explaining WHY you’ve designed a model in a particular way — ideally with some rationale in the empirical evidence. You’ve both disappointed me.

    • MarkS

      Hear, hear!

      IMO, the weakest point of Sam’s methodology is the assumption that, more than 5 weeks before the election, the meta margin will regress to its June-Sept mean. It is this assumption that produces the discrepancy that Nate has called out. But at this point in time, Sam abandons the regress-to-the-mean assumption, and instead assumes that the current snapshot is the best guess for election day. I would prefer this latter methodology to be used at all times, if only for simplicity and consistency over time.

    • DrJim

      You haven’t read the background material available on the PEC website. All of your questions are addressed – some over and over again – if you’ll just read what’s here. Some were addressed again within the last few posts. I’m sure your clients also expect you to do a little homework before you speak up.

  • JayBoy2k

    There is a lot of discussion about who will be validated on November 5th. The answer will most likely be nobody/everybody. PEC is done with its June trailing snapshot influence and will converge in the next month to the exact polls in the last week. 538 and the other aggregators will do much the same, making fundamentals less and less of their prediction and depending on the last week’s polls. Is there something beyond polling house effects (biased D or R) that will remain in the Nov 3rd predictions? If so, what will that be?

  • A New Jersey Farmer

    The Votemaster went from 52-47-1 for GOP to 51-48-1. Seems as though the trend is real, and spreading. Wonder if Nate’s going to go after the ‘master next.

  • Violet

    Nicely done, Sam. Nate seems kind of petulant. Probably because it would be so embarrassing for him to be wrong. I would be jubilant to see Dems hold the Senate both because I’m a Dem and because Nate is so smug and you’ve been such a class act.

  • Savanna

    I like the level of discourse here. Statisticians should be neutral and non polemical. It is about interpreting the polls, nothing more.

  • SFBay

    The basic issue to my mind is that this is all about personal pique; Nate Silver’s to be specific. He’s upset at potentially being unseated as the poll guru. And he’s afraid this failure will seal the deal that 538 on ESPN is a loser. He’s lost all perspective. I’d be a bit sorry for the guy, but he’s made it so easy to dislike him I don’t.

  • Insidious Pall

    Even, measured response, Professor. One positive aspect of this discussion is that I haven’t entirely resolved the polls-only vs fundamentals issue. For a very long time, in fact prior to any poll aggregations, I have employed my own sort of anecdotal fundamentals to the polls I look at with a rough idea of the voting histories of individual states. There being so many close races this time, the two models are cast in fairly sharp relief.

  • David vun Kannon

    Sam, one thing that has been brought up is the “since June” meme. But your methodology only keeps the latest poll from each pollster. Unless a pollster has never refreshed their polling of a particular race, the stalest poll you use is actually much closer to the current day. Is it easy to say what the oldest poll you are using in each race is? It might help folks understand the process better. Thanks for everything.

  • Davey

    Totally informative insight on how your model works, and how we might evaluate either. Thanks!

  • Michael K

    I’m not sure I follow Altman’s reasoning. If Nate is “afraid of” Prof. Wang, then isn’t that a reason for Nate *not* to single out and draw attention to a less widely known model (and risk looking worse after the election than if he had held his tongue)?

    It seems to me Nate has some genuine philosophical disagreements but also lots of misconceptions and misunderstandings.

    Clearly Nate doesn’t fully understand where the PEC probability comes from if he argues that PEC doesn’t account for how much the polls have differed from actual election results in the past.

    The RV/LV poll criticism seems like a valid theoretical point. I guess Sam’s implicit counter-point is that the effects of a few RV polls, mostly from June/July, on the forecast are too negligible to warrant making “secret sauce” assumptions to adjust for. It might help if we could attach a precise number to how much difference it makes.

    The criticism about favoring Michael Dukakis (or the Philadelphia Eagles) after they fall behind (because they led most of the race/game) is an odd one for Nate to make, because I believe his model makes similar assumptions. Doesn’t 538 count ALL the polls, weighing the recent ones the most? If Candidate X had a big enough lead for a long enough period of time, won’t 538 still have candidate X ahead even after they consistently trail by a smaller margin for a shorter period of time?

    Maybe PEC probabilities give months-old snapshots too much weight, but how can Nate argue one way or the other without a much deeper analysis of historical races?

  • Kenny

    Very well done. This helped me understand your process and was a great rebuttal to his criticisms.

  • David

    Good job, Sam. And a good tone to your response as well, very civil.

  • Lojo

    Nice reply Sam.

    It’s a coin flip election, so PEC could be wrong but so could 538.

    Today’s polls, while not predictive, show how fluid things are. I like Ernst as an opponent and Grimes as a campaigner.

    If there is a Dem surprise, Nate is going to be part of the story, because he missed the call (and Sam made it). If there is a GOP win, Sam is not going to be part of the story because there is no surprise according to conventional wisdom (except later, when they do a roundup of different prediction blogs/sites). Nate has a lot more to lose.

    It’s gonna be interesting.

    • Davey

      I think because it’s a coin flip election, nobody is wrong. If I say I only have a 25% chance of flipping two heads in a row and then I do it, I wasn’t wrong. I just wound up in the one scenario in four where my totally accurate prediction said that would happen.

      And that will be the narrative on Nov 5th. “We called it correctly but the results are in our majority/minority probability.” I predict Dr. Wang will be professorial and seek to make his model better, while Mr. Silver will come across as a little snot. I didn’t need complex modeling to run those odds, and my methods are totally open source.

  • Stephen Altschul

    For the past week or so, I have been unable to find probabilities for individual senate races (either snapshot or November) on your site, even though these are now clearly being updated regularly. These used to be discoverable by clicking on the link beneath the “Power of your Vote” chart, but this no longer seems to work. Is there some other way to find these probabilities, or will this link be restored sometime soon?

  • Kevin

    Whoa–did the election day probability of D+I control just jump from 51% to 67% in an hour? Does this have to do with Grimes now being favored in KY on the Power of Your Vote sidebar?

    I would like to understand better how the Election Day prediction, not simply the snapshot, could move so far so fast.

    Thank goodness–something besides Nate Silver to talk about.

  • we_are_toast

    Studies show that up to 4% of voters either make up their minds or change their minds when they walk into the voting booth. Most Americans can’t name their House rep.

    Trying to accurately model this level of uncertainty to within a percentage point or two is a bit beyond absurd. There are so few Senate races actually in play that anyone here who watches the polls, and maybe reads about party GOTV efforts, has just as good a chance of making just as accurate a prediction as either Mr. Wang or Mr. Silver.

    The difference between the two is that Mr. Silver believes all those digits to the right of his decimal points actually mean something.

    • 538 Refuge

      When I looked at 538, I must admit I wondered if the numbers really supported the number of significant digits he shows. From a marketing standpoint, they are probably completely valid.

    • Matt McIrvin

      Actually it’s almost the other way around: if anything, Silver tends to exaggerate the uncertainty of his predictions as an excess hedge.

      In statistics there’s such a thing as a confidence level: you cite error bars for some quantity at, say, a 95% confidence level, which means the procedure that produced your error bars captures the true result 95% of the time. If you do this a bunch of times, you should get it wrong about 5% of the time; if you don’t, then something’s wrong with your error estimate. But this is something that’s hard to convey in a popular article.

      My experience is that Silver tends to rate the probability of a black-swan prediction failure much higher than he really should, given even his own track record.
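Matt's calibration point can be demonstrated with a short simulation. All of the numbers here (true mean, sigma, sample size, trial count) are arbitrary illustration choices, not from any poll model:

```python
import random
import statistics

random.seed(0)

TRUE_MEAN, SIGMA, N = 0.0, 1.0, 50   # arbitrary illustration values
Z95 = 1.96                           # two-sided 95% normal quantile

trials = 2000
misses = 0
for _ in range(trials):
    # Draw a sample and build a 95% error bar for the mean (known sigma)
    sample = [random.gauss(TRUE_MEAN, SIGMA) for _ in range(N)]
    m = statistics.fmean(sample)
    half_width = Z95 * SIGMA / N ** 0.5
    if abs(m - TRUE_MEAN) > half_width:
        misses += 1

# A well-calibrated 95% interval should miss roughly 5% of the time;
# a markedly lower miss rate would mean the error bars are too wide.
print(misses / trials)
```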

  • securecare

    Thank you Sam, well done.

  • Valdivia

    loved the clear and concise analysis you did in your reply. really speaks for itself instead of engaging the tantrum on the other end. bravo.

  • Christian

    Hi Sam,

    In the article, you write “Of perhaps greatest interest is the fact that on Election Eve in 2012, PEC called every close Senate race correctly – 10 out of 10.”

    I think statements like this (# of correct calls, who got what wrong) confuse readers.

    All of these models are probabilistic. In the 2014 race, if you’re predicting a 40% chance of Republican control of the Senate and the race were held 10 times, you should expect to be wrong about 4 times out of 10. Your statement above invites the reader to round >50% up to 100%.

    Just skimming the Daily Beast article reveals how this misinformation propagates.

    The reader is better served if you take a history of predictions at 10-20%, 40-60%, 80-90% and show how they fared.

    • JayBoy2k

      Christian,
      Insightful and obvious!!! Both Nate and Sam need incorrect predictions to validate their models. If either has a perfect record, what happened probabilities and basic math?
      I had not thought of it exactly this way — Thanks
      So , how should we validate a model if perfect results are not the answer?

    • Bernd

      If one wants a single number to rank the quality of the forecast, one should look at metrics like the Brier score summed up over all individual Senate races to see who did a better job. Wang crushed Silver on this in 2012.

  • Joe

    The meta margin moved today because of a new poll out of KY, the second poll showing Grimes ahead, albeit within the margin of error. Coupled with the YouGov polls yesterday showing CO moving back in the Dems’ direction, and IA as well. It’s an indication that something is happening in the Dems’ direction, or that the Sept polls were off and dragged down the averages of D candidates, and the current polling crop is bringing the averages back to where they were to begin with. Still, nothing to take for granted. GOTV!

    UPDATE: As I typed this, new poll out of IA shows dead heat.

  • Jim

    I have a question, but not about the discussion on Political Wire. Not sure where else I can ask.

    Today (10/6), the meta margin and November prediction moved slightly towards the Democrats. I’m not aware of polling today, so was wondering why the movement.

    More generally, I’m curious how quickly the meta margin and election predictions are updated? Do numbers on a given Wednesday, for example, incorporate polls up until Tuesday?

    • Steven

      Iowa poll released today has it as a tie, Colorado Poll released today has Udall in front, and a Kentucky Poll has Grimes leading. All of these combined to move the meta-margin and prediction.

      As for what dates the polls contain, it is dependent on the pollster itself. For instance, the Loras College Poll that has Braley and Ernst tied was released today and incorporates surveys done 10/1-10/3. The HuffPo pollster links on the right display all of this info.

    • JayBoy2k

      Jim,
      This is a good quote for how individual state polls work.

      “For the current snapshot, the rule for a given state is to use the last 3 polls, or 1 week’s worth of polls, whichever is greater. A poll’s date is defined as the middle date on which it took place. In some cases 4 polls are used if the oldest have the same date. At present, the same pollster can be used more than once for a given state. From these inputs, a median and estimated standard error of the median are used to calculate a win probability using the t-distribution.”

      So, most times PEC takes the last 3 polls, given that covers a week — pretty soon we may get more than 3 polls in a week. Grimes and McConnell each had one poll favoring them, and then the new poll came out at Grimes +2. That gives a median of Grimes +2 from the set {Grimes +2, McConnell +6, Grimes +2}, which stands until more polls come out.
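The quoted rule applied to the Kentucky example looks like this. The 3-point standard error and the normal approximation below are hypothetical stand-ins for illustration; PEC estimates the standard error of the median from the polls themselves and uses a t-distribution:

```python
import statistics

# Margins for Grimes in the last three KY polls (positive = Grimes lead):
# Grimes +2, McConnell +6 (i.e. -6 for Grimes), Grimes +2
margins = [2, -6, 2]

snapshot = statistics.median(margins)
print(snapshot)   # 2

# Hypothetical win probability via a normal approximation; the 3-point
# standard error is an assumed number, not PEC's estimate
sem = 3.0
p_dem = 1 - statistics.NormalDist(mu=snapshot, sigma=sem).cdf(0)
print(round(p_dem, 2))
```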

    • Jim

      To Jay:

      Thanks, but that doesn’t exactly answer my question, so I’ll rephrase:

      When exactly are the various numbers at PEC actually recalculated? Does this happen every day at a particular time?

    • JayBoy2k

      Jim,
      copied from the FAQs —

      When do updates occur?

      Every day at midnight, 8:00am, noon, 5:00pm, and 8:00pm.

  • Randall

    ‘Strength of your vote’ is exactly it, Dr.

    Btw, Loras College has IA tied TODAY – which means that Ernst has now led in just four of the last 14 polls.

    GOTV here is the whole ballgame and will likely decide Senate control.

  • atothec

    What’s funny is that after the election, no one will care about any of this at all for another two years. Nate will go back to being ‘oh yeah, that guy’ and Sam will be ‘what’s-his-name’.

    In other news Grimes has a new poll showing her +2 in KY. So that’s 2 recent polls with her in the lead.

    Really is a coin toss right now but I think R momentum has stalled, Dems will pick up some steam and win with superior ground game. Dems also have an edge in ad buys as we get closer.

  • Hobbes

    Silver’s shtick is getting a little tiresome…

    It’d be one thing if he had any desire to have a real conversation about the benefits/drawbacks of various forecasting methods but to hammer out the same incorrect talking points day in and day out and then ignore everything Dr. Wang says in response reeks of dishonesty (and is strikingly similar to how politicians behave, not statisticians.)

    I think The Daily Beast has it very right in suggesting Silver’s behavior is mostly due to the perceived precariousness of his position as the preeminent forecast guy, and it’s sad to see him act so petty.

  • xian

    wow, the comments on that daily beast article are pretty thick.

  • Kathy C

    I love you, Sam Wang! I just love seeing intelligence put the phonies in the entertainment world in their place. “Scientist, ” “East Coast Elite,” “Pointy-headed intellectual,” “East Coast educated,” have all become dirty words–code words for people who someone thinks need to be discredited and pushed out of the way. I’m really shocked that Silver is trying to do this to you, but he is. The media is dumbing us down (no surprise, considering who owns it), and he is now part of it. Sad, really. I wonder if he even knows it?—Desperate for a paycheck, esteem, fame. He’s the entertainment now–Greek tragedy!

  • kahner

    I agree with Altman that this has far more to do with reputation, money, and traffic than with any problem Silver has with Prof. Wang’s model. There have been several reports that 538 has been a failure so far at ESPN, and that’s likely putting a lot of pressure on Silver to generate more traffic and revenue. I find convincing Altman’s point that PEC is a far bigger threat than any other site because it’s the only one making the opposite prediction from 538. If everyone’s wrong, then no one loses. But if everyone but PEC is wrong, that is a huge blow to 538 and Silver’s reputation. And no matter what, this tirade of Silver’s must be creating some increase in visibility and site traffic.

    On another note, the one aspect of the critique I find interesting is his argument that the PEC model “estimates the uncertainty in his snapshots based on how much the polls differ from one another — and not how much they’ve differed from actual election results,” thus creating unreasonably high levels of certainty in some predictions. My understanding from Prof. Wang’s previous posts is that this critique is based on a model PEC no longer uses, and that Silver is completely aware of that yet still uses it as an attack. All of Silver’s other technical critiques appear to be nothing more than debatable modelling decisions, not, as he claims, “just wrong”.

    • Matt McIrvin

      But the only reason Democratic control of the Senate would have that reputational result would be a completely irrational reaction to what is, after all, a bunch of competing attempts to model an election rationally. That’s what’s frustrating…

    • Matt McIrvin

      On another note, the one aspect of the critique I find interesting is his argument that the PEC model “estimates the uncertainty in his snapshots based on how much the polls differ from one another — and not how much they’ve differed from actual election results.”

      I suppose he’s talking about his house-effect calculations.

      Sam is basically assuming that most polls’ house effects will tend to cancel and the far outliers can be dealt with using median statistics, so by attempting to calculate an individual house adjustment for every poll, you’re likely just adding extra uncertainty from any error in the adjustment calculation.

      But if we got in a situation in which some large fraction of all polls had a consistent house effect in some direction, Sam might have to reformulate. Many people have wondered in the past whether we might see concerted efforts to generate lots of biased polls to game 538- or PEC-like models.
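      The trade-off Matt describes can be illustrated with a minimal sketch (the poll margins below are hypothetical, invented purely for illustration): a median-based aggregate shrugs off a few biased polls, but shifts once the biased polls become the majority.

      ```python
      from statistics import mean, median

      # Hypothetical margins (Dem minus Rep, in points) from six pollsters.
      honest_polls = [2.0, 1.5, 2.5, 3.0, 1.0, 2.0]
      # Two hypothetical "flooded" polls with a large house effect one way.
      biased_polls = [8.0, 8.5]

      polls = honest_polls + biased_polls
      print(round(mean(polls), 2))    # the mean is pulled toward the outliers
      print(round(median(polls), 2))  # the median stays near the honest cluster

      # But if a majority of polls shared the same bias, the median shifts too:
      mostly_biased = [8.0, 8.5, 7.5, 9.0, 8.0] + [2.0, 1.5, 2.5]
      print(round(median(mostly_biased), 2))  # now reflects the biased cluster
      ```

      This is the sense in which median statistics handle far outliers for free, while a concerted flood of like-biased polls would defeat them just as it would defeat per-pollster house adjustments.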

    • kahner

      @Matt McIrvin, that would surely be an irrational reaction, but people are irrational, so it’s certainly to be expected.

  • Edward G. Talbot

    Agree with Matt, this has gotten tiresome. Most people are simply not going to accept the realities of statistical analysis in an election this close.

    I’m not sure there’s much to be gained by engaging Nate any more on these sorts of issues unless you think it will generate some additional insight you wouldn’t have gained otherwise. It really isn’t peer review at this point.

  • Matt McIrvin

    …I guess he qualifies that further down. Maybe he’s just arguing that others think this way.

    • bks

      Altman writes:

      Over the years, I’m pretty sure both forecasters have benefited from luck, which is impossible to measure. We’ll see who’s luckier soon enough.

      Unfortunately, we won’t see that at all. –bks

  • Matt McIrvin

    Altman makes the classic mistake of interpreting a probability over 50% as “predicting the Democrats will win.” It’s getting tiresome.

    • kahner

      I think that’s just slightly sloppy language by Altman. He seems to understand the idea of statistical prediction fine, based on his later statement: “Of course, a single election shouldn’t be grounds for validating or dismissing any statistical model. A 41 percent chance of an event is still a 41 percent chance; the event is quite likely to happen. No forecast is “wrong” unless it predicts something with 100 percent certainty that doesn’t end up happening.”