General overview 2012: the state of play
Welcome to all the new readers. Traffic is booming. First, a brief update. Several weeks ago I showed that convention bounces (and other fluct...
I am impressed with the response to my question yesterday. Readers, go see the comment thread. Also, thank you all for the kind words.
Yesterday my colleague Paul Starr brought up points relevant to this discussion: (1) Voter polarization is extreme. In the Presidential race, the fraction who are undecided appears to be extremely low in 2012. The Meta-analysis indicates that as few as 2% of voters are undecided (i.e. a 4% range in the Meta-margin). (2) A May 2012 Pew Center study showed a response rate of just 9% in phone surveys, down dramatically from even 2009. Holy moly. (3) This is the first year for Citizens United and the resulting unlimited campaign finance to have an effect. I will return to some of these topics in coming weeks.
In yesterday’s comments, both old and new topics were brought up. One subject does not affect poll analysis, but is of extreme importance: campaign finance. I offer a few responses to comments, sorted by subject. Click on the boldface titles to learn more.
Do polling samples capture the voting public? Problems include distinguishing registered from likely voters, not properly capturing cell-phone-only users, and voter ID/voter suppression laws.
Identifying who is a likely voter (LV) matters, and is more prone to going wrong in off-year elections. I recommend an extensive analysis in 2010 by former pollster Mark Blumenthal at Pollster.com. The difference between registered voters (RVs) and LVs is a problem before Labor Day, after which most pollsters switch to reporting LVs. In past years I have not noticed a shift in the Presidential estimator at that time. It might be hard to see because of short-term swings like convention bounces.
The question of cell-phone sampling comes up repeatedly. To my thinking it is not a real problem, in the sense that in principle it could be fixed by adjustments in survey techniques. However, as I mentioned above, the low response rate problem does give me pause.
Voter ID/voter suppression. For the long term this is a serious issue. It matters the most in very close races. In the 2012 Presidential race it may not be a factor, mainly because Pennsylvania is not in play. But for State/district races, and as a fundamental matter of our democracy, it is a serious issue.
Campaign spending. This is the first year in which the effects of Citizens United will be felt strongly. Campaign spending does not affect poll accuracy, but the polls should be able to measure the effects. I believe it will make the largest difference in Senate, House, and state-level races, where campaigns cost less…for now. There is a reason why Karl Rove’s Crossroads GPS is working at these lower levels – leverage. I believe this is the single most important new issue to watch in 2012.
Technical issues with poll aggregation. Amitabh Lath wanted to know: if we corrected for individual pollster biases, how would we know where to set the scale? Yes, somehow one must anchor this bias, which I called d in the comment thread. One possibility would be to assume that the median of d is zero. The justification is that on Election Day in 2004 and 2008, the Meta-analysis landed right on the EV result. If one wanted to get fancy, one could create a tool to allow the reader to anchor d using his/her favorite pollster. Even Rasmussen fans would be pleased.
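To make the anchoring idea concrete, here is a toy MATLAB sketch. The pollster names and offsets are invented for illustration; only the anchoring step is the point.

```matlab
% Toy sketch of the anchoring problem: pollster offsets d are only defined
% relative to one another, so an overall constant must be chosen.
% All names and numbers below are invented.
names = {'PollsterA', 'PollsterB', 'PollsterC', 'PollsterD'};
d_raw = [+1.5, -0.5, +1.0, -2.0];   % offsets relative to an arbitrary reference

% Option 1: assume the median pollster is unbiased (median of d = 0).
d_anchored = d_raw - median(d_raw);

% Option 2: let the reader anchor on his/her favorite pollster instead.
favorite   = 'PollsterD';
d_personal = d_raw - d_raw(strcmp(names, favorite));

disp(d_anchored)    % [ 1.25  -0.75   0.75  -2.25]
disp(d_personal)    % [ 3.50   1.50   3.00   0.00]
```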
The question came up of why one does numerical simulations. These are good for evaluating a model with lots of “nuisance parameters” (what an excellent name for them!). They are not a means of introducing randomness. However, in my view, when one can, it is better to understand a model well enough to calculate the exact distribution of all possible outcomes. For example, I see no reason for the FiveThirtyEight “Now-cast” to be calculated by simulation. For that, all 2.3 quadrillion electoral possibilities are contained (as they are for my calculation) in this bit of MATLAB haiku: dist=[1] / for i=1:51 / dist=conv(dist, [p(i) 0 0 … 1-p(i)]) / end. For the compulsive, add a few lines for Maine and Nebraska.
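For readers who want to try it, here is a spelled-out, runnable version of the haiku. The win probabilities p below are placeholders; the real ones come from the state-by-state polling snapshot. The electoral-vote counts are the 2012 apportionment, with Maine and Nebraska treated as winner-take-all for simplicity.

```matlab
% Expanded, runnable version of the haiku. The win probabilities p are
% placeholders; in practice they come from the state polling snapshot.
% Electoral votes per jurisdiction (50 states + DC, alphabetical, 2012):
ev = [9 3 11 6 55 9 7 3 3 29 16 4 4 20 11 6 6 8 8 4 ...
      10 11 16 10 6 10 3 5 6 4 14 5 29 15 3 18 7 7 20 4 ...
      9 3 11 38 6 3 13 12 5 10 3];          % sums to 538
p  = rand(1, 51);                            % placeholder P(win) per state

dist = 1;                                    % identity element for convolution
for i = 1:51
    % State i contributes ev(i) electoral votes with probability p(i),
    % and 0 votes with probability 1-p(i).
    dist = conv(dist, [p(i), zeros(1, ev(i)-1), 1-p(i)]);
end

% dist(k) is the probability of winning exactly 538-(k-1) electoral votes,
% so the probability of reaching 270 is the sum over the first 269 entries.
p_win = sum(dist(1:269));
```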
If anyone is interested, here is a new comment thread below.
I will start by thanking everyone who contributed to the last thread. Both returning and new commenters drove the discussion in interesting ways. Appreciation to Amitabh Lath, Olav Grinde, Perchmerle, LondonYoung, Bill N, Wheeler’s Cat, Rachel Findley, Matt McIrvin, and many more.
There has been some discussion of how the electoral landscape has changed demographically. Fewer undecideds, oversampling and undersampling, the rise of majority-minority demographics… these are evolutionary trends that change over time.
But has anyone addressed the evolutionary memetic changes in the electorate wrought by social media and widespread internet access?
Nate had a post about how a gaffe that sticks can cost the candidate 10 points. Consider Kerry’s windsurfer shot, Palin’s Russia statement (yeah, that was Tina Fey, but interestingly that is the one that “stuck” with the voting public). Dukakis in the tank.
Most recently, I think Eastwood’s “Invisible Obama” moment blunted the effectiveness of the GOP convention.
I think memetic penetration and transmission have changed since 2008 in fairly significant ways.
Here the commentariat has touched on the effect of Jon Stewart, Stephen Colbert, and Bill Maher.
In September, SNL will have two primetime shows on NBC featuring election coverage.
And I think Twitter, comedy, and the SNL skits are asymmetrical… it’s easier to mock the right? Or is that my organic liberal bias talking, like Dr. Wang spoke of?
Nate had a post recently on using television coverage to predict convention bounces…are Nielsen ratings predictive in any sense?
I view those memorable events as ways for a race to reach its more natural equilibrium. In a deep sense, the existence of such an equilibrium is what motivates the political scientists. A gaffe unsticks the ball of opinion to roll to a lower, more stable point. The real mark of political genius is moving the race away from equilibrium. In this respect, the first week after the Palin VP nomination stands alone as a brilliant, desperate move.
As for predicting convention bounces…is this really interesting? They’re getting smaller. See Andrew Sullivan quoting Nate Cohn. I think the post-GOP bounce will be less than 1 point, below the resolution of most methods (though it might be visible in the Meta-margin). Does one need a fancy-schmancy model to guess that?
“I view those memorable events as ways for a race to reach its more natural equilibrium. In a deep sense, the existence of such an equilibrium is what motivates the political scientists.”
Again you are implying symmetry. Even symmetry in electoral numbers is not really true anymore. The existence of an equilibrium point requires symmetry I think.
We are not the same. And increasingly, there are more of us.
When you speak of the “equilibrium point,” given the context of your other comments, what comes to mind is a similarity between your notion of the race moving towards its “equilibrium point” and regression towards the mean (or in this case, median). So if I understand you correctly, I might interpret your ideas as saying, in so many words, that there is a true median level in the EV estimator, and the observed median varies around this true median (much like the notion of a true score in classical measurement theory) due to noise of various types. While events such as a convention or choice of running mate may pull the EV median away from the true median (or in your words, the equilibrium point), it eventually moves back towards the true median (equilibrium point).
How close am I?
More or less, though perhaps I should reconsider my use of the word equilibrium. There is noise, but it is in the form of actual opinion. So it’s more like an intrinsic point or a natural point. Searching for the right word.
This seems a most interesting development:
“New York Attorney General Subpoenas Bain Documents”
http://www.politico.com/blogs/burns-haberman/2012/09/schneiderman-subpoenas-bain-documents-134093.html
Dr Wang, this is a most fascinating discussion on polling science, uncertainty, and intended/unintended bias.
I have a request: a discussion of the robustness of the electoral process. And by that I mean quite simply the opportunities (and evidence) for one side or the other compromising the integrity of the election in their favor.
This has been touched on, but seems to me worthy of its very own discussion.
To the degree that the integrity of the election is undermined, any predictions — on this site or elsewhere — risk missing the mark.
Olav that is a GREAT idea!
I will postulate a hypoth…..the robustness of the electoral process is currently being undermined by the “freed” market in information.
And the regulatory capture of the media.
I think that if there’s any state where vote suppression is actually going to swing the presidential vote this year, it’s not Pennsylvania but Florida (just like in 2000).
The presidential polls there are knife-edge, and Democratic voter-registration drives have already been essentially shut down in the state, so the law’s probably already had an effect regardless of how much of it remains in effect on Election Day. The damage may be done. The suppression is more egregious in Pennsylvania, but Pennsylvania is leaning far bluer this year. Ohio is somewhere in between.
That historical convention-bounce chart has some surprises in it. It amazes me that the conventions most often described as disasters for the party (Democrats in 1968 and 1980, Republicans in 1992) all had positive bounces associated with them, in some cases respectable by today’s standards, though the other party did as well or much better.
Good point about Florida.
Re bounces, like I said they seem transient.
A response rate of 9%? Holy moly indeed. How do you hang any sort of conclusion on top of that kind of response bias?
You encapsulate my concerns about poll analysis quite well. Response rates are low and probably biased. Weighting in subgroups runs aground on minuscule numbers (if N=300, as in some polls, then subgroups might be in single digits).
Given this, I worry that any statement other than the bottom line estimator is fraught with uncertainty. We look at a line jumping around and a-posteriori assign labels like “Paul Ryan bump”.
Same thing with the whole question of polarization. Maybe it’s just that the 9% answering their phones is polarized, and the 89% who don’t want to be bothered now have better tools to screen pollsters.
To answer your question from yesterday, my fear with the estimator is basically the black swan. It’s possible that individual uncertainties have large non-Gaussian tails that are not being sampled due to low statistics or phone-technology systematics.
Thus the probability of a correlated event of large magnitude (something that say, turns Ohio, Michigan, Wisconsin red, or Missouri and Indiana blue) is hugely underestimated.
Sorry, 91% non-responders. I just went through the two dozen or so listings on my landline’s caller-id history. One was from the dentist, confirming an appointment. The rest said “unknown” or had strange names like “Matrix technologies”. We picked up the dentist call, ignored the rest. Might have been pollsters for all we know.
Maybe we should get T-shirts: We are the 91%.
@Amitabh
Thus the probability of a correlated event of large magnitude (something that say, turns Ohio, Michigan, Wisconsin red, or Missouri and Indiana blue) is hugely underestimated.
Not by President Obama.
I think Bibi was planning to hit Qom during the joint exercises at the end of October, when he would have 5,000 American troops as hostages.
Obama has now downscaled the exercise to warn Bibi off. And Bibi is furious.
http://www.jpost.com/DiplomacyAndPolitics/Article.aspx?id=283353
I think black swan events would be another worthy post. What would possible black swans look like in this election year?
@ Matt
If this Marist poll is accurate, Obama has a lock on Florida.
But then again, Marist polled the cell demos.
http://maristpoll.marist.edu/index.php?s=cell+phones
@Amitabh Lath: That’s an excellent point!
The sizable group of voters that are savvy enough to screen out pollsters — what do we know about their demographics and voting preferences? What can we know or surmise…?
Seems to me that is a huge unknown, a separate factor from the “cell phone only” issue possibly skewing the polls. Any thoughts on this?
One more thought: If someone was really unhappy with both major candidates, couldn’t they start a social media campaign urging respondents to lie to pollsters about their preferences? Perhaps with a particular skew… Given the low sampling rates, it wouldn’t really take much to throw the polls way off, would it?
@Wheeler’s Cat: Apologies for misreading your post yesterday.
I reflexively hang up on pollsters these days just because I assume that the vast majority of them aren’t really pollsters, but either scammers of some sort or push-pollers.
You didn’t actually address conspiracy theories. Can you address bias in the colloquial sense: is Rasmussen, or any other pollster, consciously and intentionally manipulating results to drive the narrative?
I really appreciate this site, and happily it doesn’t appear to have been discovered by the usual suspects yet (and I say this although some of it is rather arcane). While I can understand and accept the broad conclusions, there doesn’t seem to be much discussion of turnout. In summary, as you have pointed out, we have a highly polarised electorate, which means the base vote for both parties is high, although the Democrats might be slightly higher. However, the Democratic ceiling is 10-15 million votes higher than the Republican max of around 60 million. I suspect the entire voter suppression scare is somewhat overblown. Sure, the Republicans are trying to suppress, but the third law of physics is likely to produce more motivation, so it is probably a wash. And if turnout is at 2008 levels, which I suspect it will be (purely worthless speculation of course), then not only does Obama win comfortably, but the Dems keep the Senate and make gains in the House, unless there is a lot of split-ticketing, which doesn’t strike me as plausible.
Perhaps we need some momentary comic relief?
NEW POLL: ROMNEY TRAILS EMPTY CHAIR
NEW YORK (The New Yorker) — In a development that the Republican campaign is sure to find troubling, a new poll of likely voters showed nominee Mitt Romney trailing badly behind the empty chair Clint Eastwood talked to onstage at the Republican National Convention in Tampa.
When asked the question, “Who cares more about people like me?” 37 % of voters responded “Mitt Romney”, while 52 % said “Chair”.
The poll numbers for the chair represent the largest post-convention bounce for an inanimate object since the nomination of Michael Dukakis, in 1988.
http://www.newyorker.com/online/blogs/borowitzreport/2012/08/poll-romney-trails-empty-chair.html
To take the bias out of the 9%, I presume the pollsters bin their data and apply weights. You can bin by age, gender, race, education, or heck, zodiacal sign if you want. The weights presumably come from some model of the parent sample.
If this is done properly then it doesn’t matter that mostly retired folks or unemployed or graduate students are answering the pollster phone calls.
But, who knows how to do it properly? After all, you are trying to correct for a sample of people who (by definition) you cannot contact; you don’t know what they are like. Your model of these non-responders (eg: those 25 yr old Virgos who don’t pick up) determines your final answer.
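Here is a toy version of what I mean by binning and weighting; every number below is invented, and there are only two bins for simplicity.

```matlab
% Toy sketch of bin-and-weight: every number here is invented.
% Suppose the raw sample over-represents one bin relative to a model
% of the electorate.
sample_share = [0.60, 0.40];   % share of respondents in each bin
target_share = [0.30, 0.70];   % assumed share of actual voters in each bin
support      = [0.48, 0.55];   % candidate support measured within each bin

w = target_share ./ sample_share;              % per-bin weights

raw      = sum(sample_share .* support);                                 % 0.508
weighted = sum(sample_share .* w .* support) / sum(sample_share .* w);   % 0.529

% The weighted number is only as good as target_share, i.e. the model of
% the people who never answer the phone.
```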
Any given model is horrible (lots of commenters on this site point out that the Rasmussen model seems particularly faulty). But (hopefully) through the magic of the Central Limit Theorem, a compilation like Sam’s should arrive at a well-defined central value with Gaussian uncertainties.
(Yes! Even if the inputs have uncertainties that are far from normally distributed)
Asking any more of the compilation, like trying to decipher by the time-dependence what people think of a VP choice or gasoline prices or whatever, would make me nervous.
Re Amitabh:
On the possibility of non-Gaussian tails that underestimate the chance of large-correlated events that shift multiple states:
Nate’s model explicitly recognizes this problem. One of the advantages of Nate’s model is that it appears to be based on a larger sample of historical election data. Please correct me if I am wrong, Dr. Wang, but it seems that this model has only been run with regard to the 04 and 08 elections. I know that Nate has built different assumptions into his model based on polling data from elections as far back as the 60s. I would love to plug the polling data from these previous elections into this model in order to test its accuracy against the final results.
I don’t know exactly how Nate factors this tendency for large-scale correlated shifts into his model, but he has discussed the issue before, and he has said that he does attempt to account for such uncertainty. This is one of the reasons Nate’s model has so much more uncertainty than Dr. Wang’s model.
This is an interesting point to me. If you compare the estimated EV totals between these models, you will see that they are almost the same. However, Nate incorporates considerably more uncertainty. His probability of an Obama EV victory is in the low-70-percent range, while Dr. Wang places it in the high-80-percent range. If you compare the EV distribution range in both models, you’ll see that it is MUCH wider in Nate’s model. 538 projects a non-zero possibility for Obama’s EV total to wind up below 150 or in excess of 400. This is a much wider range of possible outcomes than is found in Dr. Wang’s model.
Nate’s model incorporates a huge number of additional variables beyond the simple polling results. One thing he has added this year is economic data. Prior 538 models did not incorporate economic data. I think this is a very big mistake on Nate’s part, because it becomes hard to distinguish between variables. In other words, the effects of economic performance that we care about for election forecasting should already be baked into the polls. For Nate to add economic factors in addition to the polls, it seems that he is counting some of these factors more than once. However, Nate’s model also makes sophisticated adjustments based on polling house effects and other factors that may be more relevant.
In sum, both models are looking at the same underlying polling data, and point to a roughly 2 point Obama popular vote margin, which translates to a median outcome of roughly 300 EV for Obama. 538 incorporates more uncertainty, but possibly suffers from over-sampling certain variables, especially economic variables in the 2012 version. Dr. Wang’s model is much simpler and more elegant, and is notable for its much lower level of uncertainty. But it’s quite possible that this model underestimates uncertainty.
IMO, the best way to test this would be to dig up historical polling data dating back at least to the 1968 election, and run the model for these previous elections, testing its predicted result against the actual final EV totals. If Dr. Wang’s model continues to be very accurate when applied to these past elections with known results, I think we can be more confident in the low levels of uncertainty incorporated by this model.
Rasmussen could well be correct in its model. No way to know. Best to lump them all together as Sam does.
I would be loath to add any bias adjustment unless I could do it blind (no knowledge of the pollster identity or their published results). Otherwise human nature being what it is, I might tweak things to get my preferred candidate the best score.
Also, adding a bias might move the central value, but would it also blow up the error bands? You would have to add the uncertainty in the bias variable in quadrature, no?
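A back-of-the-envelope version of the quadrature point, with invented numbers:

```matlab
% Adding an uncertain bias term in quadrature widens the band.
% Numbers are invented for illustration.
sigma_compilation = 0.8;   % uncertainty of the aggregate margin, pct points
sigma_bias        = 0.5;   % uncertainty of the overall anchoring constant d
sigma_total = sqrt(sigma_compilation^2 + sigma_bias^2)   % ~0.94 pct points
```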
As to Sam’s question (also badni above), how would Rasmussen (or any other pollster) benefit from cooking the books? They already pay a big price for being a few points off the mean. Influential bloggers like Andrew Sullivan leave Rasmussen off when they display charts. But do their results change anyone’s minds about who they are voting for, or if they are voting at all?
Tim – no, you are not correct about my calculation, though many of your other statements are true. The short answer is that as far as I can tell, I have accounted for uncertainty in full. Specifically:
(1) The snapshot is precise, and ought to be. It landed on the final EV outcome in 2004 and 2008. It is mostly optimized, though the uncertainty band ought to be smaller, as I have detailed in the previous post’s comment thread.
(2) For future prediction, I account for small correlated shifts using detailed time-histories from 2004 and 2008 to calculate the SD of these shifts, as I detailed several weeks ago. Please read that carefully.
(3) The possibility of an extreme shift is entirely accounted for using a long-tailed distribution (t-distribution, 3 degrees of freedom). This particular tail was chosen by inspecting time series for several decades. The cardinal advantage of this approach is that you can work out for yourself by hand whether it is OK. I encourage you to do so. For a few examples of past races, read this. There are no specific mechanisms posited. This will be impossible for decades, perhaps never.
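To make point (3) concrete, here is a toy comparison of thin versus long tails. The Meta-margin and drift numbers are invented, and tcdf/normcdf require the Statistics Toolbox.

```matlab
% Sketch: the same lead under two tail assumptions. Numbers are invented;
% tcdf/normcdf require the MATLAB Statistics Toolbox.
margin = 2.0;    % hypothetical Meta-margin in the leader's favor, pct points
drift  = 1.5;    % hypothetical SD of movement between now and Election Day

p_gauss = normcdf(margin / drift);   % thin (Gaussian) tails
p_t3    = tcdf(margin / drift, 3);   % long tails: t-distribution, 3 d.o.f.

% The t(3) tails concede more probability to a large excursion, so the
% same lead produces a somewhat smaller win probability than the Gaussian.
```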
The main problem with Nate Silver’s approach is that he made many detailed assumptions of unknown truth. The cumulative effect is to add uncertainty in an uncontrolled manner, leading to obvious errors like the <150 EV result you cite. However, it seems gratuitous to pick that calculation apart separately from political science-based models, which are research tools, not "true" models. That is a subject I addressed here.
My basic take is that Nate Silver is an excellent resource for color commentary. He also likes to write about individual data points, and has good intuitions. But the modeling is not an example to look up to.
Amitabh and badni – The obvious answer is that Rasmussen is part of Fox/Republican messaging. Robopolling does not cost much. Presumably this messaging apparatus pays for the polls. Morale is never more important than when your side is a little bit behind. In this case, Rasmussen plays a central role.
And yes, his LV model might be correct on occasion. It is bad statistical procedure to simply disregard the data.
Tim, yes Nate Silver probably does correct for correlated multi-state effects. He corrects for a lot of things. He has pollster-specific corrections. He corrects for convention bounces (in the Nov 6-cast). He weights individual pollsters for each state and displays the weights in cellphone-like bars to the side. All very cool.
But.
You cannot allow for a black swan. You have either seen one, or you have not.
What Nate may do is add terms to his equation on a county/state/region/nation-wide basis. He can tweak these coefficients according to some model of voter behavior. Then run his pseudoexperiments and get his distribution. Of course his distribution is going to be wider than Sam’s; he has so many more variables to scan over.
But the polls that go into this effort (Silver’s and Wang’s) are highly imperfect. You cannot tweak them into giving you more information. What you can hope for is that integrating over multiple polls in multiple states cancels out multiple ills.
That’s why I for one only look at the integral above 270. Currently that integral for the 538 Now-cast is 73%. I don’t recall Sam’s integral of the p.d.f. above 270, but it is probably in the mid-80s.
I would call that agreement within the uncertainties.
PS: Don’t get me wrong, I admire what Nate Silver has done. But I prefer Sam’s approach because it is simpler, and the data does not call for anything more complicated.
Dr. Wang,
Thank you very much for your response. I had already understood points 1 and 2, but point 3 was new to me (I’ve been reading posts here for a few weeks, although this was my first comment). I didn’t realize your model was already using fat-tail distributions. With this knowledge, I am confident that you are not underestimating uncertainty. This is very good news to me as a liberal progressive.
It is very impressive that the snapshot landed exactly on the final EV count in 04 and 08. I would still like to see if this would have held true in previous elections through the decades. I realize this is time intensive, and I’m not sure where the polling data can be found, but it would still be a nice test of the model, even if it doesn’t change the current predictions.
Thank you, Tim. In 2008 the method gave a 1EV error due to NE. The actual uncertainty is higher, probably +/- 5 EV depending on what is close then.
I like what Nate does in many respects. He makes this kind of thing fun and popular. In 2004 it was more cult-y.
Amitabh,
Yes, I prefer Dr. Wang’s model over 538 for the same reasons, especially now that I know that it uses a fat-tailed distribution to account for relative likelihood of a black swan event. That was the source of the misunderstanding in my previous post.
I think the idea that there is a true median point is interesting and probably very accurate. I would like to add that perhaps this median can change given events that impact the electorate. The economic crisis in 2008, for example, probably shifted the median point in Obama’s favor, which is why he ended up with a larger margin of victory than was predicted most of the summer and even early fall.
Re: cooking the books, I think the argument is that it is beneficial to bias in favor of your preferred candidate throughout the campaign. What a biased pollster might do, however, is remove the bias toward the end. It’s the final result that matters in the final judgement and by that point the pollster can’t do much to influence the election anyway.
@Matt: It would be very interesting to analyze the “movement” in the Rasmussen polls in the last 4–6 weeks prior to the election, perhaps compared to other polls or the median/mean of these.
Is anyone aware of this having been done?
Is the study of polls/pollsters, and their roles in the elections, part of present Political Science or Statistics courses at American universities?
Matt on the “true” median.
I agree totally. I have actually suggested to Nate a number of times that he use Wilcoxon rank-sum to find the true median of “house effect”.
It’s empirically obvious that Rasmussen does exactly what you suggest, and Nate and Mark Blumenthal have written posts on it.
The biased pollsters remove the bias towards the end so they can keep credibility. But in Rasmussen’s case his voter model (which is secret btw) led him to whiff in Colorado and Nevada in 2010. In Colorado he predicted Buck and Tancredo and in Nevada he missed the Senate by 10 points.
Nate assumes a Gaussian distribution of pollsters. So he believes “averaging” cancels out red/blue house effects. Dr. Wang does actually entertain the idea of political asymmetry, but believes in the CLT and weak convergence in the long run.
I blame his neuroscience background for the heresy.
The bounce-or-no-bounce movement we see in the polls right now will make an interesting field study.
http://fivethirtyeight.blogs.nytimes.com/2012/09/03/sept-2-split-verdict-in-polls-on-romney-convention-bounce/
Nate does not mention that PPP is also showing zero bounce.
Olav, I did notice that Rasmussen went into plus-Obama territory right before the Tampa convention. This I found curious as they had been consistently +1 Romney forever.
Then after Tampa they quickly (too quickly perhaps) went into +4 Romney.
Could this have been a ploy to feed the “convention bounce” storyline? If so, they weren’t +Obama for nearly long enough pre-convention.
Or they may have changed their LV filter as we head into September. But it smells bad.
@Amitabh
No, Rasmussen didn’t change their LV filter. Rasmussen is a robopoll house.
Let me ask you a question….how do cell-onlies become LVs if they never get asked if they are likely to vote?
Dr. Wang thinks this is a trivial problem, because you can weight the samples to give cell-onlies correct representation.
But how do you know if voters that don’t get polled are LVs?
If we could always adjust Ras polls D+2, or PPP R+2, that would be one thing. But if Ras is showing relatively strong D numbers in, for example, Wyoming, but strong R in Florida, so that on average his state results look less biased, or if he boosts Obama for a day so R’s convention bounce looks bigger, that is categorically different from an imperfect LV model. How do we figure out which it is?
Olav – Some time ago, Charles Franklin led the way on calculating pollster house effects. Perhaps poke around his current site and Pollster.com.
Badni – When looking for these house effects, it seems to me that one must, at baseline, assume uniform methods across states and honest polling. Digging deeper has a certain element of going full-tinfoil-hat. If a pollster switches methods mid-season, that certainly complicates things.
If you want to play with house-effect calculations, here is an approach that is easy and gives a rapid sense of how large they are. Compile a long list of all polls occurring at close intervals in time, e.g. 7/22-24 PPP and 7/24-26 Rasmussen in Ohio. These can be found at RCP in convenient, tabular form. Calculate the differences between consecutive polls, then sort the differences by pollster. The average difference for a particular pollster gives a relative house effect. At the end, take all the house effects and slide them all up (or down) by a constant amount to make the median zero.
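Here is one literal reading of that recipe as a MATLAB sketch. The pollster names and margins are made up, and each consecutive difference is attributed to the later poll’s house, which is an assumption about the bookkeeping rather than a prescription.

```matlab
% Sketch of the recipe above. Pollster names and margins are made up.
% Assume the polls are sorted by field date within a single state, and
% attribute each consecutive difference to the later poll's house.
pollsters = {'PollsterA','PollsterB','PollsterA','PollsterC','PollsterB'};
margin    = [ +2,         -1,         +3,          0,         -2 ];  % pct points

d   = diff(margin);        % poll-to-poll differences
who = pollsters(2:end);    % house credited with each difference

names = unique(who);
house = zeros(size(names));
for k = 1:numel(names)
    house(k) = mean(d(strcmp(who, names{k})));   % average difference per house
end

% Slide everything by a constant so the median house effect is zero.
house = house - median(house);
for k = 1:numel(names)
    fprintf('%s: %+.1f\n', names{k}, house(k));
end
```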
@Sam, I appreciate that Charles Franklin and Nate Silver have calculated pollster house effects, i.e. biases. But my question concerns the investigation of this bias changing over time — especially shortly before an election.
Now, you may consider this to be “going full-tinfoil-hat” (I disagree), but I honestly wonder whether there has been any research into this.
Olav – Well, there is enough statistical power to test that idea if you combine states. It is easier to assume that pollsters treat different states the same way.
Slightly OT…. This is a guess, but I’d be interested in thoughts:
The reasons we don’t see much of a GOP bounce are three-fold, imho: 1) not many people watched, hurting their chance to reach persuadables; 2) it wasn’t on message on Thursday: Eastwood went viral and crowded out any messaging to people who didn’t watch the convention, so they never focused on Romney’s speech; 3) the GOP electorate was already fairly fired up before the convention because of an animus toward Obama, so their voters were already likely to be “likely voters” in a sample. I’m not sure if that’s true of D voters before the DNC.
Is the same likely to be true of the DNC? Yes, of course, it will be a smaller audience than 08 (Wed night football alone will do that). But Bill Clinton and Obama *are* better draws than what the GOP had in Tampa and are more likely to be focused on persuadables than any of the GOP speakers, including Romney/Ryan, were.
Further (and this might be an ill-informed guess), aren’t D base voters/D-leaning persuadables less likely to be “likely” voters until it gets closer to the election? In other words, they’re much less likely to be “likely” voters pre-DNC (and then pushed to “likely” by a convention) than a GOP base or persuadable voter?
I’m not saying there is any chance of a 10 point bounce (those days are gone — America doesn’t watch one thing on one night that much anymore, particularly a political speech), but a larger than 1-2 point convention bounce — say 3-5 points — for the DNC might be possible, given the speaker lineup (just Bill Clinton and Obama) and the fact that it isn’t as aimed at the already-converted.
Again, this is a theory and it may not match up with what will happen. Interested in your thoughts.
Chris R – Bounces are smaller than they used to be. To my eyes the GOP convention did not appear to be an inspiration. Lots of bad press. And that chair. That chair. I’m going to guess a 0-1 point movement…in either direction. See my post tomorrow.
As for the Democratic convention, I think it won’t be more than 4 points. Movement this season has been in a narrow range. Voters are polarized. However, a lasting change will tilt the Senate and House. I am not very interested in the Presidential race, which I view as mostly determined.
An excellent speaker for the Democratic National Convention would be Robert Gates, who served as post-9/11 Secretary of Defense under President Obama as well as Bush.
Gates would be a very powerful voice to underscore that President Obama and his team have kept America safe. On Obama’s watch, the USA finally brought justice to Usamah bin Lādin, effectively neutralized Al-Qa’ida, and helped usher in democracy in numerous countries that were formerly dictatorships.
Oh, and Obama brought the troops home from Iraq, as promised!
With all the Faux News noise and GOP propaganda, people tend to forget this. It’s time to remind them.
Moreover, Robert Gates is a Republican — and a far better choice than Eastwood.
House races: OK, here’s a question for you, Sam. When’s the last time an incumbent President had coattails, where the incumbent WH party drove a significant alteration in the House?
Off memory, I don’t think it happened in 2004 and 1996 (or 1984, for that matter). *IF* the early projections you posted on the House are true, this might be a historical event in that regard.
And there might be a good reason for it (and why the media falsely refers to each Congressional election as a “wave” election, imho): the demographics of a midterm election (older, whiter, favoring the party out of power) lead to a skewed result compared to the demographics of a Presidential election (in which younger voters, more Latinos, and more AAs vote).
Also, the chair? I wonder if more people heard ridicule of the chair than Eastwood actually talking to the chair or Romney’s speech. Actually, I don’t wonder. There’s no question that’s true.