Princeton Election Consortium

A first draft of electoral history. Since 2004

How Should Volatility Be Defined?

October 9th, 2016, 9:00am by Sam Wang

Several of you point out that my analysis of Presidential races 1952-2016 in The American Prospect appears to conflict with an assertion by Nate Silver about this year’s Presidential race. Yesterday he discussed why he thinks 2016 is a year of high “volatility.” In the piece he says that he is preparing a more detailed analysis of the topic. In anticipation of that, now seems like a good time to describe my own ideas of how volatility ought to be analyzed. If we can agree on our terms, we can all avoid confusion – and maybe some pitfalls.

You may regard the topic as dry. However, volatility is absolutely central to understanding what is happening in our age of extreme polarization.

First, let me list qualities that I do not regard as volatility, properly speaking.

1) I do not mean emotional volatility. The entire political season has been emotionally fraught. A wealthy reality TV star, Donald Trump, has executed a hostile takeover of the Republican Party. He has used the Presidential nomination as a platform to appeal to Republican voters who have, in many ways, been kept away from the party decisionmaking process. He is unlike any nominee in history. And oh yeah, he exemplifies racism, misogyny, and has been caught on tape bragging about sexual assault.

It’s a bit hard to tell here, but I do get the sense that Silver may have inadvertently let this colloquial interpretation get into his analysis. He says: “how I see [this general election] personally — is that it’s characterized by high volatility and high uncertainty.” I think it is a mistake to mix our personal judgments of what a weird year it is with quantitative measurements of opinion. This is definitely not Six Stages of Doom territory – but some caution is in order. I am in total agreement with the idea that 2016 is freaky. But let’s analyze the data, not the drama – and keep them separate.

2) Silver introduces a novel definition of volatility: responsiveness to events. This is not commonly accepted usage! In finance, it is defined the same way as variability: volatility is the degree of variation over time, as measured by the standard deviation. It’s a simple concept, and it is what I plotted in The American Prospect and here at the Princeton Election Consortium.

The classical definition of volatility does not get into the question of what triggered a particular change. There is a good reason for this: we don’t always know what the causes are! It is simpler to just measure the standard deviation, and use it as a description.

(As an aside, I should point out that the standard deviation needs to be calculated in a way that excludes contributions unrelated to true changes in the driving variable, public opinion. For national polls I used the Huffington Post average, and sampled every two weeks. For state polls I used the Meta-Margin. These procedures minimize the impact of pollster variability on the individual data points.)
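A minimal sketch of this sampling procedure (not the actual PEC code; the daily series and the two-week step below are illustrative):

```python
# Illustrative only: estimate volatility as the standard deviation of a
# polling-average series sampled at two-week intervals, which damps
# pollster-to-pollster noise in the individual data points.
import statistics

def volatility(margins, step=14):
    """SD of a daily margin series sampled every `step` days."""
    return statistics.stdev(margins[::step])

# Hypothetical daily Clinton-minus-Trump margins (percentage points):
daily = [4 + 0.01 * d for d in range(140)]  # a slow 1.4-point drift
biweekly_sd = volatility(daily)
```

Sampling every two weeks means each retained data point reflects a mostly independent set of polls, so house effects do not get counted over and over.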

I question the premise that opinion has been particularly sensitive to events. The Meta-Analysis, which shows the impacts of events more clearly than FiveThirtyEight’s approach, does not show many features compared with the 2004-2012 elections:

In addition, there simply hasn’t been much movement in national polls: from June 2016 to now, the Clinton-over-Trump margin has only varied between 2 and 8 percentage points, a range of 6 points. This is not all that wide, as I have discussed. As a contrast, look at 1964, 1976, or 1980, all of which showed a range of 30-40 percentage points, as defined by two times the variation in Democratic vote share:

In fact, I would say that whatever our emotional response to this year’s events, the most remarkable aspect of public opinion is how little it is changing. That is my central argument for stasis in modern times. The craziness of the post-Kennedy-assassination Johnson-vs.-Goldwater race (1964), the post-Watergate rise of Carter (1976), and the Iranian hostage crisis (1980) all had far greater effects on public opinion. A long view is necessary.

3) It is also important to exclude volatility during the primary season. If you look at Silver’s graph, he seems to be comparing March-September 2016 with the same period in 2012. That is not a fair comparison. In 2012, President Obama was the unopposed leader of the Democratic Party, while in 2016, Hillary Clinton was locked in battle with Bernie Sanders for the Democratic nomination until June. And indeed, the first half of the year showed more variation in the Clinton-versus-Trump margin than in Obama-versus-Romney.

By contrast, if you look at the FiveThirtyEight graph for only June to October/November, the range of national poll margin variation is quite similar. I did my best to extract the data points to calculate the standard deviation, and I get SD=1.3% for 2012 and SD=1.8% for 2016. These values are so small that they contain a fair bit of polling noise. The difference of half a percentage point is not significant by the F-test for equality of variances, and the difference is even smaller in my own dataset, so it certainly depends on the smoothing procedure. The Meta-Margin, which has less noise than national polls, indicates that 2016 was the less volatile year (as is apparent in the first two graphs in this post).
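For reference, the variance-ratio (F) test here reduces to comparing the squared ratio of the two standard deviations against a critical value. The degrees of freedom below are an assumption, corresponding to roughly ten biweekly snapshots per general-election season:

```python
# Hedged sketch of the F-test for equality of variances.  The critical
# value assumes ~10 samples per year (9 and 9 degrees of freedom).
def f_ratio(sd_big, sd_small):
    """F statistic: ratio of the larger variance to the smaller."""
    return (sd_big / sd_small) ** 2

F = f_ratio(1.8, 1.3)   # 2016 vs. 2012 national-margin SDs from the text
F_CRIT_9_9 = 4.03       # two-sided 5% critical value for (9, 9) df
not_significant = F < F_CRIT_9_9
```

The ratio comes out near 1.9, well below the critical value, consistent with the claim that the half-point difference in SD is not significant.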

Now that I think of it, March-May could be contributing to why Silver is claiming that 2016 is more volatile than 2012. If so, I think this is a significant analysis error. If he’s going to include the primary season, at a minimum he should compare 2016 with other open-election years such as 2000 or 2008. Alternatively, I recommend instead that he focus (as I have) on the general-election period, which I take as starting on June 1. This allows a fair comparison of all election years.

Once we have all of this sorted out, I think it would be excellent to see his analysis extended to pre-1996 data. He writes “The volatility in the polls in 2016 is pretty average by historical standards.” However, this is not true, unless by “history” he means “2004-2012.” He has an extensive database over there, and I would love to know how he sees the long-term trends that I have written about.

Tags: 2012 Election · 2016 Election

55 Comments

  • DrJoe

    Dear Prof. Wang,
    As an academic, I’m a supporter of this site. As a baseball fan, I am also sympathetic to the efforts and arguments of Nate Silver. And as a seismologist, I’m one of those people who think and read a LOT about time series analysis. This discussion of polling volatility/variability has been percolating around in the back of my mind for the past couple of weeks. During my power nap this afternoon, it suddenly occurred to me that one possible source of the divergent conclusions about “volatility” could be the presence of a real period (or frequency) dependency in the variance of polling data. This is one of the things for which seismologists and climatologists (and others) always have to check and adjust, and for which we have developed an extensive battery of analytical tools. Do you neuroscientists and political scientists do the same? If so, have you tried applying this to your polling data to see whether or not such corrections might be significant? If not, would you be interested in checking this out sometime after November 8? I hope you don’t mind me asking. I’ve been surprised many times over the years by cross-disciplinary gaps in familiarity with specific analytical approaches.

    • Sam Wang

      Always open to new tools, though my general advice to people in any mathematically-oriented discipline is to ask whether you really think deploying a more-complex analytical tool is necessary or desirable. Often a simple calculation can foreclose the need to get fancy. It is the experimental physicist in me speaking.

      In this case, the standard deviation is low and the graphs for 2008, 2012, and 2016 look similar, so there seems not to be a reason to look further. You can also examine the peak-to-peak change throughout the general election campaign and the largest 1-week changes.

      2008, 2012, 2016:
      Standard deviation: 2.5%, 2.0%, 2.2%.
      Total swing: 12%, 7%, 12%.
      Largest one-week change: 7.6%, 4.5%, 4.3%.
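      A few lines suffice to compute all three metrics from a weekly margin series (the series below is made up for illustration):

```python
# Sketch of the three summary statistics quoted above, applied to a
# hypothetical weekly margin series.
import statistics

def summarize(weekly):
    """Return (SD, peak-to-peak swing, largest one-week change)."""
    sd = statistics.stdev(weekly)
    swing = max(weekly) - min(weekly)
    biggest = max(abs(b - a) for a, b in zip(weekly, weekly[1:]))
    return sd, swing, biggest

demo = [4, 6, 3, 5, 8, 7, 2, 5]  # hypothetical weekly margins, in points
sd, swing, biggest = summarize(demo)
```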

      Further analysis seems unnecessary.

      In this case, the deep reason for doing it appears to be “famous guy says there is a phenomenon, without evidence.” I guess you can go down this road, but it seems more useful to ask why we think this year is volatile. The answer is that the campaign is emotionally volatile. However, intuitively, do you really think there are voters who change their minds every few days, and that they do it in unison?

    • DrJoe

      Thanks for your answer. Believe it or not, I’m not just trying to find excuses for the 538 conclusion. The contrast caught my attention because both that site and yours are famous, but after that it’s just a starting point for thought experiments. A more important motivation is that, in my experience, careful analyses of even low-S/N features can detect robust signals (e.g. the seismic interferometry revolution of the past 15 years) relevant to testing causal theories (e.g. polarization or gerrymandering). Furthermore, when I “eyeball” your sequence of time series plots above, I get the impression that even if the relative short- and long-period variability changes have tracked one another closely over the past three or four presidential cycles, the ’86-’94 vs. 2000-2016 contrast looks more dramatic at long periods (caterpillar “bendiness” for non-technical readers) than short ones (caterpillar “fuzziness”). An alternative class of causal explanations that comes to mind is changes in the nature of polling: tech developments (IVR, mobile phones, the internet), but also the increasing number and variety of pollsters adjusting to new technology in different ways. Do you have any comments wrt these issues, or reading recommendations?

  • AySz88

    Unfortunately I missed this post earlier, but I have to agree with Brendan and point out that this calculation looks incorrect (even in the financial sense), due to missing all time and order dependence in the data points.

    I can think of two possible ways of doing this…

    One is to fit the parameters in a random walk model to the data (by maximizing the log likelihood). Maybe bootstrap for confidence intervals.

    Or, look at where the energy is seen in a Fourier transform. To deal with noise, either discard the high frequencies, or preferably calculate what the energy from noise (given the poll sizes and methods) is expected to be at those higher frequencies.
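    To sketch the second idea (a naive transform is fine at polling-series lengths; `numpy.fft.rfft` would be the usual tool for anything longer):

```python
# Naive DFT power spectrum of a mean-removed series; a sketch of the
# "look at where the energy is" idea, not a production implementation.
import cmath

def power_spectrum(x):
    """Power at each nonnegative frequency bin of a short series."""
    n = len(x)
    mean = sum(x) / n
    centered = [v - mean for v in x]
    spec = []
    for k in range(n // 2 + 1):
        c = sum(v * cmath.exp(-2j * cmath.pi * k * t / n)
                for t, v in enumerate(centered))
        spec.append(abs(c) ** 2 / n)
    return spec

# A week-to-week flip concentrates all energy at the highest frequency:
spec = power_spectrum([1, -1] * 4)
```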

  • Jay Vaidya

    You are right to point out that we should not be too Humpty-Dumpty-ish in re-defining “volatility”. Yet, if Humpty Dumpty has a point regarding a useful, currently unnamed concept, it may be better to give it a new name, and to continue discussion regarding that useful concept.

    Even if one does not call it “volatility”, a simple measure of the “flippiness” of the favored candidate throughout the campaign season would be convenient. You are right that this metric should not be overly subjective. Having to judge the newsworthiness of events that move the horse race polls is very subjective. A simple measure could be the coefficient of variation of the partisan lead through the campaign season, either in percent or EV. I.e.,
    Absolute value of {(SD of %partisan lead)/(mean of the %partisan lead)}
    “%partisan lead” is the same as the vertical axis in the table of graphs.
    Alternatively, one could use the partisan EV-lead.
    The coefficient of variation for EV lead is more likely to blow up if the mean is forced to be an integer: the denominator could equal zero exactly. It is unlikely that %partisan lead would ever equal exactly zero.
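    As a sketch (the lead series is hypothetical; the code comment notes the zero-mean blowup mentioned above):

```python
# Coefficient of variation of the partisan lead, as proposed above.
import statistics

def coefficient_of_variation(leads):
    """|SD / mean| of the partisan lead.  Blows up as the mean lead
    approaches zero, which is why the %-lead version is safer than an
    integer EV-lead whose mean could be exactly zero."""
    return abs(statistics.stdev(leads) / statistics.mean(leads))

cv = coefficient_of_variation([4, 6, 3, 5, 8, 7, 2, 5])  # hypothetical
```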

    In a winner-take-all election, a large absolute value of the standard deviation of the partisan lead may not be very newsworthy if the lead is so large that the 95% (or 67%) range always remains in favor of one party. The F-test for variances may not be the best test to refute Nate Silver’s interpretation of his graph conceptually.

  • Bela Lubkin

    In “Declining volatility … 2004-2016”, 2016 has a different left scale than the other years, making them harder to visually compare. 2016 would nearly touch the top if the same scale were used, but as it would still fit, that seems OK.

  • Bela Lubkin

    Does enough historical data exist — both polls conducted and poll results still obtainable — to “go back in time” and compute meta-margin graphs for earlier Presidential campaign seasons? I imagine the data gets worse each 4y you go back; is even 2004 possibly computable?

  • Lorem

    What I got from Nate’s article is that he thinks this latest story will cause polls to move detectably away from Trump. Also:

    “Let’s not naively insist on taking a wait-and-see approach regarding the events of the past 24 hours.

    But for now, I’m only willing to make a prediction about the direction (bad for Trump) — we’ll wait for the polls to measure the magnitude.”

    Profound, informative analysis all around.

    It seems he’s interested in a sort of “am I going to get a story about the event changing polling numbers?” definition of “volatility”. Which, to be fair, doesn’t sound completely uninteresting – just very difficult to formalize and assumption-dependent.

  • Matt

    per 538’s last podcast, Nate’s def of volatility (or at least uncertainty) has a lot to do w/ what he sees as an unusually high no. of Undecideds this year, which for him includes the declared 3rd-party vote (which he’s still certain will crater at the end). Not agreeing with him, just noting.

  • Jack Chin

    Dr. Wang,

    The bug promise is a nice tradition for when the cake is baked, like pumpkins at Halloween or green beer on St. Patrick’s Day. No one wants you to actually have to eat one. Just sayin’

  • Scott J. Tepper

    Using your Electoral College vote as the measure, Mrs. Clinton has led the whole time and the race has been very stable.

    Nate Silver wants there to be volatility to generate clicks. He’s pretty much sold out his site with misleading headlines and inane articles on pretty much nothing. The current one asks how many times Mr. Trump interrupted Mrs. Clinton in the last debate. And says it can’t be accurately quantified.

    Strange site for a statistician. Things have changed since I followed him in 2008.

  • Alex

    Good points raised by Dr Wang here, regarding the timeframes under consideration. However, I have a comment in line with what Brendan mentioned earlier: in the financial context, volatility is a measure of the variation of the returns (differences between consecutive observations) not the variation of the time series itself. This explains why the standard deviation of the meta-margin does not necessarily reflect the notion of “volatility” in the way most people think about it.

  • Ed Wittens Cat

    apparently part of Clinton’s strategy for tonight is to try to appeal to GOP voters in the television audience.
    unsure how this can be successful in a hyper-polarized environment.
    consensus is Trump will make another attempt to apologize and then pivot to attack Clinton.
    i think… Trump owes Ivanka a personal and separate apology… it’s unimaginable to me how she must feel after the Howard Stern tapes.
    Her father has destroyed any hope she had of having a political career, or ever being taken seriously again…
    it’s just awful, awful.

  • Mark Fishaut

    Whoa whoa whoa- is this an election contest between Wang and Silver?!

    • Sam Wang

      Actually, I am just trying to convey some concerns before he makes his analysis final. The ideal outcome is that he addresses the concerns. It’s like peer review, except not anonymous.

    • 538 Refugee

      Mark. This is just a part of the checks-and-balances system required in all healthy science-based endeavors. There is some real crap science getting into the literature because of marketing decisions and no one checking the work. This kind of thing is actually very healthy. Silver is the de facto ‘go to guy’ for the media, so his work requires scrutiny.

  • Daniel Litt

    Maybe a better term for what Silver is analyzing would be “elasticity.”

  • Brendan

    One problem I have with the standard deviation is that it doesn’t take into account the temporal dimension. Imagine a hypothetical scenario in which candidate A leads by 5 points every week for 10 weeks, and then candidate B leads by 5 points every week for 10 weeks; compare that to a scenario where candidate A and B each lead by 5 points in alternating weeks for the whole 20 weeks (i.e., trading the lead every week).

    Both scenarios have the same standard deviation, but I don’t think it’s unreasonable to say that the second is in some sense more volatile. There’s no need to try to connect it to responsiveness to particular events. It’s just that it goes up and down a lot, whereas the first one just has one big down. I’m not sure what the best way to capture this is, nor do I necessarily think it’s what’s going on in this election, but I do think there is value in considering volatility measures that somehow incorporate the temporal dimension of the data rather than treating all polling averages as static data points without a temporal ordering.
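    The two scenarios can be made concrete. The mean absolute week-to-week change is one simple statistic that separates them even though the SD cannot:

```python
# Two hypothetical 20-week margin series with identical SDs but very
# different temporal behavior.
import statistics

step = [5] * 10 + [-5] * 10   # one big reversal at mid-campaign
flip = [5, -5] * 10           # lead trades hands every week

# Identical standard deviations...
sd_step = statistics.pstdev(step)
sd_flip = statistics.pstdev(flip)

# ...but very different week-to-week movement:
def mean_abs_change(x):
    return statistics.mean(abs(b - a) for a, b in zip(x, x[1:]))
```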

    • Sam Wang

      I am okay with this idea. However, it requires some kind of spectral analysis, which is challenging to do properly. It could be done with the Meta-Margin (my preference) or some suitably de-noised version of national polls. Controls would be necessary to test whether the frequency of polling matters.

      But before we go there…go look at the national margin. There is no such fluctuation of the type you describe. The graph at FiveThirtyEight is sampled at 2-week intervals, so the idea cannot even be tested. At some level, you are defending a half-baked statement that is not yet supported by evidence.

    • Brendan

      Just to clarify: I’m not defending Nate Silver’s statement. I’m not arguing that the current race is actually volatile. I’m arguing *against* the theoretical notion that the volatility is just equivalent to the standard deviation.

    • Dave Rodland

      (Insert obligatory de-lurking commentary here. Hi Sam, nice work!)

      This is a problem with temporal resolution for your analysis, rather than standard deviation per se. That is, in Brendan’s hypothetical scenario, measuring standard deviation in polling averages over ten 2-week intervals will give very different answers for Scenario 1 vs. 2. (I leave the 1-week window as an exercise for the reader to consider.) So ultimately the question becomes: what is the appropriate temporal resolution for collecting enough data to be meaningful, and separating polling variability from directionality in public opinion? The MM may be the ideal sort of tool for resolving this.

      This question is complicated by factors ranging from time-lag between events and polling to the margin-of-error issues for individual polls, variation in likely voter screens among pollsters, in-house and methodological biases, etc. You can see this for yourself by playing with Pollster’s trendline smoothing tools – the ‘more smoothing’ option gives you the very boring story Sam has been writing all season, while the ‘less smoothing’ option gives you something I suspect looks more like the Argental One’s models. I think if you go looking for volatility, you can refine your model and redefine your terminology to make it pop out. (I deeply appreciate the Humpty Dumpty reference earlier. Well played, sir!)

    • Chaz

      I think you’d want to calculate something like the Allan Variance to distinguish between short and long term fluctuations.
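      A sketch of the non-overlapped Allan variance (half the mean squared difference of successive window averages), applied to Brendan’s flip-every-week scenario:

```python
# Hedged sketch: non-overlapped Allan variance, the metric suggested
# above for separating short- from long-term fluctuations.
def allan_variance(x, m):
    """Half the mean squared difference of successive m-point
    bin averages of the series x."""
    bins = [sum(x[i:i + m]) / m for i in range(0, len(x) - m + 1, m)]
    diffs = [(b - a) ** 2 for a, b in zip(bins, bins[1:])]
    return 0.5 * sum(diffs) / len(diffs)

flippy = [5, -5] * 10                     # lead trades hands every week
short_term = allan_variance(flippy, 1)    # large: fast fluctuation
long_term = allan_variance(flippy, 2)     # zero: averages out over 2 weeks
```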

    • Amitabh Lath

      I would be very careful about introducing a temporal dimension (I said “first derivative” below) or some spectral/harmonic analysis. One of the basic rules is: do not bin your data finer than the resolution of your measurement device. In the case of poll aggregates or the Meta-Margin, that’s probably a couple of weeks. Anything happening at higher frequencies is probably an artifact.

      A zeroth order way of adding a temporal dimension to the volatility metric might be how often does the lead change. In 2008 McCain did snatch the lead for a while. In 2016, it has not changed ever (and isn’t going to now). Was 2008 more volatile than 2016?
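      The lead-change count is trivial to compute (a sketch; ties are skipped):

```python
# Zeroth-order temporal volatility metric: how often the lead flips.
def lead_changes(margins):
    """Count sign flips in a margin series, skipping exact ties."""
    signs = [1 if m > 0 else -1 for m in margins if m != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# e.g. a hypothetical series where the lead flips twice:
n_flips = lead_changes([3, 2, -1, -2, 4])
```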

    • Sam Wang

      Gold star for reminding us all about the fundamental bandwidth limit of any kind of time-series data. One *could* imagine getting ~1 week resolution because poll sampling period is 2-7 days. For example, in 2008 and 2012, movement in the Meta-Analysis went in cycles of 1 week.

      To your question: several reversals in 2004, mayyyybe a reversal in 2008, no reversals in 2012 or 2016. So by that metric, 2016 is not volatile.

    • AP

      Maybe the concept we need is “autocorrelation“?
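      For what it’s worth, the sample lag-k autocorrelation is a few lines, and Brendan’s two equal-SD scenarios land at opposite ends of it:

```python
# Sketch: sample autocorrelation as a temporal-structure measure.
def autocorr(x, lag=1):
    """Sample lag-`lag` autocorrelation of a series."""
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x)
    cov = sum((x[t] - m) * (x[t + lag] - m) for t in range(n - lag))
    return cov / var

weekly_flip = autocorr([5, -5] * 10)           # strongly negative
single_break = autocorr([5] * 10 + [-5] * 10)  # strongly positive
```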

  • Amitabh Lath

    To quantify Silver et al’s gut feelings one might define an “instantaneous volatility” based not on the magnitude of the changes but the first derivative.

    If the polls change by three points in a couple of days after three months of flat, and then go back just as swiftly a week later, the common sense volatility metric will rightfully ignore it, but people invested in horserace narratives will feel topsy turvy.

  • George

    I’d be interested in seeing your four charts of volatility as measured by EVs replaced by the same four charts, except looking at meta-margin. Thanks.

  • Runner

    If I understand the data and Sam’s point correctly, tonight’s debate will likely have a great effect upon emotional responsiveness, but will most assuredly have no effect upon volatility.

    If that is true, then while perhaps being great entertainment, tonight’s debate, however it goes, will have little or no effect upon the result in November’s presidential election.

    • Some Body

      I think your prediction will be strictly impossible to verify. Whatever movement there will or will not be in the polls will include both the debate and Friday’s tape (and the response to it, especially by Republican politicians), so we won’t be able to tell the probable causes apart from one another.

    • felix

      agreed. And that is why the GOP stopped putting money into Trump and is focusing on the down-tickets. As per Sam’s recommendation, we should all be putting money into the down-tickets.
      In the end, Trump or Clinton will win. At this point, Clinton is the probable winner. But Trump still has a 5% chance, which is 1 in 20….

    • John Gilbert

      An idea based on the comment from Some Body, regarding time series: If a model includes a prediction over time (for instance a set of Bayesian assumptions), any large deviation from the prediction can be seen by taking the difference between actual and predicted (a-p). Summing this difference gives a sensitive function that might be used to temporally locate large changes that occur over a short interval, or smaller changes over a longer interval (e.g., we could probably see a difference between the impact of the Friday stories vs. Sunday’s debate). Running out the (a-p) curve long enough could help to overcome the fact that polls require several days to collect, or that there may be days with no polling. An event with large repercussions should bend the (a-p) curve enough to trace the change back to a particular date/event.

      (a-p) can be used to generate an auto-correction function (ie, what does it take to make each poll result match the expected mean; this may be the same as spectral analysis, with which I’m not familiar). The mean and s.d. of this function gives a measure of the goodness of the model, and should show especially large deviations when events occur that are clearly not modeled. Minimizing the s.d. by applying weights to large outliers could be fed back in to the functions (a-p) and to (sum of a-p) to give an idea of the long-term trend and timing of unmodeled events.

      For a physical system with a simple model but complex actual situation, I find (sum a-p) and the auto-correction process to be useful in tracking changes in time series data. Perhaps this analysis would also work with polling data.
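      A minimal version of the running (a-p) sum, a CUSUM-style change detector (the series below are hypothetical):

```python
# Sketch of the cumulative (actual - predicted) residual described above.
def cumulative_residual(actual, predicted):
    """Running sum of (actual - predicted).  A persistent bend in this
    curve flags when reality departed from the model."""
    total, out = 0.0, []
    for a, p in zip(actual, predicted):
        total += a - p
        out.append(total)
    return out

# Flat prediction of +4; opinion actually jumps to +6 at the third week:
curve = cumulative_residual([4, 4, 6, 6], [4, 4, 4, 4])
```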

  • Some Body

    A couple of points:

    1. I think it’s a bit misleading and inconsistent to place a chart (from Wlezien and Erikson, I suspect?) with data going 300 days prior to election date as “proof” of greater past volatility, and shortly afterwards to argue for excluding anything before June 1. Did you measure SDs for elections before 2004 by the same method and within the same time frame as the one you used for later elections? If so—give us the results; if not, at least be more careful in how you word your claims here.

    2. I’m also not sure excluding results from the primary season is such an obvious analytic mistake. Sure, when an incumbent president runs essentially unchallenged, that creates a different dynamic. But there is any number of other factors that also influence the dynamic. Was it a short or long primary campaign on each side? Was there a clear front-runner all along? It’s telling that 2008 had (by a peak-to-peak measure of national poll averages from RCP) substantially less variance than 2016 if you count from a year before the election up to late September (the financial crisis of Sept. 2008 did move the polls quite a bit that year). You’d expect otherwise—both Obama and McCain won the primary “from behind”, and the Democratic campaign was long and grueling. So maybe it would be better to just “count geese” for all campaigns, and ignore any particulars, including the difference between incumbent and non-incumbent years.

    3. I guess the definition of “historically” is no less important for this debate than the definition of “volatility”. I also suspect Silver has only relatively recent campaigns in mind. But we’ll see.

    4. Last, but not least, the frequency, quantity, and quality of polling can distort any measure we use. For 2016, I strongly suspect it’s a certain decline in the frequency of state polling (especially earlier on in the campaign) that is responsible for the unusually stable meta-margin, rather than anything about the actual public opinion measured. But you’d have better access to the data needed to confirm or disconfirm this suspicion. For campaigns 40 years and more into the past, the general scarcity of polling (compared to today’s standards) may have, on the contrary, increased variability.

    • Jeremiah

      I take issue with some of your points:

      1. I don’t think the Wlezien and Erikson data is misleading even if it does go back 300 days. The x-axis is marked, and one can easily see the data from -150 onwards, which is approximately where June starts.

      2. Including data from the primary season is not a mistake but one has to be clear what type of volatility you are measuring.

      4. Even though polling data was scarcer in older campaigns pre-1980 it does look like the trends were about as noisy as in recent times. Looking at 1980 it seems that the polling data has a +/- 5 percent noise in it which looks about the same as recent data.

  • Arnold


    Does the low volatility of recent races inform your decision about how many state polls to include in the snapshot calculation? I think you currently use a week’s worth or 3 polls, whichever is greater. But maybe a more polarized electorate means that you can cut noise further by including several weeks of polling in the median?

  • A

    Sam, do you still feel that, even with this kind of bombshell in play, the SD of 3.0 will hold through the end of the election?

    Would the metamargin need to move past Clinton +7 for it to go out of range? And do you see that as being feasible due to the nature of so many undecideds possibly now having reason to break massively in favor of her, Trump losing some support from his base, etc?

    Thanks in advance. So far I do believe your hypothesis is totally bearing out, both in the mathematics and in my conversations with Republicans who support Trump.

    Polarization IS by far the biggest factor in this election, imo…

  • Cyril Smith

    First, plaudits to Dr. Wang for an analysis based solely on polls.
    Second, volatility may refer to the STD of win probability.
    Third, markets synthesized by Predictwise treat 2016 as a trend for HRC win probability: higher highs and higher lows, with the most recent breakthrough high at 87% win probability. PEC (and 538, NYT, etc) are fundamental inputs for these markets. The Predictwise win probability graph looks like a low volatility trend compared to financial instrument trends.

    • Sam Wang

      Bad measure. Probability is a strongly nonlinear function of average national margin (or alternately, Meta-Margin). It is also model-dependent.

  • Matt McIrvin

    I also think that Sam’s calculation of a lower range of variation for 2016 is probably fairly dependent on his median-based averaging procedure.

    If you look at’s EV graphs for 2016 and 2012, 2016 looks more volatile, mostly because Hillary Clinton has higher highs than 2012 Obama did. However, the two years still look more similar than different.

    And some of the apparent volatility may just be noise coming from the fact that there are just fewer polls this year (I don’t understand why, but there are). will use the single most recent poll in a state if there aren’t multiple polls in a single week. That would, I think, tend to produce more dramatic reversals if polls are sparse.

  • Tony Nickonchuk

    Interestingly though, Silver himself, in the piece you refer to, states “The volatility in the polls in 2016 is pretty average by historical standards, in fact.” So I don’t understand why, if he thinks the volatility is average, he’s going to spend an entire analysis showing why that is not the case.

    • 538 Refugee

      Fresh content drives ad revenues. If he keeps tabs on the ‘competition’ he may also be trying to do some differentiation. Last cycle I seem to remember him being a tad more directly confrontational. His Trump miss has taken his legs out this cycle I think.

      Even his opening sentence is problematic. “Let’s not naively insist on taking a wait-and-see approach regarding the events of the past 24 hours.” Fresh polling is showing the rank and file of the Republican Party are upset that the top is bailing on Trump. They have bought so deep into the narrative that Hillary=Satan that they don’t care what Trump does. In his own words he could shoot someone.

      “A wave of Republican officials abandoned Donald Trump on Saturday, but, at least for now, rank-and-file Republicans are standing by the party’s presidential candidate, according to a new POLITICO/Morning Consult poll conducted immediately after audio was unearthed Friday that had the GOP nominee crudely bragging about groping women and trying to lure a married woman into an affair.”

  • Ebenezer Scrooge

    I don’t see why the most mathematically tractable definition of volatility is necessarily the most useful one in a particular context. Math is unreasonably effective in physics, but less so in the social world. (Repeat after me: human life is not a normal distribution . . . )

    That being said, I don’t understand Nate Silver’s roll-your-own version of the word, and do appreciate Sam’s quantitative demonstration of greater partisan stability.

    • Ed Wittens Cat

      I can’t let this stand.
      Mathematics is the only way we can even begin to understand Nature, and human societies are part of Nature. Witness the tsunami of recent papers on the reproducibility failures of the soft sciences. It’s just that we are only at the beginning of the Complexity Revolution.
      Societies are complex adaptive systems, non-linear systems like Scrooge points out.
      Complexity math will model them.
      There are underground groups in academe working on the mathematics of complexity, much like Feynman’s Physics X project last century.

    • Ed Wittens Cat

      And I don’t mean to be rude, but anyone who reads PEC should understand by now that mathematical statistics is the absolute gold standard for excising bias and emotion from human analysis.
      “Data over drama” will hopefully become as famous as Silver’s “demographics is destiny”.

  • Joel

    As a biochemist, I can appreciate Silver’s appropriation of the *chemical* definition of volatility…….

    Won’t work, though.

  • Ed Wittens Cat

    Spot on Dr. Wang
    Silver is confusing volatility with responsiveness. 21st century cloud connectivity accelerates responsiveness in amplitude and frequency.
    Look at SNL last night– less than 30 hours to weave the leaked sexual predator tape into Pence’s debate response.

  • Tyler

    I think that Nate Silver points to the large number of undecided voters in 2016 as a source of uncertainty. How does that factor into Sam Wang’s analysis, and what is the relationship between volatility and uncertainty in this year’s election?

    • Josh

      It’s true that in 2016, there were a larger than usual number of undecided voters…in May.

      Today, roughly 4-5% of voters are undecided, about in line with the last several elections.

  • bks

    I think of volatility in the Presidential race as the number of apparent lead changes. (That’s a heuristic, not a definition nor a critique of Sam’s argument.) So when I look at the 538 Clinton vs. Obama charts I see zero lead changes (neither C nor O falls below 0), so neither race is volatile, sensu lato.

    • Sam Wang

      “When I use a word,” Humpty Dumpty said, in rather a scornful tone, “it means just what I choose it to mean—neither more nor less.” “The question is,” said Alice, “whether you can make words mean so many different things.” “The question is,” said Humpty Dumpty, “which is to be master—that’s all.” LEWIS CARROLL (Charles L. Dodgson), Through the Looking-Glass.

  • Matt McIrvin

    I would not be hugely surprised if the crazy endgame we’ve gotten into ultimately pushes the SD for 2016 over 2012.

    But that’s really not saying much; “more volatile than 2012” is a very low bar.
