Princeton Election Consortium

A first draft of electoral history. Since 2004

Ties, damned ties, and statistics

November 27th, 2008, 12:30am by Sam Wang

Regarding the Minnesota recount, reader RC points out: “By any meaningful scientific standard of measurement, the vote in Minnesota is a tie, and the recount process is just a mechanism for adjudicating a tie rather than a way of determining overall voter preference.” If RC is correct, this gives a different way to think about the recount than what partisans are saying.

Let’s think about what’s happening. Approximately 2,000,000 people voted for Coleman and Franken combined. Their supporters were as close to equal in number as one can imagine. We often speak of polls being an imperfect sampling of the likely-voter pool. But what about the pool of actual votes cast? In some sense, is that also a sampling of likely voters?

First, the math. The right tool for thinking about this is the statistics of the binomial distribution, which describes the distribution of all possible outcomes in a two-choice situation with fixed probability p. For N two-outcome trials (i.e. N votes cast), the average outcome is N*p and the standard deviation is sqrt[N*p*(1-p)].

Now think about a model in which each voter simply flips a coin to decide whom to vote for. In this case p=0.5. For 2,000,000 votes, the average number of people voting for Franken (or Coleman) is 1,000,000, with a standard deviation of sqrt[2,000,000*0.5*0.5], about 707. The margin is twice one candidate’s deviation from 1,000,000, so its standard deviation is about 1,400 votes. Therefore a random-choice situation would generate margins that typically deviate by much more than the margins that have been reported. By this criterion, the election is indistinguishable from a tie.

But this isn’t quite right. When asked, most voters declare themselves to be decided. Is there still variability in the outcome? Indeed yes. People who are sure that they will vote may not always get to voting stations. Imagine that each major candidate has 1,050,000 supporters. Further imagine that those supporters have only a 95% chance of actually voting, because of work, health, weather, whatever. In this case, the standard deviation of each candidate’s turnout is sqrt[1,050,000*0.95*0.05], about 223 votes. Since the two candidates’ turnouts vary independently, the popular vote margin between Coleman and Franken would have a standard deviation of sqrt(2) times this: 316 votes, larger than any recent reported margin. So the current situation is, by any statistically reasonable standard, a perfect tie.
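For anyone who wants to check these numbers, here is a quick Python sketch of both models:

```python
# Quick sketch: reproduce the two back-of-envelope standard deviations above.
from math import sqrt

# Model 1: 2,000,000 voters each flipping a fair coin (p = 0.5).
N, p = 2_000_000, 0.5
sd_one = sqrt(N * p * (1 - p))        # ~707 votes for one candidate
sd_margin1 = 2 * sd_one               # margin = 2X - N, so ~1,414 votes

# Model 2: 1,050,000 committed supporters per side, each independently
# showing up with probability 0.95.
n, q = 1_050_000, 0.95
sd_turnout = sqrt(n * q * (1 - q))    # ~223 votes per candidate
sd_margin2 = sqrt(2) * sd_turnout     # ~316 votes

print(round(sd_one), round(sd_margin1))       # 707 1414
print(round(sd_turnout), round(sd_margin2))   # 223 316
```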

What should we think of the fights, ballot challenges, and uproar on both sides? RC put it so well that I will simply paraphrase.

I think elections are best understood as a compromise between several different goals:

1) A tool to measure voter preferences accurately,
2) A fair method of choosing between alternatives, and
3) A ceremony celebrating our commitment to democracy.

But this [last goal] fails if the system is not perceived as fair.

So, balancing all these things, along with considerations of speed, cost, and efficiency, we try to make a system of clear, transparent rules, agreed upon in advance, understanding that the results can never be perfect, and then let the mechanism play out, accepting the final result (where rules for challenges are also part of the mechanism). When the results are closer than any reasonable estimate of experimental error, I think we should just accept hand recounts as a kind of coin flipping, with the added advantage of being able to detect whether there were systematic errors biasing the system.

I advocate just sitting back with the popcorn and enjoying the show at this point.

Well said. Pass the popcorn.

Tags: 2008 Election

21 Comments so far

  • Recluse

    If the intent of having “elected” officials is to represent the populace, it seems a statistical tie means that the two candidates need to share the office. (Grow up, folks. We can get along. We must.)
    Each would argue issues for those they represent, and they could perhaps alternate as final arbiter on an established schedule.
    Public debate between them could also give constituents an option to respond.
    A recount, if appropriate, should still be available after the initial election. If the difference is still below an established percentage, the vote is a tie, and shared occupancy of the position is the result.

    I disagree with the comment “that the process is playing out in a reasonably fair way.” Months have been wasted in grasping at straws. Is the desired position so undemanding that the incumbent can spend months totally dedicated to staying in office? Is it possible that duties that define the functions of the position have been delayed or ignored?
    Also, the contention between the opposing camps has only been exacerbated, reducing the likelihood of future compromise.

  • robin

    Thanks for the comments, Lorem. I think they are important, especially since I am going to disagree. ;^) My initial point was that voting is several important things at once: a measurement tool, a choosing process, and a dedication ritual. Losing sight of any of these risks trouble. When the measurement yields a tie in any meaningful sense of the term, the other aspects are still important. As others noted, the choosing process must _feel_ at least reasonably fair and legitimate to the population, or else social cohesion frays and eventually breaks, destroying people’s commitment to the system.

    Although there are (to my mind contrived) economic arguments for why people vote despite the vanishingly small chance of their vote being decisive, there are simple, obvious, emotional reasons easily apparent to anyone who has spent time at a polling booth. Voting is an emotionally powerful ceremony, where we show our belonging to and dedicate ourselves to the larger community of society. I suspect this taps into our instincts of social cohesion. I like to needle my economist colleagues by saying that one of the reasons they are wrong so often is that they want economics to be like physics, when actually it is more like applied primatology. I think the same often applies to quantitative political theories.

    I think your last point is the most important (forgive me if you were just being tongue in cheek): we _wish_ we could find social optima through some process like voting, but this is a very dangerous illusion. First, for most complex issues, such an optimum does not exist: the complicated, non-linear tangle of human preferences doesn’t sum to a single point even within an individual, let alone a complex society. Further, people who are certain they know the optimum have historically done very bad things to other people in the name of achieving it. Rather, democratic processes (ideally) allow groups of people with incompatible preferences to find some _reasonable_ set of choices they can live with without coming to blows.

    I do not make a fetish of democracy. I imagine a time could come when some mix of sophisticated polling, sampling, focus groups, juries, blue-ribbon panels, and maybe artificial intelligence could do a better job, but for now, to paraphrase Churchill, I think democracy is the worst possible mechanism for _seeking_ (not finding) social optima, except for everything else we’ve ever tried. To bring this all the way back around: in the Minnesota case, I think we should recognize that the measurement gave a tie, that the process is playing out in a reasonably fair way, and that at the end of it, the outcome should feel legitimate and give us some pride at being able to do this sort of thing peacefully.

  • Lorem

    robin, if we are indeed concerned with citizens’ preferences (as you stated in your latest comment) and not voters’, I would argue that that opens yet another can of worms. For example, I would consider myself to be highly pragmatic, so, even if I lived in Minnesota (and if I were a US citizen), I would personally not vote at all. You see, even if I strongly prefer one candidate over the other, my vote is only worth casting if it would be the single vote that breaks a perfect tie between the two candidates, and for all intents and purposes that probability can be considered negligible. As such, my expected payoff would be greater for staying at home than for dragging myself off to the polling station. So the vote will perhaps undersample people who are predisposed to act pragmatically in such decisions, and who may, in turn, lean predominantly towards one candidate over the other. The election, then, is going to be an even rougher approximation if the objective is to estimate the citizens’ will.

    Besides, I think your list of election objectives unfairly excluded a point that I would personally wish were at its forefront: “choosing the socially optimal candidate” (that is, let’s say, one who would implement the best policies), and, to be frank, I do not think that that objective is being met at all, except sometimes by chance.
    Forgive me while I drift further off-topic, but, perhaps the real option that should be considered is a radical change in the system as a whole instead of a minor tweak. And, in breaking with the somewhat pervasive formality and thoughtfulness of this discussion thread, I move for a declaration of me as dictator. I promise I’ll be benevolent.

  • Mark

    I apologize for the typos in my previous posting…

    Also, I did not mean to speak ill of statistical methods; I’m a believer in statistics, too. I’m also a mechanical engineer, though, and my comments were written from the perspective of improving the fidelity of the voting system.

    Statistics are good for analyzing the will of a population, but it should not require analysis to determine the will of an individual voter. The machines are not measuring devices; they are cognitively controlled discrete-input devices. The designers should therefore hold themselves to a very high standard of accuracy in the mechanical functioning of the system.

  • Mark

    I’m a friend of Robin’s from Caltech, so I can vouch that he’s always talked this way :-)

    I also took some of the same economics and math classes. There are lots of ways to use statistics to analyze the problem. There are also lots of alternative voting mechanisms which may be improvements over the current standards. Like alcatholic, I like Instant Runoff Voting, but it introduces a new level of vote tracking (the reassignment of votes) that I understand hasn’t been well implemented in the few locales that have tried it so far.
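    For concreteness, here is a toy sketch of that reassignment step (illustrative only, not how any real locale implements it):

```python
# Toy instant-runoff tally (illustrative only): drop the weakest candidate
# each round and reassign those ballots to the next surviving choice.
from collections import Counter

def irv_winner(ballots):
    """ballots: list of preference-ordered candidate name lists."""
    active = {c for b in ballots for c in b}
    while True:
        # Count each ballot for its highest-ranked surviving candidate.
        tally = Counter(next(c for c in b if c in active)
                        for b in ballots if any(c in active for c in b))
        top, votes = tally.most_common(1)[0]
        if votes * 2 > sum(tally.values()):   # majority of live ballots
            return top
        # Eliminate the surviving candidate with the fewest current votes.
        active.remove(min(active, key=lambda c: tally.get(c, 0)))

print(irv_winner([["A", "B"], ["B", "A"], ["C", "B"], ["B"], ["A", "C"]]))  # B
```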

    But back to the reality of deciding an election. The standard for winning isn’t statistical; it’s the greatest number of votes. 1,250,001 wins over 1,250,000.

    I know that’s a daunting goal in a world of imperfect ballots and election machinery. Acknowledging those imperfections, it should be remembered that voting devices are not the same as scientific detectors (which try to detect the state of a system without disturbing it). These are input-and-recording devices which are supposed to log a single unambiguous response (write-ins add some complications). From a machine-design standpoint, it should be possible to apply high-standard methodologies like Six Sigma to get a very low vote-logging failure rate. Likewise for the transfer of votes from the voting stations to the central counting location.

    (I’d also like to put in a word for the importance of transparency of the vote counting software. Nothing in the process should be hidden.)

    Naturally, reality won’t approach the high-tolerance vote-counting goal even if the machines are well designed, because humans are imperfect. Some will have comprehension problems, and some will have physical difficulties (imagine trying to vote with advanced Parkinson’s). Allowances for assistance won’t always be heeded.

    It is possible that 21st-century technology may eventually step in to help with human imperfection. If Stephen Hawking can communicate with a computer, the folks in the nursing home should (eventually!) be able to get their votes into a voting machine. Making that level of interface reliable and affordable is one of the technical challenges for this century.

  • Lee

    When kids trying to decide what game to play next come up with a tie vote, they usually decide to take turns. Perhaps for margins of victory that are less extreme than 52-48, the office could be divided in time between the two candidates. Each candidate would get time proportional to his/her percentage in excess of 48.

    For a vote where the candidates get roughly 1 million votes each, every 100 votes of margin translates to a negligible three days or so out of a six-year term. It wouldn’t be worth fighting for.
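    To spell that arithmetic out (a sketch assuming roughly a million votes per side and a six-year term):

```python
# Sketch of the proportional time-sharing arithmetic above: each candidate
# gets a share of the term proportional to their percentage above 48%.
def term_split(votes_a, votes_b, term_days=6 * 365.25):
    total = votes_a + votes_b
    excess_a = 100 * votes_a / total - 48   # percentage points above 48
    excess_b = 100 * votes_b / total - 48
    share_a = excess_a / (excess_a + excess_b)
    return share_a * term_days, (1 - share_a) * term_days

days_a, days_b = term_split(1_000_050, 999_950)   # a 100-vote margin
print(f"winner serves {days_a - days_b:.1f} more days")   # ~2.7 days
```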

  • robin

    This gets back to what I said about the multiple meanings of elections, and the tension between voting as an accurate way of measuring citizen preference on the one hand, and as a fair and legitimate-appearing mechanism for choosing between alternatives on the other. All voting systems contain the potential for paradoxical outcomes; you just have to choose which ones you most want to avoid. In addition, citizen preference is itself a very complex, non-linear (maybe even non-transitive), multidimensional variable: consider, for example, the difference between a utility-maximizing choice averaged over the population and a minimax choice that seeks to minimize the fraction of people for whom the outcome is so totally unacceptable that they would work to break the system.

  • alcatholic

    Were it not for the very low chance of socio-political acceptance, I would be very excited to see Instant Runoff Voting implemented. Even with only a first- and second-choice scheme, I think it would greatly lower the likelihood of tied elections. Politically, I think the greater range of candidates would help break up 50/50 splits in public opinion.

  • Observer

    Glen, these are all useful thoughts.
    You make me wonder whether there is a political science literature out there that has concerned itself with these close-election dynamics. If there is, I’m not familiar with it.

    I continue to agree with you that legitimacy of the result in a close election is a paramount consideration.

    Based on this discussion, I’m leaning toward this idea:
    If the election is within the tie confidence interval, the outcome should be to elect the challenger. If, after a full term of office, you can’t get better than a tie vote, change should be the favored outcome.

    Not sure whether or not I would want this to apply at the presidential level, where the stakes domestically and internationally are at the maximum.

    (The old rule for classic-chess world championships was that ties went to the incumbent. That had the unfortunate consequence that incumbents tended to play very conservatively, forcing challengers to adopt riskier approaches with a higher chance of losing games.)

  • Glen Tomkins

    Matt McIrvin,

    The intent of either the coin flip or my suggestion of a revote is not to prevent or obviate recounts, or even the legal contest of close votes. These would still be carried out to their end (though perhaps not the bitter end of an endless legal contest) in order to arrive at the vote difference that would count in determining whether a revote, or coin flip, is necessary.

    The intent of the revote, for elections so close that the vote difference is smaller than the counting and tallying variance, is to prevent elections from being decided essentially ex post facto, on the basis of counting or not counting unforeseen classes of votes so small that no pre-existing rules unambiguously cover them. You have to finish an election under the same rules you started it with for it to carry legitimacy. But if you get out your electron microscope, you can find tiny packets of votes that you can’t honestly categorize as valid or invalid under the rules you started with. In races where such tiny packets of votes matter, whether the court ends up counting such packets or rejecting them, either way it has advanced the case law on the subject by making up new law to cover such unforeseen cases. So I say, let the courts have at that process of creating new law. But let’s not let this election be decided by the courts on the basis of new law. If the result is so close that you need that microscope to discern the winner, if the winner needs some new judgment on vote validity to squeak past his opponent, then you just do the election over again until you have a difference large enough that it does not rest on unforeseen classes of disputed votes.

    I see in your objection, raising the infinite regress that would apply to coin tosses as well as revotes for close results, that you have studied Zeno the Eleatic and his paradoxes. Whatever assessment of the reality status of apparent change the Zenonian paradoxes might be thought to force on us, I think that, for revotes or coin flips, a simple legislative fiat would suffice to decide results contested as being close to the revote (or coin flip) confidence limit. The law would say they go to revote unless the courts are through with the contest by a certain date after the election.

  • Glen Tomkins

    The idea of cleaning up and modernizing the rules the courts use in deciding close elections gets at my main concern over elections, especially presidential elections, which are so close as to be indiscernible from ties. What we end up with in elections this close is that, essentially, some court awards the election to one side or the other, based on allowing or disallowing the counting of some class of ballots so few in number that they don’t matter in 99.99% of elections, therefore their handling never gets any standard administrative practice, much less case law, developed around it. As a result the election is decided by essentially new law propounded from the bench after the fact of the close election. This leads to legitimacy problems, especially acute when the presidency is at stake. So, yes, getting the rules set in concrete ahead of time, and in such detail that they cover the proper way to validate every conceivable class of votes, however small, that might conceivably be disputed, would solve the legitimacy problem, assuming courts that would apply this unambiguous and complete set of rules with undoubted fairness. And legitimacy in the eyes of all parties is what prevents wars of succession, which I think is the most important function of any scheme of elections.

    This would be a good solution, in the sense of meeting the theoretical requirement of ensuring a succession that everyone would regard as legitimate, except that it’s impractical. The genius of the common law is that its inductive method (you get to general rules based on what seems, and works as, just in an accumulation of many specific cases) actually works in a world filled with systems so complex that no one can deduce sound specific procedures from general rules. Yes, we have codified law, but that only works when the results of centuries of common-law experience form the basis of the code. You could write this set of rules you’re referring to, but they wouldn’t anticipate any number of special cases that only come up in races this close. Even worse, the march of technology, and new statutory requirements, would continually introduce new special classes of votes of doubtful validity for which your election code won’t have any answers, at least not until we’ve had close elections that force the courts to look at these specific circumstances and start inducing rules to cover them.

  • Matt McIrvin

    …on the other hand, I suppose such situations could be resolved with another coin flip, and so on, and so on… do it right and you just give the win probability a smooth binomial tail.
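    One way to flesh that out (a sketch of my own; the sigma of 316 borrows the turnout-model figure from the post): make the leader’s win probability the smooth normal-tail probability implied by the counting noise, then flip a coin with that bias.

```python
# Sketch of a "smooth tail" resolution: instead of a hard tie threshold,
# the certified leader wins with the probability that the lead is real,
# using the normal approximation to the binomial counting noise.
import random
from math import erf, sqrt

def p_lead_is_real(margin, sigma):
    # P(true margin > 0 | observed margin), normal approximation
    return 0.5 * (1 + erf(margin / (sigma * sqrt(2))))

def resolve(margin, sigma=316):   # 316 = turnout-model margin s.d. above
    return "leader" if random.random() < p_lead_is_real(margin, sigma) else "trailer"

print(p_lead_is_real(200, 316))   # a 200-vote lead is only ~74% convincing
print(resolve(200))               # one biased coin flip
```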

  • Matt McIrvin

    I recall Stephen Jay Gould proposing a coin flip in 2000. But besides Robin’s objections concerning perceived legitimacy, this kind of solution has the additional problem that it really doesn’t solve the initial problem. So you propose that elections where the margin is less than N be resolved by coin flip. Now suppose the margin is really, really close to N. Do you flip a coin or not? It’s a tie between being a tie and not being a tie. Somebody’s going to object. Recount time!

  • Observer

    Glen, you are of course right about the nature of the uncertainty problem with close elections. Your solution (keep revoting until somebody ‘really’ wins) is not practical, though. Your comments leave out the burdens of revoting: the financial cost is heavy, especially at the level of an entire state (or several states in a presidential election); there is the inconvenience to citizens of having to pay attention to and participate in revote(s); and there is the time cost if revoting delays an electee from taking office at the regular time.

    The legitimacy issue that you raise is real.
    Perhaps a better solution is to clean up and modernize the rules the courts are instructed to use in reviewing contested outcomes.

  • Oz Observer

    The “pass the popcorn” response is very appropriate in a “game” situation, where the outcome doesn’t much matter. And this is probably the case in Minnesota, where, as Garrison Keillor reminds us, all the children are above average. Barrackers like me will go home and either kiss the wife or kick the cat, but we’ll get to work on Monday, curse the umpire (win or lose), and get on with life.
    Of course, in 2000 a few hanging chads made the difference between an unnecessary war and heavy damage to the US’s reputation, on the one hand, and (perhaps) effective action on global pollution and an undercutting of support for terrorism, on the other.
    I think the essential point is that the process must be seen to be fair, as does seem to be the case here.

  • Glen Tomkins

    The measurable uncertainty

    In a vote this close, any one of many different aspects of the inherent uncertainty built into the voting process is large enough to give the lie to the theory on which election law seems to rest: that we can determine outcomes meaningfully down to even a single-vote difference. This is obviously nonsense. Whether it’s failed intention to vote, as discussed here, or misvoting due to bad mechanics of voting, or failures in counting and tallying, any one of these steps clearly introduces variances that dwarf such a close result.

    The obvious conclusion is that the theory is wrong: voting cannot produce meaningful results down past the variances inherent in the process, much less down to the current legal standard of a single-vote difference being decisive, and the theory needs to be replaced. Every state’s voting process needs to be tested and rated empirically for the size of its variance; then, based on whatever level of confidence is thought desirable, a confidence limit needs to be set within which results are treated as ties. Ties should be revoted, preferably between only the two top vote-getters. But even if run under the same format as the original, tied vote, revoting will eventually force the electorate to get off the dime and actually decide for one candidate or the other by a margin beyond the confidence interval.

    Perhaps the only part of the process truly amenable to reproducible testing would be the counting and tallying process. I’m not sure you could objectively test for the effects of poor ballot design, for example. But a system that allows for only part of the unavoidable uncertainty is better than one that pretends there is no problem. So I would advocate changing our basic theory to one that is reality-based, and requiring that all results so close that they fall within, say, a 95% confidence interval just for counting and tallying variance be voted again.
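    A bare-bones sketch of what such a rule could look like (the numbers below are placeholders, not measured variances):

```python
# Hypothetical tie-trigger: any margin inside a 95% confidence interval
# for the empirically measured counting-and-tallying noise forces a revote.
def is_statistical_tie(margin, counting_sd, z=1.96):
    return abs(margin) <= z * counting_sd

# With a (made-up) counting s.d. of 300 votes on the margin:
print(is_statistical_tie(200, counting_sd=300))     # True  -> revote
print(is_statistical_tie(5_000, counting_sd=300))   # False -> certify
```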

    It may not seem worth the effort to change our system just over a Senator here or a governor there (WA state, 2004). If the result was effectively a tie, then whoever the process lets take office is a good enough approximation of the people’s will. You could flip a coin to resolve such ties, and if the non-reality-based legal process we have now for resolving such elections is essentially random, well, that’s no worse than a coin flip.

    There are two reasons not to accept such an inertial solution to this problem. For one thing, it encourages legal gamesmanship in close elections. Letting the courts decide, when the theory we make them decide under is faulty, is not going to produce random outcomes like a coin toss. Much worse, it will systematically give all close races to the side that is pushier, more willing to play politico-legal hardball, and more attentive to packing the courts with political apparatchiks. We don’t want a system that rewards any of these things, or we will encourage this behavior in our parties.

    Perhaps more importantly, there is one race, that for the presidency, that we don’t want opened to chance, much less to politico-legal gaming, even once. We can’t afford even one more Dubya. And given the obvious partiality of Bush v. Gore, the likely impact next time this happens won’t be just another random unqualified president; the side that loses to the unfair process may not be such good sports about it.

    One of the many downsides of letting our system degenerate into an elected dictatorship by the president is that the presidential race isn’t just the most important thing in politics; it’s the only thing. A result in a presidential race seen as illegitimate or unfair could lead, in the extreme, to a war of succession. FL 2000, this race, and the Rossi-Gregoire race in 2004 have all been warning shots. We need to take heed and react accordingly. The rules have to be changed before the next election, because afterward will be too late.

  • Bruce (B)

    I was thinking that since it IS a tie, they could just split the 6-year term in half: with the Governor as co-signer and witness to the deal, Coleman would serve the first 3 years and then resign, and Franken would be appointed to serve the second three. We can’t be sure who will be governor in three years, though, and I get the feeling both sides would rather go for broke. I suppose it would also set a bad precedent to pro-rate the candidates’ days of incumbency based on their share of the popular vote. OTOH, we would have been better off to have had 2 years of President Al Gore early in the first Bush term. Perhaps there could be a term-splitting rule that kicks in when the gap is less than a tenth of a percent.

  • robin

    Thanks, Sam, for quoting me (even using my old nickname from Caltech, where we just missed overlapping as undergrads, many moons ago!). “The math” (a term now ruined by Rove) is interesting, but I don’t want to lose track of my central points about the multiple meanings of elections and the idea that within some margin of closeness, a vote count is no longer meaningfully distinguishable from a tie. I do want to echo JJtw’s use of the word “legitimacy”, which I agree is better and important. The sense of legitimacy is an extremely powerful stabilizing (or de-stabilizing) force in societies, democratic or not. Bringing this all the way back around to Professor Wang’s research, I think it is a reasonable speculation that the power of this emotion is at least in part innate, part of the neurobiology of social primates. When we design election rules, we have to work not only with the laws of statistics, but with the substrate of primate psychology as well.

  • Alexander Yuan

    I don’t think the binomial distribution should be described at first as the “right tool” here. I see that it’s corrected later, but presenting it first probably muddles the math.

    (An explanation: For polls, a rationale for the binomial approximation is as follows. Since polls avoid calling the same person twice, the probabilities change by tiny amounts with each person called (e.g., if you first call a Franken supporter, then the second call is slightly less likely to reach a Franken supporter, because you can’t call that person again). So the pool of unsampled voters changes very slightly with each new sample. But the poll samples make up an *extremely* small proportion of the population. No matter what, it’s unlikely you changed the pool of remaining voters much.

    Now instead suppose you weren’t checking whether you were calling someone twice (i.e. you were sampling from the same pool every time). It’s still extremely rare to accidentally sample the same person twice. This is approximately the same model as before, just simplified. Thus sampling without replacement (what the polls do) is almost the same as sampling with replacement (binomial distribution).

    The math ends up far simpler by using the binomial distribution to approximate what is happening. (IIRC, the binomial slightly overestimates the variance, which is erring on the side of caution from the perspective of pollsters.)

    In the actual vote, however, the number of voters was a rather significant proportion of the likely-voter population. If you randomly pick a likely voter after having sampled most of them already, it’s pretty likely you’re re-sampling someone you already accounted for. The binomial would significantly overestimate the s.d. here, as the numbers in Wang’s post show.)
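    In standard terms, sampling without replacement follows the hypergeometric distribution, whose s.d. is the binomial s.d. shrunk by the finite population correction sqrt((N-n)/(N-1)). A quick sketch (the pool size here is hypothetical):

```python
# Sketch: the finite population correction (FPC) separating sampling
# without replacement (hypergeometric) from with replacement (binomial):
#   sd_hypergeometric = sd_binomial * sqrt((N - n) / (N - 1))
from math import sqrt

def fpc(N, n):
    return sqrt((N - n) / (N - 1))

N = 2_500_000                      # hypothetical likely-voter pool
print(fpc(N, 1_000))               # poll of 1,000: ~0.9998 (binomial fine)
print(fpc(N, 2_000_000))           # 2,000,000 votes cast: ~0.45 -- the
                                   # binomial badly overstates the s.d.
```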

  • JJtw

    And by Kerry, I mean Gore. :P

  • JJtw

    You and RC hit the nail on the head. I would only suggest that, rather than calling a recount a “fair” way of adjudicating ties, I would call it the way of adjudicating ties that gives the ultimate outcome the most legitimacy. Under most recount scenarios this close, flipping a coin would be nearly as fair a reflection of the “voting public.”

    This is one reason that I didn’t feel that Bush subverted the public will in 2000. He did hijack the process and undermine his own legitimacy (at least in my eyes) in the way the recount unfolded. But I am under no impression that the Florida electorate preferred Kerry any more than it preferred Bush.

    In MN, both candidates have broad support that is fairly evenly split. Either one would be a fairly accurate reflection of the voting public’s consensus. But letting the pre-determined rules play out fully is, I think, essential for increasing the legitimacy of the election results. In that sense, I think the adversarial posture in the partisan wrangling right now will help ensure that the final choice plays out as closely to the pre-agreed rules as possible. So I can feel patriotic nibbling that popcorn and enjoying the “fighting.”