It can be good for a laugh to view U.S. citizens as people who don’t have a clear idea of what’s happening outside their borders…but still want to launch a military attack. This week in The Monkey Cage comes a finding that feeds that view. The headline is impressive: “The less Americans know about Ukraine’s location, the more they want U.S. to intervene.”
But how big is the reported effect, and what does it really say about American attitudes?
In teaching my students how to evaluate research with a skeptical eye, I usually make a distinction between two ways to look at a discovery:
- Is the finding true on average, across large groups? If so, it counts as a discovery and is publishable in an academic journal.
- How large is the effect? The larger the effect, the more likely it is to matter in a practical sense.
In many domains of research, ranging from medical research to political science, it is easy to confuse these points of view, especially when all you read is the summarized result. In the case of the Ukraine survey, which point of view informed the headline?
First, let’s consider the result that has led to such merriment. In a sample of 2,066 Americans:
- About one in six (16 percent) Americans correctly located Ukraine, clicking somewhere within its borders.
- Only 13 percent of Americans supported using force.
- The less accurate our participants were [at finding Ukraine on a map], the more they wanted the U.S. to use force…[this effect is] statistically significant at a 95 percent confidence level.
At face value, this sounds pretty bad: there is a statistically significant difference between those who can find Ukraine on a map, and those who can’t. The implication is that ignorance leads to a desire to intervene militarily.
However, is that really true? First, note that there isn’t a massive groundswell for an invasion; 13% is quite small. But here is something that is harder to appreciate: it does not take a large difference between groups to reach statistical significance. Assuming the difference must be large is a common logical pitfall.
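To see how significance depends on sample size as much as on effect size, here is a small stdlib-only sketch. The 10%-versus-12% gap and the group sizes are hypothetical, and the two-proportion z-test is just a convenient stand-in for whatever test the researchers actually ran:

```python
from math import sqrt, erf

def one_tailed_p(p1, p2, n):
    """One-tailed two-proportion z-test with equal group sizes n.
    Tests whether group 2's support rate exceeds group 1's."""
    pooled = (p1 + p2) / 2
    se = sqrt(pooled * (1 - pooled) * (2 / n))
    z = (p2 - p1) / se
    return 0.5 * (1 - erf(z / sqrt(2)))  # upper-tail normal probability

# The same modest gap (10% vs. 12% support) at three sample sizes:
for n in (100, 1000, 10000):
    print(f"n = {n:>5} per group -> one-tailed p = {one_tailed_p(0.10, 0.12, n):.4f}")
```

With 100 people per group the gap is nowhere near significant; with 10,000 per group the very same two-point gap is overwhelmingly "significant." Nothing about the effect changed, only the headcount.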
For example, imagine two groups: those who can find Ukraine on a map, and those who think it’s somewhere in the vicinity of Nebraska. Given the size of the original sample, 331 people found Ukraine correctly. If attitudes toward military intervention broke down as follows (oppose vs. support), the overall pattern would be enough to reach 95% statistical significance (one-tailed Fisher exact test).
| | Oppose | Support | % Support |
|---|---|---|---|
| Can find Ukraine | 298 | 33 | 10.0% |
| Cannot find Ukraine | 1,475 | 260 | 15.0% |
These are just made-up numbers, since I don’t have access to the data. But even if the actual difference is somewhat larger, I suspect the point would still stand.
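The back-of-the-envelope significance check is easy to reproduce. The counts below are illustrative, not from the study: 331 map-finders with 33 supporting force (10%), and a hypothetical 1,735 non-finders with 260 supporting (15%), chosen to match the 10%-versus-15% comparison. The one-tailed Fisher exact test reduces to a hypergeometric tail probability, which the stdlib can handle via log-gamma:

```python
from math import lgamma, exp

def log_comb(n, k):
    # log of n-choose-k via log-gamma, to keep the huge numbers manageable
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

def fisher_lower_tail(a, b, c, d):
    """One-tailed Fisher exact test on the 2x2 table [[a, b], [c, d]]:
    probability that cell a is this small or smaller, with margins fixed."""
    row1 = a + b       # size of group 1
    col1 = a + c       # total "successes" across both groups
    n = a + b + c + d  # grand total
    return sum(
        exp(log_comb(col1, k) + log_comb(n - col1, row1 - k) - log_comb(n, row1))
        for k in range(a + 1)
    )

# Illustrative counts: 33 of 331 finders support force (10%),
# 260 of 1,735 non-finders support force (15%) -- made-up, as noted above.
p = fisher_lower_tail(33, 298, 260, 1475)
print(f"one-tailed p = {p:.4f}")
```

With these invented counts the p-value comes out comfortably below 0.05, so a 10%-versus-15% split of this size would indeed clear the "statistically significant" bar, small as it is.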
The researchers have succeeded in finding that the desire to intervene in Ukraine is correlated with ignorance of its location. As they point out, this is consistent with past findings of a similar nature. But the effect could be driven by a difference as small as that between 10% and 15% – which does not seem all that notable.
Basically what I’m saying is this: nobody really wants to invade Ukraine, and it seems counterproductive to beat up people who can’t find it on a map. After all, if someone thinks Ukraine is next to Kazakhstan, yet shares your policy preference, is it advisable to cast aspersions on that person’s intelligence?
For more about the perils of “95% statistical significance,” check out this excellent Nature article on p-values and effect sizes.