How to read polls like an expert — or, at least, not like a newbie

With just over a month until the midterm elections, those paying attention to politics will see an increasing flurry of polling, centered either on individual contests or on broader trends, such as views of President Biden and how people plan to vote in House races.

That flurry of polls will have a secondary effect: a flurry of news articles about the polls. And those news articles will fall somewhere along a spectrum that runs loosely from “offers useful information and context” to “tries to get you to click by casting a poll result in exaggerated or inaccurate terms.” Candidates, meanwhile, will be doing their candidate thing, casting whatever poll comes out in the way that is most likely to prompt you to give them money. (In the last month of a campaign, every candidate is just trailing his or her opponent, a deficit that can be all but erased with your $10 recurring contribution.)


Having already suffered 13 bouts of apoplexy after seeing headlines misrepresent what a poll indicates about a race, I decided that I would cut to the chase and ask people who actually look at or conduct polls for a living to offer basic guidelines for their consumption. I asked each to give three tips for laypeople encountering a poll; being good with numbers, each came in at precisely three.

Their responses are below, with light edits and underlining added for emphasis. At times, you’ll notice that the advice is similar. Think of their advice itself like a poll; those recurring responses, then, are in the lead for importance. Outside the margin of error, even.

Ariel Edwards-Levy

Polling and analytics editor at CNN

1. Polls are not precision instruments, and expecting them to give you exact answers will make them a lot less useful to you than treating them as one tool for gauging broad public sentiment.

2. Sometimes, when polls don’t agree, the nature of that disagreement can tell you a lot in itself. For instance, that there’s a lot of uncertainty about which voters will turn out for a specific election, or that most Americans aren’t very familiar with an issue they’re being asked about.

3. Transparency matters: At a minimum, any poll should have a clear explanation of who conducted and paid for the survey, how the pollsters chose and contacted the people they surveyed, what precisely those people were asked, and what steps the pollsters took to ensure that their survey was reflective of a broader population.

G. Elliott Morris

Data journalist at the Economist and author of “Strength in Numbers: How Polls Work and Why We Need Them”

1. Take the margin of error and double it. Remember that a poll is a sample of a larger population. Every pollster reports (or should report — if they don’t, that’s a red flag) a number called the “margin of error” that tells you how wrong their poll could be based on the chance that they talked to an unrepresentative sample of that larger group.

But one thing I write about in my book is that a single election poll is subject to much more error than this “sampling error” alone. For example, there is the chance that members of one party are less likely than members of the other to take polls (which is what happened in 2016 and 2020), and there is error in predicting who is actually likely to turn out to vote. So historically, the distribution of errors in election polls is about twice as large as the margin of error implies.
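To make that arithmetic concrete, here is a back-of-the-envelope sketch (not drawn from Morris’s book; it assumes a simple random sample, 95 percent confidence and an invented poll of 1,000 people) of how a reported margin of error is calculated and what doubling it looks like:

```python
import math

def margin_of_error(sample_size, proportion=0.5, confidence_z=1.96):
    """Textbook 95 percent margin of error for a simple random sample,
    in percentage points (worst case at proportion = 0.5)."""
    return confidence_z * math.sqrt(proportion * (1 - proportion) / sample_size) * 100

n = 1000                              # hypothetical poll of 1,000 respondents
reported = margin_of_error(n)         # about +/- 3.1 points
doubled = 2 * reported                # Morris's rule of thumb for real-world error

print(f"Reported margin of error: +/-{reported:.1f} points")
print(f"Doubled, per the rule of thumb: +/-{doubled:.1f} points")
```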

Which leads me to the second point:

2. Aggregate polls together. Because individual polls are subject to so much error, averaging them together gives you a better idea of the shape of opinion on a given subject — whether that be who people are going to vote for or whether they favor a certain policy. This also helps you avoid overreacting to any phantom swings between polls that happen because of the high degree of noise in any one poll.
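A minimal sketch of what such an average might look like (the numbers are invented; real aggregators also adjust for pollster quality, recency and house effects):

```python
# Hypothetical poll results for the same race (not real data).
polls = [
    {"candidate_a": 48, "candidate_b": 45, "sample_size": 800},
    {"candidate_a": 45, "candidate_b": 47, "sample_size": 1200},
    {"candidate_a": 47, "candidate_b": 46, "sample_size": 600},
]

# Weight each poll by its sample size, one simple way to combine them.
total_n = sum(p["sample_size"] for p in polls)
avg_a = sum(p["candidate_a"] * p["sample_size"] for p in polls) / total_n
avg_b = sum(p["candidate_b"] * p["sample_size"] for p in polls) / total_n

print(f"Candidate A: {avg_a:.1f} percent, Candidate B: {avg_b:.1f} percent")
```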

3. Finally, think about the process that created all the polls you’re looking at. Scrutinizing an individual pollster’s methods can help you determine if their numbers are more trustworthy. If a pollster conducts a poll only of people who have a landline and does not adjust (or “weight”) their sample to be representative of cellphone owners, for example, they are going to overestimate support for the types of things people who have landlines favor.

But this also helps us remember that the average pollster is generally subject to the same types of bias as all the other pollsters are. If the “data-generating process” (as statisticians like to call it) for one poll is biased because Democrats or Republicans or elderly people etc. aren’t answering its interviewers, then other polls are also likely to be off. This helps you calibrate expectations for errors ahead of an election.
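To illustrate the weighting adjustment Morris describes in his third point, here is a toy sketch (all numbers are invented) of reweighting a landline-heavy sample so it matches the population’s actual mix of phone users:

```python
# Toy post-stratification example (all numbers invented).
sample_share = {"landline": 0.8, "cell": 0.2}      # who the pollster actually reached
population_share = {"landline": 0.4, "cell": 0.6}  # what the population looks like

# Each group's weight is its population share divided by its sample share.
weights = {g: population_share[g] / sample_share[g] for g in sample_share}

# Hypothetical support for some policy within each group.
support = {"landline": 0.60, "cell": 0.45}

unweighted = sum(support[g] * sample_share[g] for g in sample_share)
weighted = sum(support[g] * sample_share[g] * weights[g] for g in sample_share)

print(f"Unweighted estimate: {unweighted:.0%}")  # skewed toward landline owners
print(f"Weighted estimate:   {weighted:.0%}")    # matches the population mix
```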

Emily Guskin

Polling analyst at The Washington Post

1. Polls are not forecasts! They are snapshots of the time when the poll was taken, reflecting what people were thinking while the poll was in the field, not what they will think in the future. Likely voter models are just that — imperfect attempts to represent a future, still-uncertain population.

2. At best, polls can provide roughly accurate estimates of candidate support, but they are not capable of saying which candidate leads in a close race. For vote-choice questions, if the two candidates’ percentages of support are separated by less than double the margin of error, the difference is likely not statistically significant. The error margin also doesn’t account for other sources of error, such as one candidate’s supporters responding at different rates than the other’s.

3. There’s plenty of interesting stuff asked beyond just the overall horse-race result. Look at how support breaks down among different voting groups, and read the rest of the topline or the poll story to see what people think about issues related to the election. There’s a lot to learn from what people think about important issues.


Nathaniel Rakich

Senior elections analyst at FiveThirtyEight

First, remember that although polls (at least, scientific polls from reputable firms) are still the best tool we have for predicting elections or measuring public opinion, they’re not meant to be super precise instruments. Even the best polls come with margins of error that are unavoidable byproducts of not talking to every single person in the country. So if, for example, you see a candidate “leading” a poll by only 1 or 2 points, it’s best to think of that race as roughly a toss-up, because a 1- or 2-point polling error is not at all unusual.

Second, though, don’t try to outguess the polls. The 2016 and 2020 elections had highly publicized polling errors that benefited Republicans, which has led some people to conclude that polls are universally biased against them. But the polling error in the 2012 election benefited Democrats, and if you look at polling errors over the past few decades, you can see that they bounce around unpredictably. So while there may be a new, Trump-era reason that polls underestimate Republicans, it could also just be a blip in history; you should be prepared for a polling error in either direction (or for no polling error at all, which is always possible).

Third, don’t pay too much attention to any single poll, especially if it’s an outlier, a.k.a. a poll that is way off the consensus of all the other polls. Instead, average them together to get the most realistic read on a race or issue.

Philip Bump

The putative author of this article

I’m just going to interject here to also suggest that you pay attention to why you’re seeing a poll. Is it from a campaign? Is it on a partisan site? Poll results can be confusing or contradictory, making it easy to elevate numbers to fit a particular purpose. Consider context.

And finally: If the polling firm at issue spends an inordinate amount of time espousing conspiracy theories on Twitter, sprinkle a few more grains of salt on their assessments.
