How to Read a Presidential Poll Without Getting Fooled
By James Brennan
A presidential poll is one of the few pieces of data most people read every week and still misunderstand by the time they close the tab. The headline gives you a number. The number sounds precise. It usually isn't, and even when it is, it isn't measuring what the headline implies it's measuring.
Reading a poll well doesn't take statistical training. It takes about ten minutes of context and a small set of habits that, once you have them, make most political coverage feel different. Some of it gets clearer. A lot of it starts to look thinner than it did before.
The headline number is rarely the important one
A poll story will lead with something like "Candidate A leads Candidate B 48% to 45%." That's the topline. It's the easiest piece of the poll to report and the hardest piece to interpret on its own. Three points sounds like a lead. In most national presidential polls, it isn't.
What you want to look at first is the spread relative to the margin of error, the share of undecided voters, and where the poll falls relative to other polls of the same race over the last two or three weeks. A single poll showing a three-point lead, with a three-point margin of error and ten percent undecided, is telling you almost nothing about who is winning. It's telling you the race is competitive. That's the actual headline.
Pollsters know this. Editors know this. The story still leads with the number because the number is what people read. So the habit to build is to look past the headline to the table underneath it before forming any opinion about what the poll says.
Margin of error is not what it sounds like
Most people read "margin of error: ±3 points" as a buffer. They think it means the candidate's real support is somewhere between three points higher and three points lower. That's not quite right, and the way it isn't right matters.
The margin of error is a confidence range, usually at 95%. It says that if you ran the same poll a hundred times under the same conditions, ninety-five of those polls would land within three points of the true value. That leaves a one-in-twenty chance the result is further off than the margin suggests, just from sampling alone. And it does not account for any error introduced by who you reached, who agreed to answer, or how the questions were worded.
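If you want to see where a number like ±3 comes from, the formula is short enough to run yourself. Here's a minimal Python sketch of the standard sampling calculation; the 48% share and the sample of 1,000 are illustrative, not from any particular poll.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a single share p from a sample of n.

    Uses the normal approximation; z = 1.96 corresponds to 95% confidence.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A candidate polling at 48% in a sample of 1,000:
print(round(margin_of_error(0.48, 1000) * 100, 1))  # ~3.1 points
```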
There's also a second thing the headline margin almost never mentions: when you're comparing two candidates, you have to roughly double the stated margin to get the margin on the spread between them. A poll with a three-point margin of error on each candidate has something closer to a six-point margin on the lead. A "three-point lead" in a poll with a three-point margin of error means the race could easily be a tie. It could also mean the trailing candidate is actually three points ahead. That sounds wild until you remember the pollster is telling you that, in plain text, on the methodology page nobody reads.
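The same arithmetic extends to the lead itself. "Roughly double" is a shortcut for the formula below, which accounts for the fact that the two shares come from the same sample and their errors are correlated. A minimal sketch, again with an illustrative 48–45 race and a sample of 1,000:

```python
import math

def margin_on_lead(p1: float, p2: float, n: int, z: float = 1.96) -> float:
    """95% margin of error on the spread p1 - p2 when both shares come
    from the same sample (multinomial covariance included)."""
    return z * math.sqrt((p1 + p2 - (p1 - p2) ** 2) / n)

# A 48-45 race in a sample of 1,000:
print(round(margin_on_lead(0.48, 0.45, 1000) * 100, 1))  # ~6.0 points
```

So the three-point lead comes with a six-point margin, which is how "leading by three" and "possibly trailing by three" can both be true at once.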
Sample size, and why 1,000 is usually enough
The first instinct most people have when they see a poll of a thousand people is to ask how that could possibly represent two hundred million voters. It's a fair question with a counterintuitive answer: it can, surprisingly well, as long as the thousand are chosen carefully.
Once a sample is large enough to be representative, doubling it does not cut the margin of error in half. The margin shrinks with the square root of the sample size, so doubling the sample only trims it by about thirty percent. A poll of 1,000 has a margin of error somewhere around three points. A poll of 2,000 brings that down to roughly two. A poll of 10,000 brings it to about one. Going from 1,000 to 10,000 doesn't make the poll ten times more accurate. It makes it modestly more precise, while costing far more to run and not necessarily improving the parts of polling that are actually hard.
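You can watch the diminishing returns directly. This reuses the worst-case version of the sampling formula from earlier (support at 50%, where the margin is largest), with the sample sizes from the paragraph above:

```python
import math

# Worst-case 95% margin of error (p = 0.5) for several sample sizes.
for n in (1000, 2000, 10000):
    moe = 1.96 * math.sqrt(0.25 / n)
    print(f"n = {n:>6,}: ±{moe * 100:.1f} points")
# n =  1,000: ±3.1 points
# n =  2,000: ±2.2 points
# n = 10,000: ±1.0 points
```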
The hard parts aren't about how many people you reach. They're about which people you reach, whether they're willing to talk to you, and whether the ones who are willing to talk are systematically different from the ones who aren't. A perfectly executed sample of 800 is more useful than a sloppy sample of 8,000.
Who answered, and who didn't
Response rates on telephone polls used to sit comfortably above thirty percent. Today, most are in the single digits. That doesn't mean polls are useless, but it does mean the people who pick up are not a random slice of the country. They tend to be older, more politically engaged, more likely to follow the news closely, and more comfortable answering questions from an unknown number. Pollsters correct for this with weighting, which adjusts the raw responses to better match the demographics of the actual electorate.
Weighting works well for things pollsters can measure — age, gender, race, education, region. It works less well for things that are harder to see, like willingness to vote, trust in institutions, or political views among people who systematically refuse to participate in surveys. The polling misses of the last decade have mostly been failures of weighting on those harder-to-see traits, not failures of the underlying math.
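To make "weighting" concrete, here's a minimal one-variable post-stratification sketch in Python. Real pollsters rake across several variables at once; the age brackets, counts, and population targets here are invented for illustration.

```python
# Reweight respondents so the sample's age mix matches assumed
# population targets. All numbers are illustrative.
from collections import Counter

respondents = (["18-34"] * 150) + (["35-64"] * 500) + (["65+"] * 350)
population_targets = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}

sample_share = {g: c / len(respondents) for g, c in Counter(respondents).items()}
weights = {g: population_targets[g] / sample_share[g] for g in population_targets}

for group, w in weights.items():
    print(f"{group}: weight {w:.2f}")
# 18-34: weight 2.00  (underrepresented, counted up)
# 35-64: weight 1.00
# 65+:   weight 0.57  (overrepresented, counted down)
```

Every respondent's answers get multiplied by their group's weight before the topline is computed, which is how 150 young respondents end up standing in for thirty percent of the electorate.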
A good poll will publish its weighting choices. The numbers won't mean much unless you spend time with polls regularly, but the willingness to publish them is itself a signal that the pollster is trying to be transparent about decisions other pollsters would rather keep quiet.
Likely voter, registered voter, all adults — three different polls
Buried at the top or bottom of every poll is one of three phrases. "All adults." "Registered voters." "Likely voters." Those three populations behave differently, and a five-point swing between two polls of the same race can sometimes be explained entirely by the fact that one polled likely voters and the other polled registered voters.
All adults is the broadest and least useful for predicting elections, because many of them won't vote. Registered voters is closer, but registration is no guarantee of turnout, and turnout is where elections are actually decided. Likely voter models try to narrow the sample to the people most likely to actually cast a ballot. They do this through some combination of past voting history, stated intent, enthusiasm, and demographic patterns. Different pollsters use different likely voter models, and those choices can move a result by more than the margin of error.
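Here's a toy version of a cutoff-style likely voter screen, just to show the mechanics. The questions, point values, and cutoff are all invented; real models are proprietary and vary widely, which is exactly why two pollsters can disagree about the same electorate.

```python
# Toy likely-voter screen: score each respondent on a few self-reported
# items and keep those above a threshold. Items and cutoff are invented.
def likely_voter_score(r: dict) -> int:
    score = 0
    score += 2 if r["voted_last_election"] else 0
    score += 2 if r["intends_to_vote"] else 0
    score += 1 if r["knows_polling_place"] else 0
    score += 1 if r["follows_race_closely"] else 0
    return score

respondents = [
    {"voted_last_election": True,  "intends_to_vote": True,
     "knows_polling_place": True,  "follows_race_closely": False},
    {"voted_last_election": False, "intends_to_vote": True,
     "knows_polling_place": False, "follows_race_closely": False},
]

CUTOFF = 4
likely = [r for r in respondents if likely_voter_score(r) >= CUTOFF]
print(len(likely), "of", len(respondents), "pass the screen")  # 1 of 2
```

Move the cutoff by one point, or score enthusiasm differently, and a different electorate falls out the other end. That choice is invisible in the headline number.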
In the closing weeks of a presidential race, likely voter polls are the only ones worth reading for predictive value. Before then, registered voter polls give you a stable picture of preference that isn't being distorted by enthusiasm swings. All-adult polls are useful mainly for measuring sentiment, not outcomes.
Why one poll is almost never enough
Any single poll, including a well-conducted one from a respected outfit, can land outside its margin of error by chance. That's what a 95% confidence interval implies. One in twenty polls is statistically expected to be an outlier even before accounting for the imperfections of weighting and likely voter modeling.
The signal lives in the average of polls, not in any individual one. When five recent polls of the same race all show a tied result and the sixth shows a six-point lead, the right interpretation is almost always that the sixth is the outlier — not that the race suddenly broke open. Reporters love a poll that moves the story. Statisticians treat single polls like data points, not like turning points.
This is why polling averages, run carefully and weighted by pollster quality and recency, are more useful than any single survey. They smooth out individual outliers and let trend lines emerge. A polling average that's moved two points in a candidate's direction over six weeks is a real shift. A single poll that suddenly shows a five-point swing from last week's average usually isn't.
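A bare-bones version of a recency-weighted average looks like this. The half-life, the polls, and the leads are invented; real averages also weight by pollster quality and adjust for house effects.

```python
# Minimal recency-weighted polling average: each poll's weight decays
# exponentially with its age in days. All numbers are illustrative.
HALF_LIFE_DAYS = 14

polls = [  # (lead in points, days ago)
    (1.0, 2), (0.0, 5), (6.0, 6), (0.5, 10), (-0.5, 15), (0.0, 20),
]

weights = [0.5 ** (age / HALF_LIFE_DAYS) for _, age in polls]
average = sum(lead * w for (lead, _), w in zip(polls, weights)) / sum(weights)
print(f"weighted average lead: {average:+.1f} points")
# weighted average lead: +1.4 points (the +6 outlier is damped, not erased)
```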
How to spot a poll worth ignoring
Not every poll deserves your attention. Some are conducted by campaigns, allied groups, or partisan outfits whose business model is producing favorable numbers. Others are run by pollsters with consistently bad track records who keep operating because the public can't easily tell them apart from serious ones.
There are a few signals to watch. A pollster that doesn't publish its methodology, sample size, or weighting is not a pollster worth taking seriously. A poll commissioned by one campaign or party and released without an independent firm conducting it should be treated as advocacy, not measurement. A poll whose questions are obviously slanted — long preambles, loaded wording, push-poll framing — is producing a result the sponsor wanted, not a result that reflects anything else.
When in doubt, look at how the pollster has performed historically. Several organizations publish pollster ratings based on past accuracy. They aren't perfect, but they're a far better filter than reputation or visibility. A relatively unknown pollster with a strong track record is a more reliable source than a famous brand whose recent calls have been off.
What polls genuinely can't tell you
A presidential survey can tell you, within a range, what some group of people would say today if asked the question on the form. That's it. It cannot tell you what those people will do on election day, which can be different from what they said. It cannot tell you why they hold the views they hold, even if it asks a follow-up question — people are not always good narrators of their own preferences. And it cannot reliably tell you about people who don't take polls, who are increasingly a large share of the population.
It also cannot, on its own, predict the outcome of a presidential election under the Electoral College. National polls measure the popular vote, which has split from the Electoral College result more than once in recent memory. State-level polls in close states are doing the actual predictive work in modern presidential races, and those are usually smaller, less frequent, and more variable than national polls. The closer the race, the less a national polling lead means.
A working habit for the next year
If you read political coverage regularly, the most useful thing you can do is build a small routine around polls. When a new one drops, check four things before forming an opinion: who ran it, who was polled, when, and how it fits with the recent average. Those four facts will reorient most stories you read about a single poll into stories about the race itself.
A presidential poll is a measurement, not a verdict. It comes with uncertainty the headline always understates, and with assumptions the reader is supposed to know exist. Once you start treating that uncertainty as part of the data rather than a footnote, the polls don't stop being interesting. They start being useful in a different way — less as predictions, more as the rough shape of a country still making up its mind.