“Hey, did you see Trump is up by 1% in the polls?”
“Look, Harris is up by 4% now!”
“The polls are so close this election!”
How many times in the past few months have Americans heard words like these? As election day draws closer, the media can’t stop talking about the most recent polling numbers and what they mean. But that raises a question for anyone following the U.S. presidential election: what do these polls actually mean?
Take the 2024 election. In the New York Times/Siena poll conducted from September 11 to September 16 — a collaborative effort between the New York Times and the Siena College Research Institute in New York — Harris and Trump were tied nationally, both at 47%. With the two candidates neck and neck, it’s hard to make sense of which candidate is polling better. But in reality, much of the confusion with polling comes from not understanding how to interpret the data.
Though Trump and Harris may be tied, with the New York Times/Siena poll’s 3% to 4% margin of error, the true percentage of people voting for Trump or Harris could be as high as 51% or as low as 43%. These are considerable differences that are important to take into account — and forgetting the margin of error can make polls falsely appear to have done a bad job collecting their data. “In general, I’d say [polls are] actually pretty darn good,” as Statistics Teacher Ms. Jennifer Seymour put it. “They’re usually within the margin of error. It’s just that people forget that the margin of error is so big.”
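Where does that 3% to 4% figure come from? For a simple random sample, the standard 95% margin of error for a proportion can be sketched in a few lines. The sample size of 1,000 below is a typical poll size, not the Times/Siena poll’s exact figure, and real polls use more complex weighting, so this is only an approximation:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p measured in a sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# A candidate polling at 47% in a survey of roughly 1,000 respondents
moe = margin_of_error(0.47, 1000)
print(f"Margin of error: +/-{moe:.1%}")                      # about +/-3.1%
print(f"Plausible range: {0.47 - moe:.1%} to {0.47 + moe:.1%}")
```

A 47%–47% “tie” therefore really means each candidate’s support plausibly sits anywhere in a band several points wide.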
Even with a smaller margin of error, national polling would be of limited use. The U.S. uses the Electoral College system, meaning the popular vote — which is what national polls track — does not directly decide the outcome of the election. Instead, each state gets a certain number of electoral votes, and the candidate with the most electors overall wins the election. Therefore, it doesn’t matter much that Harris is up by 4% nationally if much of that advantage comes from California or Massachusetts. But if more of that advantage came from swing states, it would matter significantly.
Luckily, pollsters know national data is not as helpful, which is why they also poll individual states. “The battleground ones are the polls that I would be more interested in looking at,” Statistics Teacher Mr. Juan Vidal said. “[Those] are the ones where…the decisions are made on the whole election.” In the same New York Times/Siena poll, Harris is up 50% to Trump’s 46% in Pennsylvania — a deciding state in the election, as the winner of Pennsylvania has won every U.S. presidential election since 2008. This data is more telling than the 47% tie nationally, though it’s important to also note that Harris is not up in all other swing states.
But just looking at swing state data and taking it at face value is not enough either. Another important factor is how the polls are being conducted. For presidential elections, phones remain the most popular way of polling. According to a New York Times article, over 90% of voters for their New York Times/Siena polls were reached by phone. Generally, pollsters use a national list of registered voters to decide who to call. But if you ask your parents, neighbors, friends, or teachers if they’ve ever been called for a presidential poll — their answer will almost certainly be no. So who is actually being called?
First of all, California is definitely not the focus in presidential polling. It isn’t by any means a swing state (46.2% of its registered voters are Democrats, compared to 24.7% Republicans), so its data is not as important as that of other states, such as Michigan or Pennsylvania. Even in swing states, however, the number of voters being polled is low. The New York Times stated that for their polls, generally less than 2% of people will answer the phone.
This very low response rate contributes to another essential part of how polling works: sample-size polling. A sample-size poll surveys a small fraction of citizens, chosen to represent the larger population. If a town had 10,000 people, a sample-size poll might survey 100 of them. If 5,000 of those people are women, the sample would include 50 women; if 1,000 are children, it would include 10 children, and so on.
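The proportional allocation in that hypothetical town can be sketched directly — each group gets a share of the sample equal to its share of the population (the numbers below are the made-up ones from the example):

```python
# Proportional allocation for a hypothetical town of 10,000 people
# being surveyed with a sample of 100.
population = {"women": 5000, "men": 4000, "children": 1000}
total = sum(population.values())
sample_size = 100

allocation = {group: round(count / total * sample_size)
              for group, count in population.items()}
print(allocation)  # {'women': 50, 'men': 40, 'children': 10}
```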
Obviously, polling the entire country is not possible, especially with low response rates, so pollsters use a sampling model. According to an article by the California Institute of Technology, sample sizes for U.S. election polls (national or state-specific) are generally around 1,000 to 1,500 people.
In theory, sample-size polling can actually be very effective if “your sample is representative of the whole population,” as Mr. Vidal put it. If election polls proportionally represent voter demographics, the data will be quite accurate. Mr. Vidal added, “It is always better to get a bigger sample size, [but] at some point, it doesn’t matter that much…You can spend a whole bunch of money getting 10,000 extra people [and] it’ll give you maybe 1%” more accuracy.
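Mr. Vidal’s point about diminishing returns follows from the same margin-of-error math: accuracy improves only with the square root of the sample size, so each additional respondent helps less than the last. A quick illustration, assuming a simple random sample and a 50/50 split (the worst case for the margin of error):

```python
import math

def moe(n, p=0.5, z=1.96):
    # 95% margin of error shrinks with the square root of n
    return z * math.sqrt(p * (1 - p) / n)

for n in (1000, 4000, 11000):
    print(f"n = {n:5d}: margin of error = +/-{moe(n):.1%}")
# Quadrupling the sample from 1,000 to 4,000 only halves the margin,
# and 10,000 extra respondents beyond 1,000 buys roughly two points.
```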
Nevertheless, there are many complications in finding a sample that accurately represents the voter population. This is largely because the demographics of who turns out to vote change depending on the candidate, as happened, for example, with former President Donald Trump.
“Trump brought out a whole demographic of voters who did not normally vote as much in presidential elections and so this was often white, non-college educated [voters],” Ms. Seymour explained. “They just came out in droves because they were really excited and energized by this candidate.” According to American Progress’s analysis of voting patterns in 2016, “exit polls claimed that white college graduates actually outnumbered non-college-educated white voters” by 37% to 34%, when in reality white non-college-educated voters outnumbered their college-educated counterparts by 45% to 29%. Polls underestimated this demographic because they relied on previous data, and in turn, they underestimated Trump — which is part of the reason most polls incorrectly predicted that Hillary Clinton would win.
Some other issues with sample-size polling are more predictable and therefore easier to solve. For instance, the fact that some demographics are more likely than others to answer a call about polling means that calling people at random might skew the data. For example, as opposed to working adults, “older retired people are much more likely to answer phone calls,” Ms. Seymour explained. Since older voters tend to lean more Republican than Democrat, calling at random could give extra weight to Republicans, introducing an inaccuracy to the poll. To avoid this, pollsters can call more working adults to “find as many likely voters in that demographic,” as Ms. Seymour explained. However, if they cannot get enough data, “they extrapolate out those percentages to fill the projection for that desired demographic.”
“They find a few, and then they just extrapolate that out, and they just kind of extend that percentage,” Ms. Seymour explained. This, of course, lessens accuracy, but is a decent solution.
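The extrapolation Ms. Seymour describes can be sketched as a simple weighting step: each respondent group is scaled so its share of the sample matches its target share of the electorate. All numbers below are made up for illustration, not real polling data:

```python
# Toy post-stratification weighting (all figures are illustrative).
# Suppose college-educated voters answered the phone more often than
# their real share of the electorate, skewing the raw sample.
sample_counts = {"college": 60, "non_college": 40}          # who answered
target_shares = {"college": 0.40, "non_college": 0.60}      # true electorate mix
candidate_support = {"college": 0.45, "non_college": 0.60}  # support for X per group

n = sum(sample_counts.values())

# Raw estimate: just average over whoever happened to answer
raw = sum(candidate_support[g] * sample_counts[g] for g in sample_counts) / n

# Weighted estimate: scale each group up or down to its target share
weights = {g: target_shares[g] / (sample_counts[g] / n) for g in sample_counts}
weighted = sum(
    candidate_support[g] * sample_counts[g] * weights[g] for g in sample_counts
) / n

print(f"Raw estimate:      {raw:.1%}")       # overweights college graduates
print(f"Weighted estimate: {weighted:.1%}")
```

Here the raw sample overstates the candidate’s weaker group, so weighting shifts the estimate — which also shows why the accuracy of a poll depends on getting those target shares right.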
Ultimately, polls should be viewed with “a grain of salt,” as Nason Li (‘25) put it. “Polls are a snapshot of the public’s feelings about a particular candidate or issue at that time,” he added. Parsa Avaz-Barandish (‘27) similarly said that polls help him have a general idea of “what kind of election we have.”
Polls are not perfect. There is margin of error, there are biases, and there are unpredictable surprises in every election. Still, checking and understanding polls in battleground states provides useful context for elections.