
This column reflects the opinion of the writer.

Spin Control: What to know about polls that TV pundits usually don’t tell you

Vice President Kamala Harris speaks on the South Lawn of the White House in Washington, D.C., on Monday.  (Erin Schaff/New York Times)

The presidential race might best be described as an exercise in “too much.”

Too much money spent. Too much time taken up. Too much reliance on polls.

Never has that third point been so obvious as last week, in the wall-to-wall news coverage of Joe Biden getting out of the presidential race and Kamala Harris getting in.

By the second day of this news bombshell, cable news pundits seemed to have exhausted all of their thoughts about the shakeup in a race they’d been prognosticating for months. So they did what they always do in such a situation. They asked what the polls were showing.

The fact that there even were polls may have surprised some viewers. But in the six months before a presidential election, someone, somewhere, is always conducting a poll. Usually multiple someones.

The people the news networks pay to explain polls to the viewers – and, in many cases, to the talking heads with whom they share the studio – sometimes warned it was too early to make much of the results. Then they reported the results. And the talking heads spent the next segment of the broadcast making much of the results.

The previous week, Donald Trump had been several percentage points ahead of Biden. By Wednesday he was only a point or two ahead of Harris. The slicing and dicing of multiple sets of results continued for the rest of the week, with the usual admonishment that it might be too early to put too much emphasis on poll results, followed by plenty of time spent putting emphasis on those results. Sometimes the results showed Trump slightly ahead, sometimes Harris slightly ahead.

While the numbers flashed large on the screen with photos of the candidates, an anchor or reporter would sometimes briefly note that the difference between the two was within the margin of error, so the candidates were statistically tied.

Soon they were dissecting the support of various voter “blocs” – like women or suburban women or Black women or Black suburban women – and showing how Harris was doing better against Trump than Biden had been.

Along with this surfeit of numbers was a paucity of qualifiers that should accompany any discussion of polls.

To start with, it matters who is being polled. Are they registered voters, which will include people who register but rarely if ever cast a ballot; regular voters, which may include people who cast a ballot in some elections along with some who vote in every election; or so-called “perfect” voters who have voted in at least three of the last four elections? Results almost always vary.

It matters how many people are polled. It matters if it’s a nationwide poll, which is a statistical representation of the entire country, or a statistically valid poll in each of the 50 states, because the presidential election is decided by the results of the individual states conveyed through the Electoral College.

It matters how the voters were contacted. A generation ago, people were called at home, on landlines. After some embarrassing failures, pollsters expanded that to include calls on cellphones, then texts, then social media. It’s still hard to get a good reading on voters in the 18- to 34-year-old age group.

Each polling firm has its own formula for contacting voters. Because of that, it’s difficult to compare the results of a poll conducted last week by one firm with a poll conducted this week by a different firm, even if they have the same margin of error. The two polls are questioning different people and differ in many other ways. But television pundits do that all the time.

They also usually move quickly through what a margin of error means, if they don’t just skip over it entirely. If the margin of error is 5 percentage points with a 95% confidence level, it doesn’t just mean that candidates separated by less than that can be tied. It means that either candidate’s number, or both, could be off by as much as 5 points in either direction, and that if 100 polls of a similar group of voters were taken at the same time using the same methods, about 95 would land within that range. In the other five, the results could be wildly different.

If the results of that poll are divided to show how women or a particular age or racial group feel about the candidates, the margin of error goes up as the size of the subgroup goes down, and confidence in those smaller numbers drops with it.
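For readers who want to put rough numbers on that, the standard formula for a poll’s margin of error at 95% confidence is about 1.96 times the square root of p(1-p)/n, where n is the number of people questioned. The short Python sketch below is only an illustration with hypothetical sample sizes, not any particular pollster’s method; it shows the error roughly doubling when a sample of 1,000 is cut down to a subgroup of 250.

import math

def margin_of_error(sample_size, proportion=0.5, z=1.96):
    # 95% margin of error, in percentage points, for a polled proportion.
    # proportion=0.5 is the worst case, the figure pollsters usually report.
    return 100 * z * math.sqrt(proportion * (1 - proportion) / sample_size)

# Hypothetical full sample of 1,000 respondents: about +/- 3.1 points.
print(round(margin_of_error(1000), 1))

# Hypothetical subgroup of 250 (one age bracket, say): about +/- 6.2 points.
print(round(margin_of_error(250), 1))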

News organizations often emphasize the standing of candidates in a poll – what’s often called the “horse race.” Campaigns, which are also polling constantly, primarily use polls to see how voters are reacting to issues, and adjust the messages in ads and stump speeches. When looking at horse race results from a candidate’s poll, it’s important to know if that question was asked at the beginning, or after a series of questions in which the candidate’s virtues were extolled and the opponent’s shortcomings denounced. If the campaign won’t show the entire poll, a news organization shouldn’t report any part of it.

But most of all, it’s important to remember something the pollsters who worked for The Spokesman-Review reminded me of whenever they gave me the results of a poll.

Polls don’t predict the results of a future election; they illustrate how people are feeling now, or at least how they were feeling a few days ago when the questions were being asked. And feelings can always change.
