Research | Fall 2019 Issue

Are Polls Reliable?

The polls in 2016 suggested Hillary Clinton would win the election. Can they still be trusted?

By Bill Boyarsky

ALTHOUGH IT’S A SMALL, NICHE INDUSTRY, the political polling business has an inordinate influence on politics and how people view the electoral process. So, when many pollsters predicted that Hillary Clinton would win the 2016 election, their failure was held up as another weakness of our democratic system. It also triggered some major soul-searching on the part of pollsters. Their leading organization, the American Association for Public Opinion Research, investigated, and in 2017 it reported: “The 2016 presidential election was a jarring event for polling in the United States. Pre-election polls fueled high-profile predictions that Hillary Clinton’s likelihood of winning the presidency was about 90 percent, with estimates ranging from 71 to over 99 percent. When Donald Trump was declared the winner of the presidency in the early hours of November 9, it came as a shock even to his own pollsters. There was [and continues to be] widespread consensus that the polls failed.”

(Note: A summary of the polls, many of which accurately predicted that Hillary Clinton would win the popular vote, is included in this issue’s Infographic.)

I talked to a lot of academics and poll takers to find out why so many surveys were wrong. By examining the reliability of the data the polls depend on, I found some answers. “We are in a data collection revolution right now,” UCLA political science professor Matt Barreto told me when we talked in his office in Bunche Hall.

“There is no such thing as an authoritative poll. None. No one poll should ever be taken as authoritative,” said Bill Schneider, professor at the Schar School of Policy and Government at George Mason University.

Others shared the skepticism and blamed the mass media for hyping inaccurate results. Retired USC public policy professor and media pundit Sherry Bebitch Jeffe said, “Trump has laid the foundation of mistrust of the media, and I think people perceive polling as part of the media. And it doesn’t help if the media often get it wrong.”

The polls and politics

It’s an important matter. Polls have become intertwined with the electoral process. Fluctuations are hyped by the mass media. Political surveys are reported constantly on 24-hour cable news. They flash through myriad online sources and are quoted regularly by prestige newspapers. The numbers guide campaign strategies and shape the public policies of candidates. That can be seen in the way Democratic presidential candidates have changed their health care proposals in response to polling. With the credibility of elections facing increased skepticism, the question of whether erroneous polls destroy faith in democracy is of great significance.

Not everyone agrees that all polls were wrong in 2016, or that their performance was a threat to democracy. “No, I think that’s ridiculous,” said UCLA political scientist Lynn Vavreck. “Hillary Clinton won the popular vote. The polls showed she was going to win the popular vote. They were closer in 2016 than they were in 2012 in the actual popular vote election outcome, which is what most of these polls are measuring. … Polling is not broken. That should not be the takeaway [from your story]. Polls were better in 2016 than they were in 2012.”

Indeed, one subtlety of the 2016 polling has escaped some notice. Most polls predicted that Hillary Clinton would win because a plurality of Americans favored her on the eve of the election. That proved correct, as Clinton received about 3 million more votes than Donald Trump. American presidential elections, however, are not won by commanding the popular vote, and Trump defeated Clinton in the Electoral College. Failing to anticipate that outcome was not a failure of polling the popular vote.

Still, Vavreck said, the polling process needs improvement. “Whatever mistakes they made in 2016, they are going to go forward and make sure they don’t make them again.”

I got a variety of views as I called on political scientists who have devoted their careers to the study of the political process and the elections that shape it.

When I had trouble finding Bunche Hall, home of the UCLA political science department, a student told me it was a tall building with odd windows that made it look like a waffle. They did.

I went up to the third floor and spoke with Barreto, a nationally known expert on Latino politics, and Vavreck, co-author of Identity Crisis: The 2016 Presidential Campaign and the Battle for the Meaning of America. Her fellow authors are John Sides, professor of political science at George Washington University, and Michael Tesler, associate professor of political science at UC Irvine.

I also interviewed Jill Darling, survey director of the USC Dornsife College’s Center for Economic and Social Research, which collaborates on the Los Angeles Times poll. At Jeffe’s home, I sat at the dining room table and talked to her and Schneider. Both have been my friends for many years. And finally, I drove to Loyola Marymount University to hear the views of political science Professor Fernando J. Guerra, founding director of LMU’s Center for the Study of Los Angeles. His poll focuses on the Los Angeles area. It shows how surveys can impact local politics.

Polling assumptions

I was struck by several facts. First, those surveyed are selected from lists obtained from commercial or other sources that may or may not accurately identify them as voters or potential voters. Some are telephoned by pollsters; others are reached online. Second, fewer than 10 percent of those contacted respond, a far lower rate than a decade or more ago.

Third, sharp cutbacks at news media organizations have reduced the number of journalists assigned to polling, as I know from my own experience. Buying a survey is much cheaper than hiring reporters and editors. Yet competitive pressure to be first has impelled the news media to blast out polls, often purchased from unreliable sources, without examining whether they are statistically sound. For example, every survey carries a statistical margin of error, usually two or three percentage points or more. If a poll shows Candidate A only two points ahead, that lead falls within the margin of error and may not be meaningful, or even correct, a fact that should be explained to readers and viewers.
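To see why, it helps to look at the arithmetic behind the margin of error. Here is a minimal sketch in Python, assuming a simple random sample and the worst-case 50-50 split; the 1,000-person sample size is illustrative, not drawn from any particular poll:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95 percent margin of error for an estimated
    proportion p from a simple random sample of size n.
    p = 0.5 is the worst case and gives the widest interval."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical 1,000-person sample:
print(f"+/- {margin_of_error(1000) * 100:.1f} points")  # about +/- 3.1
```

By this rough math, a two-point lead in a 1,000-person poll sits comfortably inside the roughly three-point margin of error, and that is before accounting for errors the formula ignores, such as an unrepresentative sample.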

“The media, including print and TV, were front and center,” Barreto said, “and the media has liked this, because it helps them recap the race, understand the race, maybe even predict the race.”

But techniques are changing so rapidly that most of the public and much of the press don’t understand what’s happening.

Thirty years ago, when I started working with pollsters as a Los Angeles Times political reporter, surveys were a simple matter. Phone numbers were selected randomly. A poll taker would call and ask you to take part in an election survey. Most likely, you’d be pleased by the attention. It was a big deal. The media and the pollsters associated with it were widely respected. Usually there was someone at home to pick up the phone. Now, nobody might be home. If someone is, he or she may not want to answer questions about how they plan to vote. Worse yet, the person may dislike the media.

Today, said Lynn Vavreck, “I think polling is really moving away from random sampling, because nobody has a landline anymore. People don’t want to get called on their cellphones. It’s hard to reach people.”

For example, a national poll by Quinnipiac University in Connecticut selects a sample of about 1,000 women and men who are 18 or older. The USC poll has a sample of 8,000. Either way, that is a tiny fraction of the electorate, but it is designed to be representative of the voting population. “It’s like a blood test,” said USC’s Jill Darling. A tiny sample of blood represents all of the blood in the body.
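A toy simulation makes the blood-test analogy concrete. In the sketch below, the electorate and its 52 percent support for a hypothetical Candidate A are invented for illustration:

```python
import random

random.seed(1)
TRUE_SHARE = 0.52  # invented: 52 percent of all voters favor Candidate A

# Poll 1,000 randomly chosen voters; each favors A with probability 0.52.
sample = [random.random() < TRUE_SHARE for _ in range(1000)]
estimate = sum(sample) / len(sample)
print(f"True share: 52.0%  Sample estimate: {estimate:.1%}")
```

Run this a few times with different seeds and the estimate typically lands within about three points of the true share, the margin of error computed above. The catch, as Vavreck noted, is that a genuinely random sample is increasingly hard to get.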

Quinnipiac and USC obtain the names of potential respondents from a variety of sources, including voter rolls, the U.S. Postal Service and a growing number of data-collecting firms. Quinnipiac uses a company called Dynata, which creates panels of people who are willing to participate in surveys for businesses, including polls. Dynata’s website says: “We actively recruit consumers, business professionals and hard-to-reach individuals as members of our research panels, and we build trusted ongoing relationships.”

A computer randomly selects phone numbers, listed and unlisted, including cellphones. Questioning is done over a four- to seven-day period, from 6 p.m. to 9 p.m., by a mix of students and non-students trained for the job. Interviews are conducted in Spanish and English. “If there is a no answer, we will call back that number. We will call every number where there is a no answer at least four times,” the Quinnipiac website said.

The L.A. Times poll is conducted online. It sends tablet devices to those on its survey list who do not have computers, and it pays people a small amount to participate.

Some respondents are recruited more informally, from people volunteering in what is known as an opt-in panel. “Opt-in panels are what most [survey] panels come from because they are super cheap,” Barreto said. “It’s where they just put an ad on Facebook, and it says, ‘Click here and get paid for your thoughts.’ Or, ‘Win a free iPhone,’ and all you have to do is take one survey a week.”

Once a panel is selected, its results are mathematically weighted so that the sample matches the Census. Suppose Latinos make up 30 percent of a 1,000-person Los Angeles County panel, while the Census shows they are actually 48 percent of the county’s population. The Latino responses are then weighted upward, and the others downward, until the adjusted panel mirrors the Census.
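Here is a minimal sketch of that weighting step in Python, using the illustrative 30 percent and 48 percent figures above, plus invented support numbers to show what the adjustment changes:

```python
# Post-stratification sketch; all numbers are illustrative.
sample_share = {"Latino": 0.30, "Other": 0.70}  # shares observed in the panel
census_share = {"Latino": 0.48, "Other": 0.52}  # shares in the population

# Each group's responses are scaled by (population share / panel share).
weights = {g: census_share[g] / sample_share[g] for g in sample_share}
# weights -> Latino: 1.6, Other: about 0.74

# Invented candidate support by group, to show the effect of weighting:
support = {"Latino": 0.60, "Other": 0.40}
raw = sum(sample_share[g] * support[g] for g in support)
adjusted = sum(sample_share[g] * weights[g] * support[g] for g in support)
print(f"Unweighted: {raw:.1%}  Weighted: {adjusted:.1%}")  # 46.0% vs. 49.6%
```

The hazard is that an underrepresented group’s few respondents carry large weights, so any quirk in who agreed to participate gets magnified.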

It is in this process that mistakes are made. “It’s complex,” said Barreto. “You have to be a social scientist and a methodologist today.”

What went wrong?

Two errors illustrated the failures of polling in the 2016 election.

One was made by state polling organizations, some run by media outlets and universities, others privately owned. Generally, national polls got the final results right, showing Clinton would beat Trump in the popular vote, which she did. But most organizations polling the states failed to catch a key factor: Older white men with high school educations or less supported Trump in the Midwestern battleground states, where polls showed that Clinton was favored — but Trump won narrowly. Many analysts considered this the pollsters’ biggest mistake of 2016.

“Education was strongly correlated with the presidential vote in key states: That is, voters with higher education levels were more likely to vote for Clinton,” said the American Association for Public Opinion Research. “Yet some pollsters — especially state-level pollsters — did not adjust for education in their weighting, even though college graduates were over-represented in their surveys. This led to an underestimation of support for Trump.” In other words, there were not enough older non-college-educated white men in the survey — and, pollsters said, some of them did not want to answer survey questions.

Another polling error was in sampling Latino voters.

Loyola Marymount’s Fernando Guerra, an expert in polling Latinos, told me his curiosity was piqued by some surveys in the 2004 presidential election that showed George W. Bush faring better among Latinos than other surveys did.

Guerra didn’t believe the polls with the higher figures. “A good proportion of Latinos were Latinos who lived in middle-income or non-Latino districts,” he said. The surveys had underrepresented Latinos from working-class and poorer areas. In other words, too much San Gabriel Valley, not enough East Los Angeles.

In subsequent elections, he sent LMU students to polling places throughout the city to interview people after they voted in Latino, Anglo, African American and Asian American areas — and got what he considered a more accurate sample.

Nobody I talked to had great faith that polling would be better in 2020 than it was in 2016. By the end of my exploration, all I knew was that, with all the media attention, polls would continue to be a dominating force in political life.

A force for good or bad? Or just another institution met with skepticism? Pollsters at the American Association for Public Opinion Research had asked: “Did the polls fail? And if so, why?”

Those questions are still open and leave many years of work ahead for the current generation of political scientists and their successors.

Bill Boyarsky

Boyarsky is a veteran journalist and author. He was with the L.A. Times for 31 years, serving as city editor, city-county bureau chief, political reporter and columnist. He is the author of several books, including "Inventing L.A.: The Chandlers and Their Times."
