
Harvard historian and New Yorker contributor Jill Lepore looks with informed disgust upon the assumptions and methodology that underlie pollsters’ efforts to track the 2016 presidential race.

For starters, “the Sea of Polls is deeper than ever before, and darker,” Lepore wrote in mid-November:

From the late nineteen-nineties to 2012, twelve hundred polling organizations conducted nearly thirty-seven thousand polls by making more than three billion phone calls. Most Americans refused to speak to them. This skewed results. Mitt Romney’s pollsters believed, even on the morning of the election, that Romney would win. A 2013 study—a poll—found that three out of four Americans suspect polls of bias. Presumably, there was far greater distrust among the people who refused to take the survey.

The modern public-opinion poll has been around since the Great Depression, when the response rate—the number of people who take a survey as a percentage of those who were asked—was more than ninety. The participation rate—the number of people who take a survey as a percentage of the population—is far lower. Election pollsters sample only a minuscule portion of the electorate, not uncommonly something on the order of a couple of thousand people out of the more than two hundred million Americans who are eligible to vote. The promise of this work is that the sample is exquisitely representative. But the lower the response rate the harder and more expensive it becomes to realize that promise, which requires both calling many more people and trying to correct for “non-response bias” by giving greater weight to the answers of people from demographic groups that are less likely to respond. Pollster.com’s Mark Blumenthal has recalled how, in the nineteen-eighties, when the response rate at the firm where he was working had fallen to about sixty per cent, people in his office said, “What will happen when it’s only twenty? We won’t be able to be in business!” A typical response rate is now in the single digits.

Meanwhile, polls are wielding greater influence over American elections than ever. In May, Fox News announced that, in order to participate in its first prime-time debate, hosted jointly with Facebook, Republican candidates had to “place in the top ten of an average of the five most recent national polls.”
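The non-response weighting Lepore describes, giving greater weight to answers from demographic groups that are less likely to respond, can be illustrated with a minimal sketch. The group names, population shares, response counts, and preference figures below are invented for illustration only; they are not drawn from any real survey.

```python
# Minimal sketch of non-response weighting (post-stratification on one
# demographic variable). All numbers are hypothetical.

# Share of each group in the population the sample should represent.
population_share = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}

# Respondents actually reached, by group (younger people respond less often).
respondents = {"18-34": 120, "35-64": 500, "65+": 380}

# Candidate preference among respondents, by group (hypothetical).
support_for_a = {"18-34": 0.60, "35-64": 0.50, "65+": 0.40}

total = sum(respondents.values())

# Weight = population share / sample share; underrepresented groups get weights > 1.
weights = {g: population_share[g] / (respondents[g] / total) for g in respondents}

# Unweighted estimate: every respondent counts equally.
unweighted = sum(respondents[g] * support_for_a[g] for g in respondents) / total

# Weighted estimate: respondents are rescaled to match the population.
weighted = sum(population_share[g] * support_for_a[g] for g in population_share)

print("weights:", {g: round(w, 2) for g, w in weights.items()})
print(f"unweighted support: {unweighted:.1%}")
print(f"weighted support:   {weighted:.1%}")
```

In this toy example the unweighted figure is 47.4 per cent and the weighted figure is 51.0 per cent, because the under-sampled youngest group is scaled up. Real pollsters weight on many variables at once, and, as the passage above notes, the lower the response rate, the larger and more fragile those corrections become.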

Polling in the United States dates to the 1930s and George Gallup, founder of the eponymous research and management consulting company, Lepore explains.

Ever since Gallup, two things have been called polls: surveys of opinions and forecasts of election results. […] It’s not a bad idea to reserve the term “polls” for the kind meant to produce election forecasts. When Gallup started out, he was skeptical about using a survey to forecast an election: “Such a test is by no means perfect, because a preelection survey must not only measure public opinion in respect to candidates but must also predict just what groups of people will actually take the trouble to cast their ballots.” Also, he didn’t think that predicting elections constituted a public good: “While such forecasts provide an interesting and legitimate activity, they probably serve no great social purpose.” Then why do it? Gallup conducted polls only to prove the accuracy of his surveys, there being no other way to demonstrate it. The polls themselves, he thought, were pointless.

Social scientists were critical of polling too:

In 1947, in an address to the American Sociological Association, Herbert Blumer argued that public opinion does not exist, absent its measurement. Pollsters proceed from the assumption that “public opinion” is an aggregation of individual opinions, each given equal weight—an assumption Blumer demonstrated to be preposterous, since people form opinions “as a function of a society in operation.” We come to hold and express our opinions in conversation, and especially in debate, over time, and different people and groups influence us, and we them, to different degrees.

Blumer’s critique didn’t stop Gallup from predicting the defeat of President Harry S. Truman the following year, however. And Gallup was wrong, as the famous photograph of Truman hoisting a copy of the Chicago Tribune bearing the headline “DEWEY DEFEATS TRUMAN” reminds us:

Gallup liked to say that pollsters take the “pulse of democracy.” “Although you can take a nation’s pulse,” E. B. White wrote after the election, “you can’t be sure that the nation hasn’t just run up a flight of stairs.”

“Polls don’t take the pulse of democracy,” Lepore adds. “[T]hey raise it.”

Even if public opinion were a stable thing that could be measured, would that be good for democracy? Critics argue, Lepore writes, “that legislators’ use of polls to inform their votes would be inconsistent with their constitutional duty.”

The United States has a representative government for many reasons, among them that it protects the rights of minorities against the tyranny of a majority. “The pollsters have dismissed as irrelevant the kind of political society in which we live and which we, as citizens, should endeavor to strengthen,” [political scientist Lindsay Rogers, author of the book “The Pollsters: Public Opinion, Politics, and Democratic Leadership” and law professor at Columbia University] wrote. Polls, Rogers believed, are a majoritarian monstrosity.

The alarms raised by Blumer and Rogers went unheeded. Instead, many social scientists came to believe that, if the pollsters failed, social science would fail with them (not least by losing foundation and federal research money). Eight days after Truman beat Dewey, the Social Science Research Council appointed an investigative committee, explaining that “extended controversy regarding the pre-election polls among lay and professional groups might have extensive and unjustified repercussions upon all types of opinion and attitude studies and perhaps upon social science research generally.”

Of course, growing evidence of the problems in polling has not dissuaded journalists from giving it undue weight in their reporting and analysis. Some surveys and polls may be methodologically respectable, but journalists themselves often have no way of verifying this because, as Lepore points out, “there isn’t much of a check on political scientists who don’t reveal their methods because they’ve sold their algorithms to startups for millions of dollars.” And this kind of opacity is everywhere.

Read much more in Lepore’s full essay here.

— Posted by Alexander Reed Kelly.
