Survey Trolls, Opt-In Polls, and the New Era of Survey Science

Yves here. One of my summer jobs in college was conducting survey research, so I have an old-school bias as to the value of this sort of information-gathering and the importance of doing it well. The article below describes how this process can become corrupted, either by accident or by design.

By Teresa Carr, a Colorado-based investigative journalist and the author of Undark’s Matters of Fact column. Originally published at Undark

Last December, a joint survey by The Economist and the polling organization YouGov claimed to reveal a striking antisemitic streak among America’s youth. One in five young Americans thinks the Holocaust is a myth, according to the poll. And 28 percent think Jews in America have too much power.

“Our new poll makes alarming reading,” declared The Economist. The results inflamed discourse over the Israel-Hamas war on social media and made international news.

There was one problem: The survey was almost certainly wrong. The Economist/YouGov poll was a so-called opt-in poll, in which pollsters often pay people they’ve recruited online to take surveys. According to a recent analysis from the nonprofit Pew Research Center, such polls are plagued by “bogus respondents” who answer questions disingenuously for fun, or to get through the survey as quickly as possible to earn their reward.

In the case of the antisemitism poll, Pew’s analysis suggested that the Economist/YouGov team’s methods had yielded wildly inflated numbers. In a more rigorous poll posing some of the same questions, Pew found that only 3 percent of young Americans agreed with the statement “the Holocaust is a myth.”

These are strange times for survey science. Traditional polling, which relies on responses from a randomly selected group that represents the entire population, remains the gold standard for gauging public opinion, said Stanford political scientist Jon Krosnick. But as it’s become harder to reach people on the phone, response rates have plummeted, and those surveys have grown exponentially more expensive to run. Meanwhile, cheaper, less-accurate online polls have proliferated.

“Unfortunately, the world is seeing much more of the nonscientific methods that are put forth as if they’re scientific,” said Krosnick.

Meanwhile, some pollsters defend those opt-in methods — and say traditional polling has its own serious issues. Random sampling is a great scientific method, agreed Krosnick’s Stanford colleague Douglas Rivers, chief scientist at YouGov. But these days, he said, it suffers from the reality that almost everyone contacted refuses to participate. Pollsters systematically underestimated support for Donald Trump in 2016 and 2020, he pointed out, because they failed to hear from enough of those voters. While lax quality controls for younger respondents, since tightened, led to misleading results on the antisemitism poll, YouGov’s overall track record is good, said Rivers: “We’re competitive with anybody who’s doing election polls.”

Nonetheless, headlines as outrageous as they are implausible continue to proliferate: 7 percent of American adults think chocolate milk comes from brown cows; 10 percent of college graduates think Judge Judy is on the Supreme Court; and 4 percent of American adults (about 10 million people) drank or gargled bleach to prevent Covid-19. And although YouGov is one of the more respected opt-in pollsters, some of its findings — one third of young millennials aren’t sure the Earth is round, for example — strain credulity.

Amidst a sea of surveys, it’s hard to distinguish solid findings from those that dissolve under scrutiny. And that confusion, some experts say, reflects deep-seated problems with new methods in the field — developed in response to a modern era in which a representative sample of the public no longer picks up the phone.

The fractious evolution in polling science is likely to receive fresh attention as the 2024 elections heat up, not least because the consequences of failed or misleading surveys can go well beyond social science. Such “survey clickbait” erodes society’s self-esteem, said Duke University political scientist Sunshine Hillygus: It “undermines people’s trust that the American public is capable of self-governance.”


Veteran pollster Gary Langer compares traditional randomized polling methods, known as probability polling, to dipping a ladle into a well-stirred pot of minestrone soup. “We can look in and see some cannellini beans, little escarole, chunks of tomato,” he said. “We get a good representation of what’s in the soup.”

It doesn’t matter if the pot is the size of Yankee Stadium, he said. If the contents are thoroughly mixed, one ladle is enough to determine what’s in it. That’s why probability surveys of 1,000 people can, in theory, represent what the entire country thinks.
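The statistical reason one ladle is enough is that random-sampling error depends on the size of the sample, not the size of the pot. As a back-of-envelope sketch (my illustration, using the textbook 95 percent margin-of-error formula, not a calculation from the article):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample.

    n: sample size; p: assumed proportion (0.5 is the worst case);
    z: z-score for the confidence level (1.96 for roughly 95%).
    Note that the population size never enters the formula.
    """
    return z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(1000) * 100, 1))  # ~3.1 percentage points
```

At 1,000 respondents the worst-case margin is roughly plus or minus 3 percentage points, whether the “pot” is a single city or the entire country.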

The problem is that getting a truly representative sample is virtually impossible, said YouGov’s Douglas Rivers, who pointed out that these days a good response rate to a randomized poll is 2 percent.

Pew expends a great deal of effort to maintain a randomized panel of about 10,000 people willing to take surveys. For the most recent annual recruitment, the organization mailed letters to a random selection of 13,500 residential addresses obtained from the U.S. Postal Service, receiving around 4,000 responses, according to Pew researcher Courtney Kennedy. They invite only one-quarter of respondents to join the panel. Otherwise, Kennedy explained, the panel would be overrun with the types of people most amenable to taking surveys. Eventually, they wound up with 933 new recruits.
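Rough arithmetic on that recruitment funnel (my illustration, using the rounded figures Kennedy cites) shows how much effort goes into each new panelist:

```python
mailed = 13_500          # random residential addresses contacted
responded = 4_000        # approximate number of responses
invited = responded / 4  # only one-quarter of respondents invited
recruited = 933          # final count of new panelists

print(f"response rate: {responded / mailed:.0%}")     # ~30%
print(f"invited to panel: {invited:.0f}")             # ~1,000
print(f"recruits per letter: {recruited / mailed:.1%}")  # ~6.9%
```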

Some groups — in particular young people, people of color, and those who didn’t go to college — are generally more reluctant to take surveys, said Kennedy: That’s where the sample loses its perfect representativeness. Like every other pollster, she said, Pew adjusts its data, giving more weight to the responses of those underrepresented in the sample, so that the results represent the country in terms of demographics such as age, gender, race, education level, and political affiliation.
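As a minimal sketch of what that weighting looks like (a generic post-stratification example on a single made-up variable, not Pew’s actual procedure), each respondent’s weight is their group’s share of the population divided by its share of the sample:

```python
# Hypothetical example: weighting by age group only.
population_share = {"18-29": 0.21, "30-49": 0.33, "50-64": 0.25, "65+": 0.21}
sample_share     = {"18-29": 0.12, "30-49": 0.30, "50-64": 0.29, "65+": 0.29}

# Underrepresented groups (here, 18-29) get weights above 1;
# overrepresented groups get weights below 1.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

for group, w in weights.items():
    print(f"{group}: weight {w:.2f}")
```

Real pollsters typically weight on several variables at once (a procedure often called raking), balancing age, gender, race, education, and other characteristics simultaneously.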

But those weighting methods are imperfect. And the people in a poll are still unrepresentative in at least one way: They are the Americans who are willing to answer a pollster’s message. Those difficulties have prompted a quiet revolution in survey research over the past two decades.

In 2000, nearly all pollsters simply called people on the phone, according to a 2023 Pew study of polling methods. But use of calls alone plummeted starting in 2012, while online opt-in surveys like the Economist/YouGov survey, one of the main forms of what are known as nonprobability polls, soared.

Nonprobability surveys don’t stir the pot so that each ingredient has an equal chance of being selected. Instead, they scoop up what’s referred to as a convenience sample of respondents, typically recruited online. Opt-in pollsters differ in how they recruit and select participants, and they are not always transparent about their methods. Once they have assembled a group of participants, pollsters can weight the sample so that it matches the broader U.S. population. But it’s much harder to accurately weight nonprobability polls, since there is less information on how people who opt into polls compare to the public at large.

“Probability sampling tends to yield more representative samples than nonprobability approaches,” Kennedy wrote in an email.

However, nonprobability surveys are typically much cheaper than probability polls. As Americans have ditched their landlines and stopped answering their cellphones for unknown callers, contacting people takes far more time and effort than it used to. As a result, according to Duke University political scientist Sunshine Hillygus, while it can cost as little as $1 per response to run a short online opt-in poll, it can cost 50 to 500 times that for a high-quality random-sample survey.

To create a pool of people to take opt-in surveys, polling companies recruit through ads that pop up on social media, internet search engines, and even during video games, offering cash or rewards to complete surveys, said Kennedy. YouGov, for example, pays people in points — 500 to 1,000, for example, to take a short survey. At 25,000 points, you can cash in for a $15 gift card; 55,000 points earns $50 in cash.
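For a sense of scale, a rough conversion of those figures (my arithmetic, assuming the gift-card rate applies to a typical short survey):

```python
points_per_survey = (500, 1000)    # stated payout range for a short survey
dollars_per_point = 15 / 25_000    # $15 gift card redeemed at 25,000 points

low, high = (p * dollars_per_point for p in points_per_survey)
print(f"${low:.2f} to ${high:.2f} per short survey")  # roughly $0.30 to $0.60
```

That works out to well under a dollar per short survey, which helps explain the complaints about pay described below.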

Pew and other pollsters who do randomized polling also pay people a small amount to take the occasional survey. But with opt-in polling, survey taking can become a full- or part-time job for many people. The job search website Indeed, for example, lists companies that pay for surveys in its career guide. And in the Reddit community Beer Money, which has 1.3 million members, people frequently discuss the pros (time flexibility) and cons (skimpy pay; frequently getting screened out) of taking surveys for money.

Some of those surveys are for academic research. (Many psychology papers, for example, rely on paid respondents recruited through platforms like Amazon Mechanical Turk.) Others help companies with market research — or feed the insatiable media market for polls.