For pollsters, it’s back to the drawing board after yet another miss in the 2020 election

It’s been a rough few weeks for the polling industry—one that has seen the very profession of polling publicly scrutinized and dissected in the wake of yet another disappointing presidential election. After failing to predict Donald Trump’s shocking rise to the presidency in 2016, most polls got it wrong yet again this year. Though the polls correctly called Joe Biden’s national victory over Trump, it’s clear, once all the votes are counted, that the 2020 election did not yield the Democratic landslide many had forecast.

Polling website RealClearPolitics’ national average, which aggregates the results of polls from across the country, forecast a 7.2-point lead for Biden going into the election; in reality, Biden’s popular-vote margin will likely come in at about half that spread. In Wisconsin, where some polls predicted a double-digit winning margin for Biden, the Democrat ended up winning by only 0.7%, or roughly 20,000 votes. In Florida, where most polls had Biden ahead by a few points, Trump won convincingly by more than 3%. Meanwhile, polls tracking key Senate races—like those indicating that reelected Republican incumbents Susan Collins, of Maine, and Thom Tillis, of North Carolina, were in real danger of losing their seats—also proved wide of the mark.

It’s all led to a great deal of hand-wringing and consternation among those in and around the polling industry. Noted Republican pollster Frank Luntz went as far as to condemn the entire profession of political polling on the morning after Election Day. But while most aren’t nearly as fatalistic as Luntz, there is an acknowledgment that something is amiss in how pollsters are capturing the opinions of a deeply divided electorate—and that they must now go back to the drawing board after missing the mark for a second consecutive election.

“Polls aren’t always wrong, but they’ve been wrong enough that we ought to treat them with skepticism and realize they are imperfect instruments prone to failure and flaws,” according to W. Joseph Campbell, a professor at American University in Washington, D.C., and author of the recent book Lost in a Gallup: Polling Failure in U.S. Presidential Elections.

As Campbell notes, there’s nothing new about political polls being proven wrong; one can look back as far as 1948, and President Harry S. Truman’s famous upset victory over Thomas E. Dewey in that year’s election, to find an example of pollsters failing to make the right call.

Still, he describes an industry that is “going through some tough times in terms of what the best, most reliable approach to conducting surveys is”—one still searching for its “next gold standard” now that computer-generated telephone calls to landlines across the country are a relic of the past. Between federal laws restricting how pollsters can contact people via their mobile devices and declining telephone response rates in recent years, the industry has been forced to explore alternative means of contact like text messages and online polls.

But beyond the medium, there has been heightened focus on the methods by which pollsters account for the populace whose opinions they are meant to be gauging. Four years after Trump’s widespread support among white, non-college-educated voters helped carry him to the White House, many pollsters recalibrated their methods to give greater weight to such voters. But underweighting such voters only “contributed a point or two to the [polling] miss in 2016,” according to Patrick Murray, director of the Monmouth University Polling Institute, and doesn’t fully account for the polls’ underwhelming performance of late.
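
For readers unfamiliar with how such demographic weighting works, the idea is straightforward: if a group makes up a smaller share of a poll’s respondents than of the actual electorate, each of those respondents is counted more heavily. The sketch below illustrates the principle in Python; every number in it is hypothetical, chosen only for illustration and not drawn from any actual poll.

```python
# Minimal sketch of post-stratification weighting, the technique described
# above: respondents from an underrepresented group (here, non-college-
# educated voters) are counted more heavily so the sample matches the
# electorate. All figures below are invented for illustration.

# Hypothetical raw sample: share of respondents by education level.
sample_share = {"college": 0.55, "non_college": 0.45}

# Hypothetical target: each group's actual share of the electorate.
population_share = {"college": 0.40, "non_college": 0.60}

# Each respondent's weight is the ratio of population share to sample share.
weights = {g: population_share[g] / sample_share[g] for g in sample_share}

# Hypothetical candidate support within each group.
support = {"college": 0.58, "non_college": 0.44}  # share backing Candidate A

unweighted = sum(sample_share[g] * support[g] for g in sample_share)
weighted = sum(sample_share[g] * weights[g] * support[g] for g in sample_share)

print(f"Unweighted support: {unweighted:.1%}")  # 51.7%, skewed to college grads
print(f"Weighted support:   {weighted:.1%}")    # 49.6%, matches electorate mix
```

Even in this toy example, correcting the education mix moves the topline by about two points—roughly the size of the effect Murray attributes to underweighting in 2016.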

Some pollsters looked to tweak their formulas in other ways. Lee Miringoff, director of the Marist College Institute for Public Opinion, noted that Marist doesn’t even weight by education in its polls, which nonetheless succeeded in calling a very tight race between Trump and Hillary Clinton in 2016. Instead, the college sought to sample more voters not just in rural counties, but in the more rural areas within those counties that are more likely to support conservative candidates.

“We feel that the difference between the polls that have been working and those that haven’t is not specific to education; we always felt that was a symptom rather than a cure,” Miringoff says. “Our feeling was that even in a rural county, there are ‘metro’ areas, and we felt that you needed to get the right balance between the ‘metro rural’ areas and the ‘rural rural’ areas that are more [supportive of] Trump. We always thought it was more about geography, and an issue of sampling rather than weighting.”

That approach yielded mixed results for Marist this election season. It rightly called an exceptionally close race in Arizona, and forecast surprisingly strong support for Trump among Latinos in both that state and Florida. But Marist also had Biden up by several points in Florida and North Carolina—both states won by Trump—as well as Pennsylvania, which Biden won by only 1%. 

Miringoff notes that in the case of states like Pennsylvania, Marist’s findings were “technically in the margin of error.” Still, given the extent to which political polls have lately failed to provide an accurate reading of the situation on the ground, he acknowledges: “We have to be better as an industry. The science is messy, but [polling] is still our best guess as to what is going on.”
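
Miringoff’s “margin of error” point is standard sampling math: for a poll of n respondents, the 95% margin of error on each candidate’s share is roughly 1.96 times the standard error of a proportion. The short sketch below works through the arithmetic with hypothetical numbers—a sample of 1,000 and a 51–47 topline—to show why a modest lead can still sit inside the error band.

```python
# Quick sketch of the standard margin-of-error calculation. The sample size
# and poll numbers below are hypothetical, not from any specific poll.
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for an estimated proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

n = 1000                   # hypothetical poll sample size
biden, trump = 0.51, 0.47  # hypothetical topline shares

moe = margin_of_error(biden, n)
print(f"Margin of error: +/- {moe:.1%}")  # about +/- 3.1 points per candidate

# A 4-point lead with a ~3-point margin of error on each candidate's number
# means the true race could plausibly be a near tie—which is how a poll
# showing a small lead can be "technically in the margin of error."
```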

Monmouth didn’t have much better luck, having forecast a robust advantage for Biden in Pennsylvania, Florida, and Arizona, as well as narrower advantages for him in Georgia and North Carolina. Murray says the university is already “developing hypotheses about what went wrong” in its 2020 projections—listing possibilities including a “last-minute increase in people refusing to participate in polls” in the final weeks before the election, as well as the mooted “shy Trump voter” effect that sees Trump’s backers either disguise their support of the president or refuse to speak to pollsters outright.

But Murray also raised a far more complex theory for why this president’s support has been so hard for pollsters to nail down over the course of two election cycles—one having to do with the hyper-polarization of the American electorate during the Trump era, a dynamic that “may have reshaped the American political psyche in a way where it’s more difficult to measure things accurately,” he says.

“We’ve been seeing, in our polling over the last few years, a deeper division in the American public, to the extent where they view everything through the lens of politics,” according to Murray. He notes that even questions about a respondent’s finances or whether they intend to take a vacation are being viewed through “partisan prisms,” with respondents more inclined to answer in a manner that reflects positively on their preferred candidate and political views.

This “social desirability bias,” Murray says, is a phenomenon unique to the Trump era. “When we asked in the past whether people approved or disapproved of the President, it was easy to give an answer: It depended on their performance,” he notes. “Now, it’s become a reflection of who you are as a person and how other people in society see you… These are questions we’ve asked for years that have never had a significant partisan correlation for us.”

For Murray, it remains to be seen whether this dynamic proves a distinct trait of the Trump era, or indicative of a more permanent shift in the political landscape that could make it harder for pollsters to accurately gauge opinions. “We might have to accept that we’re in a different environment politically,” he says. “Under Trump, it’s accelerated to the point where your political opinions have become an intrinsic part of your identity. If that’s seen as a negative, it might be something you may not want to talk about.”

For now, it’s back to the lab for pollsters, who yet again must take stock of an election’s shortcomings and devise a better way to take the pulse of the American electorate. It is a process that will require patience and due diligence; as industry observers note, it took six months for the American Association of Public Opinion Research (AAPOR) to release its postmortem on the 2016 election, and it will likely take as long to comprehensively evaluate this year’s polling, even with the likes of Pew Research Center already floating theories.

That work will be of the utmost importance for the polling industry and profession. Because while polls aren’t perfect, they remain the preferred method for researching and measuring public sentiment—not only for political candidates, but also for all sorts of public issues and policies that may hang in the balance.

“There’s long been this recognition that [elections are] an acid test of polling procedure,” according to Campbell. “Election polling is a mere sliver of a multibillion-dollar international industry known as public opinion research… If the polling of presidential elections is off, then are we also off on other public policy issues and market research?”
