
We should stop asking too much of polling data

More data does not necessarily mean better data

A couple arrives to vote at the Anthem Center in Henderson, Nev., during early voting in Nevada on Oct. 24. (Bill Clark/CQ Roll Call file photo)

As each day brings new poll results in advance of Election Day, we face the prospect, as in past years, of post-election handwringing and mea culpas regarding what went wrong.

The potential sources of error are well understood, sometimes mentioned in reporting of polling results and occasionally discussed at length in such recent opinion pieces as “Frustrated With Polling? Pollsters Are, Too” by Quoctrung Bui and “4 reasons to be skeptical about election polling” by Jennifer Rubin.

However, we see a disturbing trend toward use of polling averages computed by FiveThirtyEight and other aggregators as magic elixirs that cure what ails us.

In fact, polling averages need not be more accurate than the individual polls they aggregate. Indeed, they may be less accurate than particular high-quality polls.

We are concerned that the mystique of poll averaging is yet another example of big data mythologizing, which mistakenly asserts that obtaining more data overcomes problems that can really only be addressed by obtaining better data.

Polling data are collected to make inferences about the preferences and voting intentions of the electorate, which are then translated into predictions of election outcomes.

Our research, including “More Data or Better Data: A Statistical Decision Problem,” provides a clear framework for weighing the costs and benefits of allocating resources to acquiring more data as opposed to better data, for the purpose of inference about a population of interest.

If the only inferential problem arises from the statistical imprecision associated with the small number of respondents to a single poll, then bringing in more of the same kind of data by averaging results across polls is an obvious solution.
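The arithmetic behind this point can be sketched briefly. In a simple random sample, the margin of error shrinks with the square root of the sample size, so pooling several polls of the same kind does tighten the estimate. The numbers below are illustrative assumptions, not figures from any actual poll.

```python
import math

def margin_of_error(p, n):
    """95% margin of error for a proportion p from a simple random
    sample of size n (illustrative textbook formula)."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# One poll of 800 respondents vs. a pool of 10 such polls (8,000 respondents):
moe_single = margin_of_error(0.5, 800)    # roughly +/- 3.5 percentage points
moe_pooled = margin_of_error(0.5, 8000)   # roughly +/- 1.1 percentage points
```

This is the sense in which averaging helps: the pooled margin of error falls by a factor of roughly the square root of the number of polls. It addresses imprecision only, which is precisely why it cannot help with the identification problems discussed next.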

However, collecting more data is not the solution if identification problems are a concern.

Identification problems arise from data quality issues that do not diminish with sample size. Data quality may be impaired by selective survey nonresponse, inaccurate measurement, or selection of convenience samples as often occurs in internet panels.

To cope with these types of problems, we have argued that resources will often be better spent collecting higher quality data rather than more of the same kind of data.

In the case of polling data, problems of selective nonresponse have become so severe and intractable that it may not be realistic to expect data quality to improve as much as would be desirable.

It is tempting to hope that averaging polls might reduce the bias of polling forecasts, but this is a pipe dream. Our analysis of meta-analysis as practiced in medical research, which is analogous to poll averaging, makes clear that averaging does not resolve the identification problem stemming from selective nonresponse.
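A small simulation makes this concrete. The setup below is a hypothetical illustration, not the authors' model: every poll suffers the same selective-nonresponse bias, so averaging many of them drives down sampling noise while converging to the wrong number.

```python
import random

random.seed(1)

TRUE_SHARE = 0.52          # assumed true support for a candidate
NONRESPONSE_BIAS = -0.03   # assumed shift caused by selective nonresponse
N_RESPONDENTS = 800

def run_poll():
    """Simulate one poll: sampling noise around a biased target share."""
    target = TRUE_SHARE + NONRESPONSE_BIAS
    hits = sum(random.random() < target for _ in range(N_RESPONDENTS))
    return hits / N_RESPONDENTS

# Averaging 200 such polls yields an estimate tightly clustered
# near 0.49, not the true 0.52: more of the same kind of data
# shrinks the noise but leaves the bias untouched.
polls = [run_poll() for _ in range(200)]
average = sum(polls) / len(polls)
```

The average is far more precise than any single poll, yet no closer to the truth, which is the distinction between imprecision and identification.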

Instead, one should recognize that each poll yields at most a bound on the prospective vote outcome. Aggregation of findings should be performed by taking the intersection of these bounds across polls.
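A minimal sketch of this bounding idea, with illustrative response rates and shares chosen for readability (real polls have far lower response rates and correspondingly wider bounds): under worst-case assumptions about nonrespondents, a poll with response rate r and observed support p among respondents bounds the population share between r*p (all nonrespondents oppose) and r*p + (1 - r) (all nonrespondents support). Aggregation then means intersecting these intervals rather than averaging point estimates.

```python
def poll_bounds(p_respondents, response_rate):
    """Worst-case bounds on population support given selective nonresponse."""
    lower = response_rate * p_respondents
    upper = response_rate * p_respondents + (1 - response_rate)
    return lower, upper

def intersect(bounds):
    """Aggregate polls by intersecting their bounds: the population share
    must lie inside every poll's interval."""
    lower = max(lo for lo, _ in bounds)
    upper = min(hi for _, hi in bounds)
    return lower, upper

# Two hypothetical polls with unusually high response rates:
bounds = [poll_bounds(0.52, 0.7), poll_bounds(0.55, 0.6)]
lo, hi = intersect(bounds)   # a bound, not a point prediction
```

The result is an interval, not a single number, which honestly conveys what the data can and cannot identify.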

Having become accustomed to the reporting of precise predictions (plus or minus a couple percentage points to account for statistical imprecision), many users of polls may not be satisfied with this clear acknowledgement of the limits to our knowledge based on available data.

However, we are reminded of the lessons learned by Princeton’s Sam Wang, co-founder of the Princeton Election Consortium, whose own electoral vote estimator was based on a meta-analysis of polling data he performed nearly 20 years ago.

Professor Wang famously ate a cricket on CNN after tweeting a promise to eat a bug if Donald Trump won more than 240 electoral votes in the 2016 election.

In an interview four years later, he argued: “It’s best not to force too much meaning out of a poll. If a race looks like it’s within three or four points in either direction, we should simply say it’s a close race and not force the data to say something they can’t.”

While we believe that “three or four points” is likely too optimistic, we can certainly endorse this message.

Jeff Dominitz is an economist and ECONorthwest affiliate. Charles F. Manski is board of trustees professor in economics at the Department of Economics and Institute for Policy Research, Northwestern University.
