Polling industry going through changes

Published on Aug. 22, 2015 in the Waterloo Region Record.

Election campaigns are notoriously unpredictable but one thing is certain: Canadians will be bombarded with public opinion polls until the federal vote on Oct. 19.

But how accurate and representative are the data?

Read more.

The polls are bad – their accuracy, that is

Published on Aug. 13, 2015 in University Affairs.

Barry Kay, a member of the Laurier Institute for the Study of Public Opinion and Policy, or LISPOP, has been doing seat projections for upcoming elections for the past 35 years. But, he warns, “People should understand I do not have a crystal ball. The fact is the model is only as good as the polls it is based on. If the polls are off, it will be off.” And, the bad news is that the polls are getting worse, he says.

Seat projections, as opposed to party popularity, were a novelty when Dr. Kay first started out but have attracted greater interest over the past decade or so. An associate professor of political science at Wilfrid Laurier University, where LISPOP resides, Dr. Kay says his model has been accurate to within four seats per party over the past 15 federal elections.

Read more. 
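
As an aside, for readers curious about what a poll-to-seats model involves mechanically, here is a minimal sketch of uniform swing, the simplest such technique. To be clear, this is not Dr. Kay's actual LISPOP model, which is considerably more refined; the ridings, parties, and vote shares below are all invented for illustration.

```python
# Minimal uniform-swing seat projection -- an illustrative sketch only,
# NOT Dr. Kay's LISPOP model. All ridings and vote shares are invented.

# Each riding's vote share by party at the previous election (invented).
previous_results = {
    "Riding A": {"LPC": 0.42, "CPC": 0.38, "NDP": 0.20},
    "Riding B": {"LPC": 0.30, "CPC": 0.45, "NDP": 0.25},
    "Riding C": {"LPC": 0.33, "CPC": 0.27, "NDP": 0.40},
}

# National vote shares: previous election vs. current poll average (invented).
previous_national = {"LPC": 0.36, "CPC": 0.37, "NDP": 0.27}
poll_average = {"LPC": 0.31, "CPC": 0.34, "NDP": 0.35}

# Uniform swing: apply each party's national change to every riding.
swing = {p: poll_average[p] - previous_national[p] for p in poll_average}

seats = {p: 0 for p in poll_average}
for riding, shares in previous_results.items():
    projected = {p: shares[p] + swing[p] for p in shares}
    winner = max(projected, key=projected.get)  # first past the post
    seats[winner] += 1

print(seats)
```

Even this toy version makes Dr. Kay's caveat visible: the projection inherits whatever error sits in poll_average, so if the polls are off, the seat counts will be too.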

The End of Representative Samples? The Future of Survey Research

According to Steven Shepard’s article in the National Journal:

“The days of accurate telephone polling are numbered. With more and more Americans dropping their landline service, reliable phone surveys are becoming prohibitively expensive for news organizations and nonprofit groups with tight budgets. Many news outlets are choosing to forgo the rigorous survey research they have commissioned for decades…. With consumer behavior upending traditional polling methods, Zukin of Rutgers predicts that pollsters will stop conducting dual-frame phone surveys (contacting both landline and cell-phone users) “within the next five years.” “I think we’re going to survey people with whatever mode they wish. That means Internet- and smartphone-based surveys,” Zukin says. Indeed, a significant number of the sessions at the pollsters’ conference focused on this kind of research, which uses a methodology known as non-probability, or nonrandom sampling. In many cases, these surveys are completed by respondents who “opt-in,” clicking on a link to complete a poll or joining a Web-based panel (or downloading a smartphone application) to complete surveys—usually with monetary incentives or rewards given for doing so.”

The entire article is worth reading and captures many of the concerns I’ve heard informally from my colleagues who do this type of work.  To me, it sounds like the death knell of survey research, at least in terms of being able to gather data that are representative of the entire population, unless researchers have access to significant financial resources.

Continue reading

Few academics, however, have access to such resources and so have turned towards web-based panels.  Indeed, my colleague Jason Roy and I used this technique to collect data for our Election Timing paper.

“Some critics in the polling community are highly skeptical of this type of research. Yes, Internet pollsters can create and weight their panels to reflect the public at large, using demographic information to make their samples more representative, they say, but that kind of weighting can serve in some cases to further distort unreliable data.”

I think these criticisms are right.  The only solution is to use web-based panels to target specific publics, rather than representative samples, and to acknowledge upfront the strengths and weaknesses of these techniques.
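
To make the quoted criticism concrete, here is a toy sketch of the simplest form of demographic weighting: each respondent is weighted by the ratio of their group's population share to its share of the sample. The groups and numbers are invented; notice how much leverage a poorly represented group gets.

```python
import pandas as pd

# Toy demographic weighting -- all groups and numbers are invented.
sample = pd.DataFrame({
    "age_group": ["18-34", "18-34", "35-54", "55+", "55+", "55+"],
    "supports": [1, 1, 0, 0, 1, 0],  # fake yes/no survey responses
})

# Census-style population shares for the same groups (invented).
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Weight = population share / sample share for the respondent's group.
sample_share = sample["age_group"].value_counts(normalize=True)
sample["weight"] = [population_share[g] / sample_share[g]
                    for g in sample["age_group"]]

unweighted = sample["supports"].mean()
weighted = (sample["supports"] * sample["weight"]).sum() / sample["weight"].sum()
print(f"unweighted: {unweighted:.2f}  weighted: {weighted:.2f}")
# The lone 35-54 respondent carries a weight of 2.1: whatever noise that
# single answer contains is amplified, which is exactly the distortion
# the critics quoted above are warning about.
```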

Weighting is not a solution. Indeed, I think weighting any dataset, however it is collected, is problematic. Rarely do you see academic journal articles report findings both with and without weights. Yet I've heard that in some published articles, findings that are significant with weights hold only at less robust levels, or not at all, once the weights are removed.
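
One mechanism behind such divergence is easy to demonstrate: highly variable weights shrink the effective sample size, which widens standard errors, so weighted and unweighted estimates can fall on different sides of a significance threshold (in either direction, since weights also shift the point estimate). A sketch with simulated data, using Kish's well-known approximation for effective sample size:

```python
import numpy as np

# Why significance can change with weights -- simulated data only.
rng = np.random.default_rng(42)
n = 1000
y = rng.binomial(1, 0.53, size=n).astype(float)  # fake yes/no responses
w = rng.lognormal(sigma=0.8, size=n)             # fake, highly variable weights

# Kish's effective sample size: n_eff = (sum w)^2 / sum(w^2).
n_eff = w.sum() ** 2 / (w ** 2).sum()

p_unw = y.mean()
p_wtd = np.average(y, weights=w)
se_unw = np.sqrt(p_unw * (1 - p_unw) / n)
se_wtd = np.sqrt(p_wtd * (1 - p_wtd) / n_eff)

print(f"n = {n}, effective n = {n_eff:.0f}")
print(f"unweighted: {p_unw:.3f} +/- {1.96 * se_unw:.3f}")
print(f"weighted:   {p_wtd:.3f} +/- {1.96 * se_wtd:.3f}")
# A proportion estimated just above 0.5 may be "significantly" above 50%
# with the full n but not with the smaller effective n, or vice versa --
# which is why findings should be reported both with and without weights.
```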

UPDATE: Here’s an article by Andrew Gelman on survey weighting.