The perils of polling

Our last two election campaigns saw a lot of press coverage of polls, though fortunately it hasn’t become as frenzied as in the United States. The Republican primary process has more than 30 straw polls, not to mention dozens of web surveys and phone polls organized by everybody from newspapers to gun lobbyists.

This is a big change from the old days, when there were basically two kinds of polls. The first was informal, like the old No Pop Sandwich Shop sandwich poll in Whitehorse. There was a sandwich named after each party and hacks would try to lunch their political movement to victory. It was fun, and no one took it too seriously.

Then there were authoritative polls by some well-known polling brand, like Gallup. These tended to be rare and costly. It took a lot of money to call 5,000 people around the country and have a human ask them a list of questions. But they had credibility.

Now, literally anyone can have their own poll. If you want to poll Yukoners on anything from the Peel Watershed to whether the sub-alpine fir should really be the Yukon’s official tree, all it takes is $500 to hire a robo-poll operator to sic her auto-dialler on the 867 area code.

The result is familiar from most other things the internet has touched. On the one hand, polling is much more democratic and accessible. On the other, quality is wildly variable.

It has also become harder to reach a statistically valid sample of the population. In the heyday of Gallup, nearly every household had a phone and most people considered it a duty to answer the phone when it rang. Today many people, especially the young, don’t have landlines. And landline fogies often screen their calls. This is something you can’t fix with web surveys, since not everyone has web access and there is no email “phone book.”

This means both reporters and voters have to kick the tires of a poll before buying its message. There are three key things to look at: sample size (which relates to the margin of error), methodology (which can create bias) and question wording (which can slant the results).

Sample size is critical in a small place like the Yukon. Consider the Datapath poll that came out the last newspaper day before the recent territorial election. It had the Yukon Party, NDP and Liberals at 35 per cent, 35 per cent and 26 per cent respectively. The poll sampled 357 Yukoners, and only around 250 of those were decided voters. Depending on how you calculate it, this will typically give a margin of error of around six per cent, 19 times out of 20, given our population.
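
For the curious, that six per cent figure can be checked with a few lines of Python. This is a minimal sketch assuming the standard textbook formula for a simple random sample; the 250 decided voters and the 95 per cent confidence level ("19 times out of 20") come from the poll itself.

    import math

    n = 250   # decided voters in the sample
    p = 0.5   # worst-case split, which gives the widest margin
    z = 1.96  # z-score for 95 per cent confidence

    # Margin of error for a simple random sample
    moe = z * math.sqrt(p * (1 - p) / n)
    print(round(moe * 100, 1))  # roughly 6.2 per cent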

The margin of error is so big it’s hard to put a headline on the poll.

At first glance, the NDP and Yukon Party looked “tied.” But a plus or minus six per cent range means it could actually have been 41 per cent Yukon Party and 29 per cent NDP.

Or that the Liberals could have been 32 per cent and the Yukon Party just 29 per cent.

The poll data can accommodate totally different realities.

Statistically speaking, the most appropriate headline is “Poll Inconclusive.”

This isn’t a criticism of Datapath.

The math just works out so that to get a margin of error of less than one per cent, common in national polls, you’d have to poll around 7,500 Yukoners. It might not even be possible to get that many to answer the phone and not hang up on a pollster.
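
Here is a sketch of that calculation. It uses the standard finite population correction for a small territory; the Yukon population figure of roughly 36,000 is my assumption, and the exact answer shifts a little with the number you plug in.

    z = 1.96    # 95 per cent confidence
    moe = 0.01  # target margin of error: one per cent
    p = 0.5     # worst-case split
    N = 36000   # assumed Yukon population (approximate)

    # Required sample for an effectively infinite population...
    n0 = (z ** 2) * p * (1 - p) / moe ** 2  # about 9,600

    # ...shrunk by the finite population correction
    n = n0 / (1 + (n0 - 1) / N)
    print(round(n))  # roughly 7,600, in line with the figure above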

On election day, the Yukon Party came in with 40 per cent, while the NDP and Liberals trailed with 33 per cent and 25 per cent.

Then there is “bias,” which is where a poll is systematically wrong. Say you phone people at home during the day and tend to reach a higher proportion of mothers, retirees and unemployed people than exists in the general population.

Or you use landlines and skip young people with only cellphones. Or you survey some groups who answer the phone, but don’t tend to vote. Or you use a web survey, which only gets to people with computers (and may have other biases depending on where you got your list of emails from).

The Datapath polls are adjusted to account for the differences between their sample and the overall population on demographics like community, age and gender.
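
To illustrate how that kind of adjustment works in general, here is a generic post-stratification sketch. It is not Datapath’s actual method, and the community split below is invented for the example.

    # Weight respondents so the sample's mix matches the population's mix.
    # The shares below are invented for illustration.
    population_share = {"whitehorse": 0.75, "communities": 0.25}
    sample_share = {"whitehorse": 0.85, "communities": 0.15}

    weights = {group: population_share[group] / sample_share[group]
               for group in population_share}

    # Each Whitehorse respondent now counts for about 0.88 of a response,
    # each communities respondent for about 1.67, restoring the true mix.
    print(weights)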

But weighting can’t fix everything: the bulk of their sample came from 301 web surveys, and we don’t know how representative those 301 people are. Since anyone can sign up on the pollster’s website, this raises the potential for self-selection bias. It is impossible for someone reading the poll results to know whether there is a statistical bias and, if so, in which direction.

When Datapath’s federal poll showed Larry Bagnell with 44 per cent and Ryan Leef with 24 per cent, and Leef ended up winning, was that because people changed their minds or because the sample was skewed?

Finally, there’s question wording.

In the famous British comedy Yes, Prime Minister, the prime minister proposes reintroducing conscription since a poll shows 64 per cent of voters are in favour of it. To kill the idea, Sir Humphrey, the scheming top civil servant, proposes another poll with the questions carefully slanted the other way.

This brings us to the 2009 polling on the future of the Peel Watershed.

A poll with a sample of 500 came out that year saying, according to CBC, that “75 per cent agreed that a large part of the Peel Watershed should be free of industrial activities.”

Yet somehow we just had an election where the pro-mining Yukon Party won in the face of heavy pro-Peel campaigning.

Did Yukoners change their minds?

Was the sample of the poll not representative?

Or did voters think other issues were more important than the Peel?

It’s hard to tell.

The only lesson is that with polls, as with so much else, “buyer beware” is the operative principle.

Keith Halliday is a Yukon economist and author of the Aurore of the Yukon series of historical children’s adventure novels.