Online Polls, Big Stories, Shaky Foundations

December 24th, 2014

A special column from former ICM polling head, Nick Sparrow

Over the last three years the British Population Survey has been monitoring people who respond to online surveys and comparing them to the population as a whole, in terms of detailed demographics and attitudinal variables. It is a massive survey, involving 6,000–8,000 face-to-face, in-home interviews per month.

In an article published on the Research-Live website, Steve Abbott describes some of the important findings. Analysis suggests that online survey respondents are more active, in the broadest sense, than others: more likely to vote, holding stronger opinions, more optimistic and more volatile.

Such respondents are just what any journalist commissioning a poll would want, giving results that show people with strong opinions, suggest big movements in public opinion, and produce surprising and therefore newsworthy findings. And they are cheap as chips. As a result, they are everywhere. So much so that when we talk about “the polls” we mean, substantially, “online polls”. Yet online responders account for no more than 10% of the population: perhaps the most influential minority in Britain today, Abbott suggests.

Of course it helps that poll results are underpinned by detailed explanations of how representative samples are achieved, careful descriptions of multi-stage weighting by relevant demographic and other variables, and margins of error for the sceptical. The trouble is that using panellists who have themselves sought out the opportunity to give their views online involves, at the outset, abandoning all the principles of sampling theory. And other research has failed to find any form of weighting that can remove the differences in attitudes between online samples and the population as a whole, as measured by very large-scale random surveys. Demographic weighting does not help, nor does newspaper readership, nor even the use of key attitudinal variables that ought to be closely linked to the subject matter of an opinion poll.
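To see why demographic weighting cannot rescue a self-selected panel, consider a minimal sketch of post-stratification weighting, the standard adjustment the column refers to. All figures below are invented purely for illustration; they are not from the British Population Survey or any real poll.

```python
# Post-stratification: each respondent in a demographic cell is weighted by
# (population share of cell) / (sample share of cell). Figures are invented.

population_share = {"18-34": 0.28, "35-54": 0.34, "55+": 0.38}
sample_share     = {"18-34": 0.40, "35-54": 0.35, "55+": 0.25}

weights = {cell: population_share[cell] / sample_share[cell]
           for cell in population_share}

# Hypothetical "Yes" rates among the panellists in each cell.
yes_rate = {"18-34": 0.55, "35-54": 0.48, "55+": 0.40}

# Weighted estimate: the sample now mirrors population demographics exactly.
weighted_yes = sum(sample_share[c] * weights[c] * yes_rate[c]
                   for c in population_share)

# The catch: weighting only corrects the demographic mix. If the panellists
# *within* each cell hold atypically strong or optimistic views, the weighted
# estimate inherits that bias unchanged, which is the column's point.
```

The design limitation is visible in the arithmetic: the weights rescale each cell to its population share, but the `yes_rate` values themselves come entirely from the self-selected panel, so any within-cell attitudinal skew passes straight through to the headline figure.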

The importance of the British Population Survey results cannot be minimised. What if that poll, you know the one, the one that said “Yes” would win in Scotland, the one that panicked the whole political establishment into making wild promises of constitutional reform for us all, was – how can I put this – wrong; the product of views expressed by people with stronger opinions and a more optimistic outlook than others? People who might be more likely to embrace a new vision of an independent future for Scotland, and less concerned than others that, for example, Scotland might not have a currency or a place inside the EU.

What if online polls, built from panellists with stronger opinions than others, more optimistic and more volatile, suggest in the run-up to the next general election that the LibDems will be annihilated, that UKIP and the SNP in Scotland are surging, and that Farage and Salmond will be the new kingmakers and mould-breakers?

Can we exclude the possibility that the drip feed of such polls helps to create a bandwagon effect, influencing the outcome of elections and referenda? In the end the “Yes” campaign in Scotland did not do as well as predicted, but did it do better than it would have done had the polls suggested the “No” campaign was always going to win comfortably? What if online polls over the next few months inflate UKIP and the SNP, thereby encouraging more voters to switch to them? In the end they may not do as well as some polls predict, but they may do better than they would have done had earlier polls not suggested they were on the march.

This means pollsters are not innocent observers of public opinion, but active participants in the political process; not only reporting public opinion but helping to shape it. Participation that, the British Population Survey suggests, may rest on some very shaky foundations.

Nick Sparrow is the former head of polling at ICM