Nick Sparrow, the pollster who did most to change polling methods after 1992, on poll averaging, herding and the pressure to conform

Why Polls End Up Saying The Same Thing

Following the General Election, the pollsters have been accused of having herd instincts.  How else do so many polling companies, acting independently, get to the same – wrong – answer?

In the final days of the campaign, the polls mainly agreed on the likely outcome, and even suggested a late movement to Labour.  Polls of polls ironed out small differences and gave an even greater feeling of certainty.  But the natural belief that the average of independent observations is likely to be most accurate does not apply to vote intention polls.  Almost all the final polls in all general elections since the Second World War show bias, not error.  Put simply, they almost always err in one direction or the other, mainly underestimating the Conservatives.  In short, beware the average: it is only better than the worst and worse than the best.
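The distinction between bias and error can be illustrated numerically.  The sketch below uses invented figures (the true share, the bias and the number of polls are illustrative assumptions, not real polling data): six simulated polls share one systematic bias on top of their own independent sampling error, and averaging them cancels the independent noise while reproducing the shared bias almost exactly.

```python
import random

random.seed(1)

TRUE_CON_SHARE = 37.0   # hypothetical true Conservative share (%)
SHARED_BIAS = -3.0      # systematic underestimate common to every poll
N_POLLS = 6

# Each poll = truth + shared bias + its own independent sampling error.
polls = [TRUE_CON_SHARE + SHARED_BIAS + random.gauss(0, 1.0)
         for _ in range(N_POLLS)]

average = sum(polls) / len(polls)

# The poll of polls lands near truth + bias, not near the truth:
# averaging only helps with the error the polls do NOT share.
print(f"true share:        {TRUE_CON_SHARE:.1f}")
print(f"poll-of-polls avg: {average:.1f}")
print(f"residual error:    {average - TRUE_CON_SHARE:+.1f}")
```

The average here is "better than the worst and worse than the best" in exactly the sense the paragraph describes: it sits tightly around the biased value, however many polls are thrown in.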

Nevertheless, apart from a few days, or at most weeks, after a general election, pollsters are judged by media commentators mainly on the proximity of their predictions to the average, whether that average is calculated or more vaguely expected.  That pressure is steady and increases as polling day approaches.

A pollster with results diverging from the average will be asked by their client and others to examine every aspect of the methods for anything that might be “wrong”.  A pollster with results on the average can relax.

Those soft but sustained pressures will, over the years, tend to give greater prominence to those perfectly justifiable methods that lead in the direction of conformity, while less attention may be paid to methods that lead to a greater degree of divergence.  So the average is not only where the pollsters feel most comfortable; it is also where clients and political commentators believe the truth is most likely to lie.

However, the pressure to conform to the average of the polls in turn restricts the tone of political commentary.  Common sense might have told us that the Conservatives would do very well in the General Election.  Nowadays a general election is more like a presidential one, with ordinary voters deciding mainly on the look of the leader, his aspirations for Britain, his goals and ambitions.  Cameron vs Miliband was a mismatch.  Inasmuch as party and policy matter, Old Labour was so last century; its policy proposals lacked resonance in modern Britain.  The polls did not have the right smell about them.  Why did so few say so at the time?

Rather than herd instinct, the process by which pollsters and commentators influence each other may be better described as an informational cascade.  Over the long term, the publication of vote intention polls builds an expectation of what any new poll will predict, sometimes irrespective of other signals pointing in a different direction.  The theory suggests that vote intention polls, strongly promoted as reliable by the media owners who pay for them and projecting certainty about both the overall prediction and small fluctuations, can rapidly lead a much larger group to accept the likelihood of a particular outcome.  At some point, the theory goes, any person with a correct prediction (however it is obtained) can be convinced, through social pressure, to adopt an alternative and incorrect view of the likely outcome.
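The cascade mechanism has a standard formalisation in the economics literature (the Bikhchandani–Hirshleifer–Welch model).  The toy sketch below is a minimal illustration of that model, not anything from the article, and its decision rule and threshold are illustrative assumptions: each observer trusts a private signal unless earlier public choices favour one option by two or more, at which point they follow the crowd.  Two misleading early signals then lock every later observer into the wrong answer, however good their own information.

```python
def choose(public_choices, private_signal):
    """BHW-style rule of thumb: if earlier public choices favour one
    option by a margin of two or more, follow the crowd; otherwise
    trust your own private signal."""
    lead = public_choices.count("A") - public_choices.count("B")
    if lead >= 2:
        return "A"
    if lead <= -2:
        return "B"
    return private_signal

def run(signals):
    """Observers act in sequence, each seeing all earlier public choices."""
    choices = []
    for s in signals:
        choices.append(choose(choices, s))
    return choices

# Truth is "A".  The first two observers happen to draw misleading
# signals; the eight who follow all hold correct private signals,
# yet from the third observer onward the cascade locks in "B".
signals = ["B", "B"] + ["A"] * 8
print(run(signals))  # → ten choices of "B"
```

The point of the model, and of the paragraph above, is that each individual is behaving rationally given what they can see; it is the public sequence of choices, not stupidity, that propagates the wrong answer.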

Following a 1992-sized polling debacle, pollsters now need to take a hard look at their methods.  Still relevant are the recommendations made by the Market Research Society in the report published after 1992:

“We would encourage methodological pluralism; as long as we cannot be certain which techniques are best, uniformity must be a millstone – a danger signal rather than an indication of health.  We should applaud diversity; in a progressive industry experimentation is a means of development.  No pollster should feel the need to be defensive about responsible attempts to explore in a new direction …”

Now that is a lot easier to suggest than to do.  Between 1992 and 1997 I changed from quota face-to-face interviewing to random telephone polls (“you can’t do that, not everyone has a telephone”), started weighting by past voting (“you can’t do that, people imagine they voted for the party they now support – Himmelweit et al”) and adjusted for the likely votes of those who could not or would not say who they would vote for (“you are making up the answers”).

Defiantly, and with the backing of The Guardian, as the 1997 General Election approached I produced very different predictions from the rest, and in the process had my ear well and truly bent by many political commentators who had come to believe the average of the polls, most of which still used methods unchanged from 1992.

As it turned out, the ICM prediction was the most accurate, but in the run-up to polling day the pressure to adopt the alternative, less accurate average of the rest was intense.

Now, as then, pollsters should be seeking new solutions, and be unafraid of producing results very different to each other.  The average is clearly not to be trusted.  Sadly, I suggest, the likelihood is that come 2020 both pollsters and political commentators will again be converging on the average.

Nick Sparrow – former head of polling at ICM
