How did unadjusted polls impact on the politics of the 1990s?

    ICM solves the great historical polling mystery.

On Monday evening there was a vigorous debate on the site after I published an article on how a 3% lead in the ICM poll of October 1993 had been listed in Mori's table of all polls as a 15% margin. Some people simply did not believe it, because many recalled that period as one of mega Labour leads.

    This has become highly relevant today with many comparing the current Tory position with the huge margins that Labour appeared to have in the 1992-1997 parliament. But it could have been that those mega-leads were illusory.

Yesterday afternoon the boss of ICM, Nick Sparrow, posted the following explanation which, because it appeared at the end of the thread, was perhaps not widely seen. I think it is very important and am publishing it here now.

Let me clear up the confusion. After the polling debacle of 1992, alongside the Market Research Society’s investigation into that disaster, we at ICM set about a number of experiments to see if we could improve polling methodology. These experiments led us in early 1994 to introduce the methodology we now use. For a time we published in the Guardian both adjusted and unadjusted data.

As you might imagine this led to some confusion, and provided some in the industry with an opportunity to deliberately obfuscate. Following the success of the new methodology in predicting the outcome of the 1994 European Elections we decided, with the Guardian, to publish only our adjusted figures.

At that time we recalculated the voting intentions on all polls back to the election in 1992, and it is these figures that now appear in our trend tables. So, a 15% lead did indeed appear in The Guardian in October 1993 because we had not moved to the new methodology at that time. Nor did we publish the adjusted figures alongside the unadjusted data. Having fixed on the new methodology we went back and adjusted this poll along with others going back to 1992 to create a methodologically consistent data set. The adjusted data gave the lead at 3%.

Our search for a better methodology is fully explained in a paper I wrote with John Turner which I have posted onto our web-site. The graph showing adjusted and unadjusted vote intentions throughout the relevant period is shown in Figure 6.

The paper was awarded the Market Research Society’s Silver Medal as being the best paper in the Journal of the Market Research Society published in 1995.

This is the table that Nick refers to, showing the difference between the adjusted and unadjusted polls in the couple of years after the 1992 general election and the ERM crisis in September of that year.

ICM polls 1992-94.jpg

The paper that Nick links to is well worth reading and leaves you wondering about the impact of polling methodologies on the politics of the period.

    Would, for instance, Labour have handled the closing phases of their 1992 campaign differently if they had had polls based on today’s methodologies? Was the collapse of Tory support after the September 1992 crisis as great as it was reported at the time?

Today all the main pollsters, with the exception of Mori, use some adjusting mechanism to ensure that their samples are representative.
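For readers wondering what such an "adjusting mechanism" involves in practice, the sketch below illustrates the general idea: respondents in a raw sample are re-weighted so that one of their characteristics (here, recalled past vote) matches an assumed population target before voting intention is tallied. This is only a minimal illustration of the principle; the sample, targets and party shares are invented, and it should not be read as ICM's actual procedure.

```python
# Minimal, hypothetical sketch of re-weighting a poll sample so that recalled
# past vote matches an assumed population target. All numbers are invented.

from collections import Counter

# Each respondent: (recalled 1992 vote, current voting intention)
sample = (
    [("Con", "Con")] * 300 + [("Con", "Lab")] * 60 +
    [("Lab", "Lab")] * 420 + [("Lab", "Con")] * 20 +
    [("LD", "LD")] * 150 + [("None", "Lab")] * 50
)

# Assumed population shares of recalled 1992 vote (illustrative only)
targets = {"Con": 0.43, "Lab": 0.35, "LD": 0.18, "None": 0.04}

# Weight each respondent by: target share / share of their recall group in the sample
recall_counts = Counter(recall for recall, _ in sample)
weights = {g: targets[g] * len(sample) / recall_counts[g] for g in recall_counts}

def shares(weighted):
    """Return percentage voting intention, raw or re-weighted."""
    totals = Counter()
    for recall, intention in sample:
        totals[intention] += weights[recall] if weighted else 1.0
    grand = sum(totals.values())
    return {party: round(100 * v / grand, 1) for party, v in totals.items()}

print("Unadjusted:", shares(weighted=False))
print("Adjusted:  ", shares(weighted=True))
```

Even with made-up figures the direction of the effect is the familiar one: because the raw sample over-represents one party's recalled past vote, the adjusted shares narrow the headline lead compared with the unadjusted ones, which is the kind of gap Nick describes between the published 15% lead and the recalculated 3%.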

Mike Smithson
