The GE2015 polling fail put down to “unrepresentative samples”

Too many LAB supporters interviewed – not enough Tories

A new report published today by NatCen Social Research and authored by the leading psephologist Prof John Curtice suggests that the polls called the General Election wrong primarily because the samples of people they polled were not adequately representative of the country as a whole.

Rather than other explanations, such as a late swing to the Conservative Party, Labour abstentions, or so-called “shy Tories” not telling pollsters their true voting intentions, the report suggests that the polls’ difficulties arose primarily because they interviewed too many Labour supporters and not enough Conservatives.

Even when the polls went back to their respondents after the election and asked how they had voted, they still largely put the Conservatives neck and neck with Labour. In contrast, today’s report reveals that the 4,328 respondents to NatCen’s 2015 British Social Attitudes (BSA) survey put the Conservatives 6.1 points ahead of Labour, very close to the actual election result of a 6.6 point lead.

BSA was conducted very differently from the polls. It selected its respondents using random sampling, the approach recommended by statistical theory and which is less at risk of producing an unrepresentative sample. BSA’s relative success at replicating the result of the election is in line with that of a similar random sample survey conducted since the election on behalf of the British Election Study.
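The difference between the two approaches can be sketched with a toy simulation. All the numbers below are illustrative (the 37/30 split only roughly mirrors the 2015 Conservative/Labour vote shares), and the "biased frame" that reaches Labour voters twice as often is a stand-in for easy-to-contact online and phone panels, not a model of any actual pollster:

```python
import random

random.seed(42)

# Hypothetical electorate: 37% Conservative, 30% Labour, 33% other
# (roughly the 2015 shares; all figures here are illustrative).
electorate = ["con"] * 37_000 + ["lab"] * 30_000 + ["oth"] * 33_000

def lead(sample):
    """Conservative lead over Labour, in percentage points."""
    return 100 * (sample.count("con") - sample.count("lab")) / len(sample)

# Random sampling: every voter has the same chance of selection,
# so the estimated lead clusters around the true 7-point gap.
random_sample = random.sample(electorate, 5_000)

# A biased frame that reaches Labour voters twice as often skews
# the estimate, however large the sample (drawn with replacement).
weights = [2 if v == "lab" else 1 for v in electorate]
biased_sample = random.choices(electorate, weights=weights, k=5_000)

print(round(lead(random_sample), 1))   # close to the true 7-point lead
print(round(lead(biased_sample), 1))   # a spurious Labour lead
```

The point of the sketch is that the bias does not wash out with sample size: a bigger sample from a skewed frame just estimates the wrong number more precisely.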

If the polls were wrong because Conservative voters were especially reluctant to declare their preference or Labour supporters unwilling to admit that they had abstained, then both these surveys should also have got the result wrong. Instead BSA and the Election Study show that those who voted Conservative and those who abstained are capable of being found by survey researchers, so long as the right approach is used.

Two sources of error

The report goes on to suggest there are two main reasons why the sample of respondents interviewed by BSA 2015 proved to be more representative than those obtained by the polls.

1. More time and effort is needed to find Conservative voters. Polls are conducted over just two or three days, which means they are more likely to interview those who can be contacted most easily, whether over the internet or by phone.

The evidence from BSA suggests that those who are contacted most easily are less likely to be Conservative voters. The survey made repeated efforts over the course of four months to contact those who had been selected for interview. Among those who were contacted most easily – that is, they were interviewed the first time an interviewer called – Labour enjoyed a clear lead of no less than six points, a result not accounted for by the social profile of these respondents. In contrast, the Conservatives were eleven points ahead amongst those who were only interviewed after between three and six calls had been made.
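A back-of-the-envelope sketch shows how those two group leads combine. The six-point Labour lead among easy-to-reach respondents and the eleven-point Conservative lead among harder-to-reach ones come from the report; collapsing the sample into just those two groups, and the particular easy-to-reach shares used, are illustrative assumptions:

```python
# Con-minus-Lab leads (percentage points) by ease of contact,
# as reported for BSA; the two-group split is a simplification.
easy_lead = -6.0   # interviewed at the first call: Labour +6
hard_lead = 11.0   # interviewed after 3-6 calls: Conservative +11

def blended_lead(share_easy):
    """Overall Con lead if share_easy of the sample is easy-to-reach."""
    return share_easy * easy_lead + (1 - share_easy) * hard_lead

# A short-fieldwork poll dominated by easy-to-reach respondents
# sees a near tie; a survey that persists sees a clear Con lead.
print(blended_lead(0.65))  # mostly easy-to-reach: roughly level
print(blended_lead(0.40))  # more hard-to-reach reached: clear Con lead
```

On these assumptions, shifting the mix from 65% to 40% easy-to-reach respondents moves the headline figure from a dead heat to a Conservative lead of about four points – the same direction as the gap between the polls and the result.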

2. Identifying who is going to abstain is crucial. People who are interested in politics are more likely both to respond to polls and to vote. This means the polls are at risk of underestimating crucial differences in the inclination of different groups of voters to turn out.

Just 70% of those who took part in BSA 2015 said that they had voted, only slightly above the official turnout figure of 66%. More importantly, the survey shows that those aged 18-24 were around 30% less likely to vote than those aged 65 or more. Most polls, however, anticipated a smaller age gap than this. At the same time, BSA confirms the evidence of other surveys that Labour gained ground amongst younger voters in 2015 while the Conservatives advanced amongst older people. Thus any tendency among polls to overestimate the turnout of younger voters meant that there was a particularly strong risk in 2015 that Labour support would be overestimated.
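The turnout point can also be sketched numerically. In the toy calculation below, only the rough 30-point turnout gap echoes the BSA finding; the party shares within each age group, and the reduction of the electorate to two equal-sized groups, are illustrative assumptions:

```python
# Illustrative two-group electorate (equal group sizes for simplicity).
# Tuples: (Con share, Lab share, actual turnout) within each group.
# The shares are assumptions; the ~30-point turnout gap echoes BSA.
groups = {
    "18-24": (0.28, 0.43, 0.45),
    "65+":   (0.47, 0.23, 0.78),
}

def con_lead(turnout_by_group):
    """Con-minus-Lab lead (pp) under an assumed turnout per group."""
    con = lab = total = 0.0
    for group, (c, l, _) in groups.items():
        t = turnout_by_group[group]
        con += c * t
        lab += l * t
        total += t
    return 100 * (con - lab) / total

# With the actual turnout gap, older (more Conservative) voters carry
# more weight; assuming the young vote as often as the old shrinks
# the Conservative lead - i.e. it overstates Labour.
actual = {g: t for g, (_, _, t) in groups.items()}
equal = {"18-24": 0.78, "65+": 0.78}
print(round(con_lead(actual), 1))  # clear Con lead
print(round(con_lead(equal), 1))   # noticeably smaller Con lead
```

On these made-up shares, wrongly assuming equal turnout across age groups roughly halves the Conservative lead – exactly the kind of error the report says left the polls overstating Labour.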

Report author Prof John Curtice, Senior Research Fellow at NatCen, said: “A key lesson of the difficulties faced by the polls in the 2015 general election is that surveys not only need to ask the right questions but also the right people. The polls evidently came up short in that respect in 2015.

“BSA’s relative success in replicating the election result has underlined how random sampling, time-consuming and expensive though it may be, is more likely to produce a sample of people who are representative of Britain as a whole. Using that approach is crucial for any survey, such as BSA, that aims to provide an accurate picture of what the public thinks about the key social and political issues facing Britain and thus ensure we have a proper understanding of the climate of public opinion.”

Kirby Swales, Director of the Survey Research Centre at NatCen, said: “This research shows how difficult it is to secure a sample that is truly representative of the public, without which it is not possible to accurately generalise about what the public thinks. When we are seeking to understand the opinions or views on issues of particular importance, such as those of voters during a General Election campaign, a random sample survey like British Social Attitudes should be used wherever possible.”

  • Note: This post was based on a summary of the report issued by the centre – Mike Smithson
