Following publication of our latest four constituency polls for Alan Bown earlier this week, there has been considerable public discussion of the methodology we used for these polls, including specific criticisms from Anthony Wells of YouGov, Lord Ashcroft and an article by John Rentoul which lists a number of criticisms passed to him by “a Conservative source”. I would like to take this opportunity to address these criticisms and provide a robust explanation of our methodological choices.
In his article, John Rentoul lists the four main criticisms that have been levelled at our recent constituency polling. I will address each in turn.
1. The lack of weighting by past vote and the resulting fact that in South Thanet we had too many Labour 2010 voters relative to Conservatives.
As we have mentioned before, over the last six months we have conducted a general review of how we carry out constituency polling, and have concluded that we will stop using 2010 vote weighting for all constituency polls we publish.
As a result of not using past vote weighting, Anthony Wells of YouGov correctly points out that our past vote shares in South Thanet showed more Labour voters than were recorded in 2010. However, he does not mention that in Dudley North it was the Conservative past vote that was too large relative to Labour. In fact, there are two good reasons why past vote weighting might worsen rather than improve accuracy in constituency polls.
Firstly, data from the ONS imply that approximately 5% of the population of an average constituency moves out of the area each year. With the 2010 general election now three and a half years past, and coupled with new electors coming of age and old ones passing away, this suggests that 15-20% of the population resident in an average constituency today might not have been resident there at the time of the last election, making past vote weighting targets much less accurate on a constituency basis than they are nationally.
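The churn arithmetic above can be sketched as a back-of-the-envelope calculation. The 5% annual figure comes from the ONS estimate cited here; the assumption that the rate compounds year on year is ours:

```python
# Back-of-the-envelope estimate of constituency population churn since 2010.
# Assumes a constant 5% of residents move out of the area each year and
# compounds that rate over the 3.5 years since the general election.
annual_move_rate = 0.05
years_since_election = 3.5

# Fraction of 2010 residents still living in the constituency today.
still_resident = (1 - annual_move_rate) ** years_since_election
churn_from_moves = 1 - still_resident

print(f"Churn from house moves alone: {churn_from_moves:.1%}")
# New electors coming of age and older electors passing away push the
# total churn higher still, consistent with the 15-20% range above.
```

House moves alone account for roughly 16% churn on these assumptions, before demographic turnover is added in.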
Secondly, and perhaps more importantly, we have reason to believe that there is a substantial degree of false recall in these telephone polls when people are asked who they voted for at the last election. In every constituency we have polled so far, the proportion of people saying they voted UKIP in 2010 was higher than the actual recorded percentage from the last election. I cannot think of a plausible reason why, after having corrected for age, gender and ward, we would actually have over-sampled past UKIP voters so significantly and so consistently. Instead it seems far more likely that these additional “past UKIP” voters, virtually all of whom say they are currently planning to vote UKIP, are either consciously or subconsciously altering their response to make their views sound more consistent, or else are confusing the 2010 general election with a different election, perhaps the last local elections, in which they did actually vote UKIP (in South Thanet, for instance, UKIP came top in the 2013 local elections). To consistently depress the UKIP vote by down-weighting these voters, without a plausible hypothesis for why they are being “over-sampled”, would seem to be a major mistake.
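To illustrate why this matters, here is a minimal sketch of the mechanical effect of past-vote weighting when recalled UKIP vote exceeds the actual 2010 result. All figures here are invented for illustration and are not from any actual poll:

```python
# Hypothetical illustration: past-vote weighting depresses the current UKIP
# share when more respondents recall voting UKIP in 2010 than actually did
# (false recall). All numbers are invented.

sample = {
    # recalled 2010 vote -> (respondents, current UKIP intention among them)
    "Con":   (350, 0.15),
    "Lab":   (300, 0.10),
    "LD":    (150, 0.10),
    "UKIP":  (100, 0.95),   # far above UKIP's actual 2010 share
    "Other": (100, 0.20),
}

def ukip_share(weights):
    """Weighted current UKIP share across all recalled-vote groups."""
    total = sum(n * weights[p] for p, (n, _) in sample.items())
    ukip = sum(n * weights[p] * frac for p, (n, frac) in sample.items())
    return ukip / total

unweighted = ukip_share({p: 1.0 for p in sample})

# Past-vote weighting: down-weight "recalled UKIP" respondents towards the
# much smaller actual 2010 UKIP vote, up-weighting everyone else slightly.
weighted = ukip_share({"Con": 1.05, "Lab": 1.05, "LD": 1.05,
                       "UKIP": 0.35, "Other": 1.05})

print(f"UKIP share without past-vote weighting: {unweighted:.1%}")
print(f"UKIP share with past-vote weighting:    {weighted:.1%}")
```

Because nearly all the down-weighted “recalled UKIP” respondents are current UKIP intenders, the adjustment cuts several points off the headline UKIP figure, which is exactly the effect described above.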
2. An “unusually high” proportion of people who say they will vote not giving a voting intention (because they “don’t know” or “refused”). This was 21% in South Thanet, 31% in Great Grimsby and 38% in Dudley North.
This criticism is easily addressed: these proportions are not “unusually high” at all. In fact they are perfectly typical of telephone constituency polls conducted by all polling companies, and actually compare well with many of them. Take, for example, the Populus / Times poll for the Eastleigh by-election at the start of the year, in which fully 44% of people who said they would vote went on to give no voting intention, either refusing or saying “don’t know”. By comparison, a figure of only 21% for South Thanet is a very efficient response rate.
3. The don’t knows are not re-allocated based on past vote, to the advantage of UKIP
This is correct insofar as re-allocating “don’t knows” based on past vote does indeed significantly disadvantage UKIP, as not many people voted for them in 2010. However, one must ask whether such an adjustment to depress the UKIP vote is necessarily justified – is it really plausible that only 5% of “undecided” voters would be considering voting UKIP in a constituency where we know that more than 20% of “decided” voters are planning to vote UKIP? We would suggest that such a huge discrepancy between the preferences of “decided” and “undecided” voters is unlikely – most likely the parties that the “undecided” voters are trying to choose between are broadly the same as the ones that “decided” voters have already plumped for.
Currently Survation continues to use a re-allocation method based on 0.3 of the 2010 vote for our national opinion polls, on the basis that this problem will be less pronounced nationally than in those particular constituencies that have seen large changes in which parties are in contention since 2010 (which will be those where UKIP is doing best or where a significant Lib Dem vote has collapsed). However, we are currently in the process of reviewing this methodology nationally as well and will likely be replacing it with a new adjustment that takes more account of how “decided” voters are now planning to vote; something we have been testing recently alongside our existing approach.
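A minimal sketch of that 0.3 re-allocation, under our reading of the method (each “don’t know” contributes 0.3 of a vote to the party they recall voting for in 2010), shows why it disadvantages UKIP. All figures are invented for illustration:

```python
# Hypothetical illustration of re-allocating "don't knows" at 0.3 of their
# recalled 2010 vote, as in the national method described above.
# All figures are invented.

decided = {"Con": 300, "Lab": 320, "LD": 80, "UKIP": 200, "Other": 100}

# "Don't know" respondents, broken down by recalled 2010 vote: very few of
# them recall voting UKIP, because UKIP's 2010 vote was small.
dont_know_by_2010 = {"Con": 80, "Lab": 70, "LD": 40, "UKIP": 5, "Other": 5}

RATE = 0.3  # each undecided contributes 0.3 of a vote to their 2010 party

adjusted = {p: decided[p] + RATE * dont_know_by_2010.get(p, 0)
            for p in decided}

total = sum(adjusted.values())
shares = {p: v / total for p, v in adjusted.items()}

for p, s in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{p}: {s:.1%}")
# Because few "don't knows" recall voting UKIP in 2010, the adjustment
# barely adds to UKIP's total while boosting the older parties.
```

On these invented numbers UKIP's share falls below its share among decided voters alone, even though nothing about the undecided voters' actual current preferences has been measured.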
4. Survation prompts for UKIP
This is perhaps the most commonly cited criticism of our methodology and one which we have addressed a number of times in the past. Nevertheless I will attempt to expand here on why we not only believe this method to be justified, but why we feel that not prompting for UKIP at present could be a major mistake.
The main arguments against prompting for UKIP are outlined by Anthony Wells of YouGov here, as well as by Lord Ashcroft in his piece on measuring UKIP support here. Lord Ashcroft states, simply enough, that “Most pollsters continue to judge that naming UKIP in the initial voting intention question has the effect of exaggerating the party’s score.” This, in essence, is an empirical claim that where UKIP has been prompted it has polled above what it scored in actual elections, whilst implicitly suggesting that where UKIP has not been prompted it has polled at about the right level compared with real election results. Anthony Wells expands on this point further by showing that in the European elections of 2005 when YouGov prompted for all parties (including UKIP), they over-estimated UKIP’s final vote share by four points, but when they used a closed prompt in 2009 (excluding UKIP) they polled UKIP’s vote share about right.
This argument is perfectly valid as far as it goes and, indeed, if it were still 2009 then I would be in wholehearted agreement with Anthony Wells. However, there can be little doubt that the political landscape has been hugely transformed since the formation of the coalition in 2010, such that assumptions that were true before the last election may need re-evaluating. Whichever polls you look at, UKIP has at the very least tripled its vote share since 2010 and has for the first time acquired a significant core support base of loyal voters. It has displaced the Liberal Democrats as the main repository of “none of the above” voters and has also benefitted from a huge surge in publicity, such that far more people are likely to be aware of the party’s existence than were only a few years ago.
The empirical evidence since 2010 no longer supports the assertion that polls are “exaggerating the party’s score”. Take a look at the table below, showing all the Ashcroft by-election polls this parliament, along with comparable Survation polls.
In every single case bar one, the opinion polls this Parliament have under-estimated rather than over-estimated the UKIP vote, in some cases very significantly. The degree of under-estimation has also increased markedly since early 2012, around the time that the current UKIP “surge” in the polls began.
In Eastleigh both the Ashcroft / Populus polls and our own Survation polls significantly underestimated the UKIP vote (partly no doubt due to the late UKIP surge near polling day). We also both had problematic over-estimates of the Conservative vote, with one of ours and one of Ashcroft’s polls showing them ahead of the Lib Dems – a problem which we have attributed at our end to false recall skewing the past-vote weighting (something which contributed significantly to our decision to abandon that method, as discussed above). See our analysis at the time of the by-election here for further discussion.
Comparing the Feltham & Heston Ashcroft and Survation polls shows that Survation was closer to the final results, largely by correctly predicting the close tie for third place between the Lib Dems and UKIP. Lord Ashcroft erroneously had UKIP significantly below the Lib Dems, which we would suggest was largely because he included the Lib Dems in his prompt but not UKIP.
Finally – Ashcroft and Survation Polls Lead to the Same Conclusion
Overall, though, it is the similarity rather than the differences between our polls and Lord Ashcroft’s that lends credibility to our recent work. In September, Lord Ashcroft published the results of his poll of nearly 13,000 voters in marginal seats across the country – an extensive and highly informative piece of research which, disagreements over prompting aside, can hardly be criticised as lacking in methodological rigour. The aggregate result was a Labour lead in marginal seats of 14 points. Averaging the results of our own five polls in marginal seats so far (excluding Bognor and Folkestone, which are “safe” Conservative seats) shows a Labour lead over the Conservatives of… 14 points.
In other words, despite the prompting differences and other criticisms that have been levelled at our methodology, the key result from both Lord Ashcroft’s and Survation’s constituency polls is exactly the same. The Conservatives are significantly underperforming their national average across key marginal seats, regardless of where the UKIP vote stands (Ashcroft has them at 11% across all marginals; we have them at 23% across those five marginals where we expected them to be around their very best – probably not that different on average).
Rather than attempting to dismiss the results of polls that show them doing badly, unnamed “Conservative sources” might be better off actually taking notice of how much they are struggling versus Labour in marginal seats and taking action to address their unpopularity.
– By Patrick Briône
Director of Research, Survation