Why is Angus Reid seemingly different?
Research Director, Andy Morris, answers your questions
(Every time there’s a new PB/Angus Reid poll, a lot of comments are made here and elsewhere about the firm, and I thought it would be useful if Andy Morris could write a post and be available to answer questions. A particular issue has been the lower Labour shares compared with other firms, though it’s worth pointing out as well that, since we started, AR is the only pollster not to have recorded a Tory share of more than 40%. Thanks, Andy, for your co-operation – Mike Smithson)
Why do we poll online?
Three main reasons. First, we believe that it is the most representative means of conducting research. True, not everybody has access to the internet, but not everybody is available to take a phone call at 6.30pm or to answer the door when the interviewer knocks. By allowing respondents to answer the questionnaire at a time and place of their choosing, you increase the percentage of people available to answer across all demographics, and internet uptake is now high enough in every sector of society for us to sample each of them properly.
Second, you have much greater control over your sample online and can therefore fine-tune it to a greater degree than you can by phone or face-to-face, and studies have shown that people are more honest online than when talking to an interviewer – they are more willing to give what might be regarded as socially unacceptable answers. Third, online is a more cost-effective way of conducting polling, and the negligible incremental cost of additional sample means that sample sizes can be much greater.
Why do we ask a question before the voting intention question? Most pollsters choose to ask the voting intention question first for fear of introducing bias into the poll. We fully respect that, but we feel it is important to warm people up with a non-partisan, politics-based question first. We want to know how they would vote at a polling booth, following a period in which they have been much more politicised than they are now. The issues question, in a very small way, helps to put people in that context.
When we conduct new product research, rather than just ask a respondent whether they would buy the product, we often create a mocked-up interactive shop so that they can choose whether or not to buy the product in the context of being in a shop with other products around it. A different type of research and a different approach, but the same principle: behaviour is best predicted by getting people as close to a real situation as possible.
I have seen arguments that this approach is anti-incumbent; with a non-partisan question, that would only be the case if the very fact of having a campaign were anti-incumbent, and that depends on the prevailing political sentiment. Our record in Canada is impeccable, and we’ve had perfect forecasts for incumbents (Federal Tories, BC Liberals, Manitoba NDP) and non-incumbents (Saskatchewan Party, Nova Scotia NDP) using the issue question as a warm-up.
Why do we ask who people will support? This is one way (along with a follow-up question) that we use to tease out leaners. For most people, ‘vote for’ and ‘support’ make no difference. A small number aren’t prepared at this stage to commit to saying they would ‘vote for’ a party, but the reality is that if they support a party, then they will most likely vote for it. Comparative testing, by the way, suggests that this wording makes no difference to the headline voting intention figures.
Why do we ask about constituencies? This is another attempt to put people in the context of how they actually vote. A small number of people are generally supporters of Party A, but when they come to vote they look at the situation in their constituency and, for whatever reason, vote for Party B. It is important, therefore, to remind them that they vote in a constituency, not in one nationwide ballot.
Why do we weight by past vote? Most research is sampled and weighted in some way to make sure that you have the correct proportions of people across society, or across the group that you are targeting. The surest predictor of how people will vote next time is how they voted last time, and therefore ensuring that you have those people in your poll in the right proportions is vital.
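To make the mechanics concrete, here is a minimal sketch of past-vote weighting in Python. It is my illustration only, with made-up sample counts and approximate 2005 targets, not Angus Reid’s actual weighting code.

```python
# Minimal sketch of past-vote weighting (illustrative only, not Angus
# Reid's actual code). Respondents who recall a 2005 vote are weighted
# so that recalled shares in the sample match the real 2005 result.

# Approximate 2005 GB vote shares used as targets.
target_2005 = {"Lab": 0.36, "Con": 0.33, "LD": 0.23, "Other": 0.08}

# Hypothetical sample: how many respondents recall voting for each party.
sample_recall = {"Lab": 420, "Con": 290, "LD": 200, "Other": 90}

total = sum(sample_recall.values())

# Each respondent's weight is target share / observed share for the
# party they recall having voted for.
weights = {
    party: target_2005[party] / (count / total)
    for party, count in sample_recall.items()
}

for party, weight in weights.items():
    print(f"{party}: weight {weight:.2f}")
# In this made-up sample, Labour recallers (42% observed vs a 36% target)
# are weighted down, while Lib Dem recallers (20% vs 23%) are weighted up.
```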
Why do we not adjust for ‘false recall’? The other pollsters in GB that use past vote weighting assume that a certain number of people incorrectly recall who they voted for, and that a higher proportion incorrectly recall voting for Labour. They therefore adjust their past vote weights, to differing extents, to take account of this, which has the effect of boosting Labour in their polls. There is then a legitimate debate about the somewhat arbitrary amount by which Labour should be weighted up. We avoid this debate by assuming that people are accurate in their answer to the past vote question, just as we assume that they are accurate in their answers to other questions.
We acknowledge the existence of false recall and have no philosophical objection to incorporating it in our weightings. However, we feel the research we have done suggests that, in the context of the 2010 election, with the Conservatives currently in the ascendancy, false recall is consistent, and therefore the best predictor of past vote is claimed past vote.
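To show what is at stake in that debate, here is a hypothetical comparison of the two approaches: weighting straight to the actual 2005 result versus blending the target halfway towards recalled vote, which is one possible form a false-recall adjustment can take. The numbers are made up for illustration and are not any pollster’s actual targets.

```python
# Hypothetical comparison (my numbers, not any pollster's actual targets)
# of past-vote targets with and without a false-recall adjustment.

actual_lab_2005 = 0.36   # approximate 2005 GB Labour share
recalled_lab = 0.42      # hypothetical Labour recall observed in samples

# No adjustment (the approach described above): trust claimed recall and
# weight Labour recallers all the way back to the actual result.
target_unadjusted = actual_lab_2005

# One possible adjustment: treat half of the gap between recall and the
# actual result as genuine false recall, so only weight halfway back.
target_adjusted = (actual_lab_2005 + recalled_lab) / 2

print(target_unadjusted)  # 0.36 -> Labour recallers weighted down to 36%
print(target_adjusted)    # 0.39 -> weighted down less, leaving a higher
                          # Labour share in the headline figures
```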
Will we be making any adjustments in the run-up to the election? We are very comfortable with our approach; we have seen it work successfully time and again in Canada and feel we have adapted it to be just as successful here, so don’t expect to see major changes. That said, like everyone else I imagine, we will continue to fine-tune our approach, particularly to sampling.
One of the keys to our success in Canada has been a slavish desire to microsample by area, so we will be dividing the GB constituencies into 100+ groups of five or six similar constituencies and sampling based on what we call these ‘superconstituencies’. Not a fundamental change, but a fine-tune to make sure our sample is perfect and not just very good.
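As a rough illustration of what sampling by ‘superconstituencies’ could look like, here is a short Python sketch. The grouping rule, the similarity score and the constituency data are all placeholders of my own; Angus Reid’s actual clustering will use its own criteria.

```python
# Rough sketch of the "superconstituency" idea: chunk similar
# constituencies into cells of about six seats and give each cell a
# sampling quota. Placeholder data and a placeholder similarity score;
# not Angus Reid's actual method.

GROUP_SIZE = 6
TOTAL_SAMPLE = 2000

# Hypothetical input: constituency -> (similarity score, electorate).
constituencies = {
    "Seat A": (0.12, 70000),
    "Seat B": (0.15, 68000),
    "Seat C": (0.47, 72000),
    "Seat D": (0.51, 65000),
    # ... in reality all 600+ GB constituencies ...
}

# Sort by the similarity score so adjacent entries are alike, then chunk
# into superconstituencies of six.
ordered = sorted(constituencies.items(), key=lambda item: item[1][0])
groups = [ordered[i:i + GROUP_SIZE] for i in range(0, len(ordered), GROUP_SIZE)]

total_electorate = sum(e for _, (_, e) in constituencies.items())

# Give each superconstituency a quota proportional to its electorate.
quotas = [
    round(TOTAL_SAMPLE * sum(e for _, (_, e) in group) / total_electorate)
    for group in groups
]
print(quotas)
```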
Will the polls converge before the election? It is one of the great complaints of poll watchers that polls can be very different in the run-up to an election and then all converge as it gets nearer. Will that happen again? I can’t see into the future, but I would imagine it is quite likely. Many of the techniques applied by pollsters are attempts to get at the situation as it will be on the day, and they become less prone to fluctuation as that day approaches. For example, some pollsters take likelihood to vote into account and others don’t. As the election nears, this matters less and less, because increasingly respondents know for certain whether or not they will vote and answer the voting intention question accordingly.
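For concreteness, here is a small sketch of the likelihood-to-vote weighting mentioned above. It is a generic illustration of the technique, with made-up respondents and a simple likelihood/10 weight, rather than any particular pollster’s scheme.

```python
# Generic sketch of weighting voting intention by stated likelihood to
# vote (made-up respondents; not any specific pollster's scheme).

# (current voting intention, self-rated likelihood to vote on a 1-10 scale)
respondents = [
    ("Con", 10), ("Lab", 8), ("LD", 5), ("Lab", 10), ("Con", 3),
]

# Weight each respondent by likelihood/10. Far from an election this can
# shift the shares; near polling day most people answer 10 or 1, so
# applying it or not makes little difference.
weighted = {}
for party, likelihood in respondents:
    weighted[party] = weighted.get(party, 0) + likelihood / 10

total = sum(weighted.values())
shares = {party: round(count / total, 2) for party, count in weighted.items()}
print(shares)  # e.g. {'Con': 0.36, 'Lab': 0.5, 'LD': 0.14}
```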
Are we right? We think so. We’ve had great success in Canada, we are very happy with our methodology, and Mike’s golden rule favours us. On a site for political gamblers, if I were a betting man I’d bet on ARPO to win the battle of the pollsters.