— Mike Smithson (@MSmithsonPB) November 11, 2012
Which did best – which did worst?
Nate Silver of the New York Times has produced the above polling accuracy table. He’s based it on surveys in the final three weeks rather than just the closing poll.
Nate’s reasoning is that there’s a tendency, which we see in the UK as well, for pollsters to herd round the consensus in their closing polls. Often you have to read the fine print to discover that different weightings or methodologies have been brought in.
It is based on both national and state polls and there’s a minimum qualification of having published five surveys.
Interestingly, the firm right at the bottom is the one that created modern political polling back in the mid-1930s – Gallup. For the closing period of the campaign they were putting forward numbers that were very much out of sync with the rest and, no doubt, they’ll be looking afresh at their methodologies.
Gallup’s daily tracker, together with Rasmussen, had a huge impact on perceptions of how the battle was evolving, and both firms’ numbers also did much to skew the polling averages. Next time they won’t be taken so seriously.
These tables are always good reference points and, no doubt, will be referred to often over the coming four years.
Well done Nate on a great election and thank you. I’m sure that many PBers are a bit richer this weekend after following your predictions.