Andy Cooke on the UNS – Part 1

Uniform National Swing – an investigation

I’ve put together a short series on UNS – what it is, what its track record looks like, and what levels of distortion have occurred in recent elections. This is part one of three.

My assumptions about the distortions to UNS that have accumulated, and are likely (in my opinion) to rebound, have been controversial to some. That’s fine – the point has always been to give you information to ponder; you make your own decisions. One comment that has been raised concerns the scale of the difference from UNS as measured from 2005 – about 4 points away from the widely believed requirement of a 10.2% Conservative lead for a bare majority. A change of that size from what UNS said before the election would not be a first for the elections since 1987. Or even a second. Too many commentators quote the UNS requirement as if it were Gospel – when one thing we can be sure of is that after the election, in hindsight, we’ll note that the lead required was not 10.2%. I’d say it’s highly unlikely to be more – though that’s for you to decide – and if not, it would have to be less. Possibly considerably less. Too many people seem to use the polls and UNS like the proverbial drunk uses a lamppost – more for support than illumination.

There’s a common title for the 1992 election around these parts: “The Failure of the Polls”. That well-known failure actually masked a potentially greater one – in 1992, the UNS prediction broke, and it has never since been fixed.

The simple Additive UNS seat prediction is truly simple. Take all constituencies from the last election. Assume the swing from then to now is uniform nationally (the name is not very imaginative …) and apply it in every constituency. If the Conservative vote has gone down by 4 points, deduct 4 points from the Conservative score in every constituency. If the Labour vote has gone up by 6 points, add 6 points to the Labour vote in every constituency. If the Lib Dem vote goes down by … you get the idea. Then look at every constituency again and see who won. Add up the new totals and you’ve got the make-up of the next House of Commons. Done.
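If you’d like to see that recipe as something more concrete than prose, here’s a minimal sketch in Python. The three constituencies, their vote shares and the national changes are all invented for illustration – a real run would load the full list of 646 seats from the 2005 results:

```python
# A minimal additive-UNS projection: not Andy's actual spreadsheet,
# just the recipe above in code. All figures invented for illustration.

# Last election's result in each constituency: party -> % vote share
results_2005 = {
    "Seat A": {"Con": 45.0, "Lab": 35.0, "LD": 20.0},
    "Seat B": {"Con": 30.0, "Lab": 48.0, "LD": 22.0},
    "Seat C": {"Con": 38.0, "Lab": 36.0, "LD": 26.0},
}

# National change in each party's share since then, in points
national_change = {"Con": +4.0, "Lab": -6.0, "LD": +2.0}

def project_seats(results, change):
    """Apply the same national change in every seat, then count winners."""
    totals = {}
    for seat, shares in results.items():
        projected = {p: shares[p] + change.get(p, 0.0) for p in shares}
        winner = max(projected, key=projected.get)
        totals[winner] = totals.get(winner, 0) + 1
    return totals

print(project_seats(results_2005, national_change))
# {'Con': 2, 'Lab': 1} -- the projected make-up of the Commons
```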

There are plenty of complaints about UNS. It’s an oversimplification – the swing isn’t uniform nationally. It’s mathematically illogical in some circumstances – if the Tory vote was 5% in your constituency and it has dropped 6 points nationally since the last election, how many votes do they get this time? It’s psychologically irrational – would the Tories really put on the same 7 points of vote share in Glasgow Central, Bootle, Huntingdon, Croydon Central and Harlow? Surely safe seats, no-hope seats and marginals would have hugely different swings? But it’s been held up as the standard because it’s deemed a fairly good stab. A decent approximation, for all its flaws. That’s fine – but if you’re going to risk your hard-earned cash, you need to be aware of its track record.
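To put a number on that “mathematically illogical” complaint, here’s the 5%-minus-6-points case worked through, along with two workarounds you’ll see in practice: flooring the share at zero, or using a multiplicative (“ratio”) swing instead of an additive one. The figures are mine, purely for illustration:

```python
# The "5% share, down 6 points" case from above, worked through.
local_share = 5.0        # Tory share in this seat last time
national_change = -6.0   # national change in the Tory share, in points

additive = local_share + national_change           # -1.0%: nonsense
floored = max(0.0, local_share + national_change)  # 0.0%: crude patch

# A multiplicative ("ratio") swing scales the local share by the
# national ratio instead: a party falling from 33% to 27% nationally
# keeps 27/33 of its vote everywhere. Figures again invented.
national_before, national_after = 33.0, 27.0
ratio = local_share * (national_after / national_before)

print(additive, floored, round(ratio, 1))  # -1.0 0.0 4.1
```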

There have been two major problems with forecasting the seat totals for elections: the reliability of opinion poll data, and the reliability of the method used to convert that data into seats. The opinion poll issue has been addressed here frequently. Mike’s Golden Rule of polling – the poll that’s worst for Labour is usually the closest to the real result – is well known. There’s a reason he’s taken that stance: in the past, the Labour position relative to the Tories has almost always been overstated. Of course the pollsters are aware of this, and have repeatedly investigated the problem and put in place ever more sophisticated models to overcome it – but the average of polls has still always overstated the Labour position relative to the Conservatives. In 1997 the error was about as bad as in 1992; in 2001 it was still very bad; in 2005, rather good. All polling companies got within the margin of error (MoE) of the target, and one got it spot on. However, it still seems likely that there was a residual bias – the putative MoE errors all occurred in one direction only: that of overstating the Labour position relative to the Conservatives.

The pollsters will know this as well, and will be working again to overcome it. But will their models – fashioned in an era of Labour popularity – work when the swing may be strongly against Labour? Or will we find that they have solved the problem? The inaccuracy of the average eve-of-election poll from 1992 onwards has masked the repeated failures of UNS – until 2005.

I set up a simple UNS spreadsheet to have a look at what UNS said should have happened at each election from 1987 to 2005. What I found was that in 1987, it was great. From 1992 to 2005, it didn’t work. The requirements for a Tory majority (for example) were up to 4.3 points away from what UNS had said they should be before the election. The advantage has swung from slightly in the Conservatives’ favour to extraordinarily advantageous for Labour – and slightly back again.
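For anyone who wants to replicate the exercise without a spreadsheet, here’s a rough sketch of how the “required lead” figure falls out: slide a uniform Con/Lab swing across the old results until the Conservatives cross the majority line, then convert that swing into a national lead. The toy data, the pure two-party swing and Labour’s roughly 3-point national lead in 2005 are my simplifications, not the actual spreadsheet:

```python
# Back out the Conservative lead needed for a bare majority under
# additive UNS. A sketch only: the three-seat `results` dict below and
# Labour's roughly 3-point 2005 national lead are stand-ins; a real
# run would load all 646 constituencies and target 324 seats.

def con_seats(results, swing):
    """Con seats won after `swing` points move from Lab to Con everywhere."""
    won = 0
    for shares in results.values():
        adjusted = dict(shares)
        adjusted["Con"] += swing
        adjusted["Lab"] -= swing
        if max(adjusted, key=adjusted.get) == "Con":
            won += 1
    return won

def required_con_lead(results, majority, lab_lead_last_time=3.0):
    """Scan swings in 0.1-point steps until Con reach `majority` seats.

    A swing of s points narrows the national Con-Lab gap by 2s, so the
    implied Con lead on polling day is 2*s minus Labour's old lead.
    """
    for tenths in range(0, 301):   # scan swings of 0.0 to 30.0 points
        swing = tenths / 10
        if con_seats(results, swing) >= majority:
            return 2 * swing - lab_lead_last_time
    return None                    # no majority in the scanned range

results = {
    "Seat A": {"Con": 45.0, "Lab": 35.0, "LD": 20.0},
    "Seat B": {"Con": 38.0, "Lab": 42.1, "LD": 19.9},
    "Seat C": {"Con": 25.0, "Lab": 55.0, "LD": 20.0},
}
print(round(required_con_lead(results, majority=2), 1))  # 1.2
```

Run against the real 2005 numbers, a scan of this kind is where figures like the familiar 10.2% come from – and, as this piece argues, what it says before the election has rarely matched what hindsight shows was actually needed.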

Some of the asymmetry has arisen because the Lib Dems have won most of their seats from the Conservatives, but following 2005 that asymmetry is starting to be addressed. Nevertheless, relying on UNS before an election would be unwise. Use it as a (very rough) guide, certainly – but be aware that the requirements always change in hindsight. Once the election has been fought, we can say “actually, they needed a lead of this much”. Since UNS’s last success in 1987, the Conservative lead required for a majority has differed in hindsight by between 2 and 4.3 points from the position claimed by UNS before the election. Today, UNS claims it’s 10.2%. What will it say was actually the case once the election has been fought?
