2015 Election Polls – What Went Wrong?

As everyone remembers, the polls before the May General Election got it wrong. The vote was predicted to be a close-run race, with Labour and the Conservatives neck and neck. When the results came in, it was clear this wasn’t the case: the Tories claimed a 6.5% lead and were able to form a majority government. This raised questions over the methodology used by pollsters, and an independent inquiry was launched. This week, we were fortunate to attend the presentation of the inquiry’s initial findings, chaired by Professor Patrick Sturgis, at the Royal Statistical Society.

The presentation began with an overview of the methods used by pollsters. In all cases, a quota sample of eligible adults was selected, such that target numbers of people within certain demographic groups (e.g. age, social grade) were interviewed. The vote intention of each respondent was then weighted in two ways: first, by the probability that they would actually vote, and second, by demographic group, so that the sample was representative of the UK population as a whole.
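
To make the two weighting steps concrete, here is a minimal sketch in Python. The respondents, the population shares and the likelihood-to-vote scale are all invented for illustration; real pollsters use far more demographic groups and their own turnout models.

```python
import pandas as pd

# Hypothetical respondents: vote intention, age band, and self-reported
# likelihood to vote on a 0-10 scale (a question pollsters commonly ask).
sample = pd.DataFrame({
    "party":      ["Con", "Lab", "Con", "Lab", "Con", "Lab"],
    "age_band":   ["18-34", "18-34", "35-54", "35-54", "55+", "55+"],
    "likelihood": [9, 6, 10, 8, 10, 7],
})

# Illustrative population shares per age band (not the real UK targets).
population_share = {"18-34": 0.28, "35-54": 0.35, "55+": 0.37}

# Demographic weight: population share divided by the band's sample share,
# so over-represented bands are weighted down and vice versa.
sample_share = sample["age_band"].value_counts(normalize=True)
sample["demo_weight"] = sample["age_band"].map(
    lambda band: population_share[band] / sample_share[band]
)

# Turnout weight: one crude convention is simply likelihood / 10.
sample["turnout_weight"] = sample["likelihood"] / 10

sample["weight"] = sample["demo_weight"] * sample["turnout_weight"]

# Weighted vote shares.
shares = sample.groupby("party")["weight"].sum()
print((shares / shares.sum()).round(3))
```

How stated likelihood is mapped to a voting probability varies between polling houses; the simple rescaling above is just one rough convention.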

Where did the method break down? The potential sources of error discussed were a late swing to the Tories, error in the estimated probability that a person would vote, and the selection of unrepresentative samples. While there was found to be a small late swing and some error in turnout probabilities, neither of these could explain the amount by which the polls missed. By elimination, this left unrepresentative sampling as the likely cause. This could have arisen, for example, through over-surveying of the politically engaged, or through weighting by demographic groups that are insufficiently specific.
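
Unrepresentative sampling is easy to demonstrate in a toy simulation. Suppose the politically engaged are both more likely to join a polling panel and more likely to lean a particular way; the numbers below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical electorate of 100,000: the politically engaged lean Labour
# more strongly than the disengaged (illustrative numbers only).
N = 100_000
engaged = rng.random(N) < 0.4                  # 40% of voters are "engaged"
p_labour = np.where(engaged, 0.45, 0.32)
votes_labour = rng.random(N) < p_labour

true_share = votes_labour.mean()

# A panel that inadvertently over-recruits the engaged: here they are four
# times as likely to be sampled as the disengaged.
inclusion_prob = np.where(engaged, 0.004, 0.001)
sampled = rng.random(N) < inclusion_prob

poll_share = votes_labour[sampled].mean()

print(f"true Labour share:   {true_share:.3f}")   # about 0.37
print(f"polled Labour share: {poll_share:.3f}")   # biased upwards, ~0.41
```

Because engagement is not itself a weighting variable, demographic weighting alone cannot correct this: the sample can look representative on age and social grade while still being skewed.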

Further questions were raised as to why, despite being so far off the result, the polls all made predictions so close to one another. There was some evidence of herding: a shift in weightings, whether conscious or unconscious, that brings a poll’s result closer to those of other polls. However, in all cases where weightings were altered mid-campaign, the changes in fact moved the result closer to the eventual election outcome. The committee stressed that there was no evidence of malpractice.
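
The statistical signature of herding is a spread of published polls tighter than independent sampling would produce. A rough check of what that spread should look like, with illustrative numbers:

```python
import numpy as np

# Under independent sampling, a poll of size n estimating a true share p has
# standard error sqrt(p * (1 - p) / n). Published polls clustering much more
# tightly than this is consistent with herding.
n, p = 1_000, 0.34            # illustrative sample size and vote share
se = np.sqrt(p * (1 - p) / n)
print(f"expected standard error: {se:.4f}")   # ~0.015, i.e. ~1.5 points

# Ten simulated independent polls for comparison.
rng = np.random.default_rng(1)
polls = rng.binomial(n, p, size=10) / n
print(f"simulated spread (sd):   {polls.std():.4f}")
```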

Overall, then, how can the pollsters do better, and will they do better next time? It is clear that some improvement can be made on population representation by, for example, weighting by more specific population groups. However, there is still no guarantee that these samples will be representative. In addition, late swing and error in turnout probabilities may not have caused too much trouble this time, but that doesn’t mean they won’t in the future. Polling methodology rests on fragile assumptions, but until a more robust alternative is developed, it will continue to provide our best estimate.
