On 15 October 1987, the senior and well-respected BBC weatherman Michael Fish failed to predict the onset of the worst storm to hit the south of England in three centuries. That incident is emblazoned in the annals of the weather forecasting industry, and it prompted some serious rethinking about how to deal with extreme events. Is the Trump election victory a ‘Michael Fish moment’ for political science and the polling industry?
Political scientists strive to be objective, seeking to develop evidence-based analysis and to make arguments based on facts. Of course, this is not foolproof. We’re all human, after all; opinion can creep in, especially when the subject matter lends itself to it. And it can’t be denied that in the case of Donald Trump the subject matter is certainly toxic. It was hardly surprising, therefore, that on the eve of the election hundreds of US political scientists signed an open letter condemning Trump as ‘a unique menace to American democracy’. Their view was shared by many of the rest of us looking on from the outside.
It was perfectly understandable why so many of us would find Trump abhorrent as an individual. But this risked underestimating what his support base represented, an unintended downplaying of what drove millions of US citizens to vote for him. With such a reprehensible candidate topping the Republican ticket (repulsive even to mainstream Republicans) spouting what most regarded as a toxic campaign message, it was easy enough to believe that Trump would ultimately fail. He broke the campaign rulebook by refusing to move to the centre-ground after the primary season; he insulted every possible reference group (women, migrants, army veterans); his campaign lacked an effective get-out-the-vote strategy on polling day; the main media outlets and all manner of experts were queuing up to condemn him as a candidate unfit for office; and, as polling day approached, the polls consistently put Clinton in the lead both nationally and in terms of Electoral College votes. The pundits – many of them prominent political scientists – were largely at one in predicting a Clinton victory.
In the end, they were close: the final outcome was within the margin of error, and Clinton did win the popular vote. Just as in 2000 (which saw the election of George W. Bush despite the fact that Al Gore had more votes), it was the vagaries of the US Electoral College system that made her the loser.
But the more important point is that the mood music from expert punditry had Clinton winning. That mood music was overblown.
This is not the first time the pundits have got it wrong. It was much the same in predicting the outcome of the Brexit vote, and indeed the most recent British general election. In fairness, of course, this is because the public opinion data were pointing to the wrong outcomes: the experts were drawing conclusions from the evidence base presented to them.
One lesson to draw from this is that the design of the evidence base needs rethinking. If we’re going to continue to use survey research in a predictive capacity then some thought needs to be given to how to improve the methodology.
But this is more than just a problem of skewed samples and poor weighting models; questions also need to be asked about question designs that fail to tap important opinion shifts. The fact that one of the wealthiest people in the US – a man with his own private jet decked out with gold and gaudy glitz – should attract 46% of the popular vote and such a large base of ‘anti-establishment’ support, particularly from among the less well-off, points to tectonic shifts in voter opinion.
The norms that underlie representative democracy as we know it – most notably perhaps the nature of ‘class politics’ – are breaking down. It’s time for all of us, political scientists included, to wake up to this challenge.
Ultimately, it may be over-egging things to suggest that this is a ‘Michael Fish moment’ for political science, but there is no doubt that these are testing times.