The testing of pundits

In the recent US election, many notable pundits kept their partisan blinkers firmly affixed to their faces. As a consequence, they called things wrong.

Pundits are almost never held to account for their calls… it certainly would be interesting if someone did.

One person, though, has made a study of pundits in all sorts of areas, from economics to politics, and the results are interesting.

The shocking and little-acknowledged truth is that most expert forecasts are wrong. Not only wrong, but more wrong than if they had been generated at random.

Two decades ago psychologist Philip Tetlock, of the University of California, Berkeley, began testing the forecasts of 284 famous Americans who made their living pontificating about politics and economics.

As he says in his book Expert Political Judgment: How Good Is It?, it wasn’t easy to pin them down. When stripped of rhetoric, their predictions were surprisingly slippery.

So he surveyed them, asking every few months whether the variable they covered would (a) stay the same, (b) increase or (c) decrease.

More than 82,000 testable forecasts later, he found that as a group the experts performed worse than if they had just selected (a), (b), then (c) in rotation. They performed worse than a dartboard.

Two Reserve Bank economists have just found the same thing about the Reserve Bank itself. One year out, its unemployment forecasts have been “less accurate than a random walk”.

Heh, most are worse than a chuck at a dartboard.

There are exceptions. Weather forecasters are especially good, as we are discovering right now. The New York Times data geek Nate Silver got the presidential election spot on. These exceptions tell us something. Neither Silver nor our weather forecasters think they are experts (Silver comes from sports rather than politics). They are guided by the data, regardless of who it offends, rather than their own judgment.

By contrast, experts have reputations to protect. Whether they realise it or not, they often play games, avoiding intellectual curiosity if it will leave them out on a limb away from the pack. They remember their good forecasts and bury the bad. Put plainly, they are not the sort of people you would want providing economic advice that could have catastrophic consequences.

Nassim Nicholas Taleb, author of The Black Swan: The Impact of the Highly Improbable and the new book Antifragile, asks why predictors keep predicting, given that their predictions are so often wrong. His answer: “They are not harmed by what they are doing”.