Why it’s so hard to make accurate predictions

According to psychologist Nancy Kim, it’s crucial for experts to incorporate statistical predictions into their forecasts.

This week marked the release of the 200th edition of The Farmers’ Almanac, which is primarily known for providing long-range weather forecasts. But its historically spotty predictions—last year, for example, the guide wrongly predicted heavy snow for the Mid-Atlantic region—call into question the very value of prognostication itself.

Nancy Kim, associate professor of psychology at Northeastern, studies conceptual thinking, reasoning, and decision-making. And her forthcoming book, Judgment and Decision-Making in the Lab and the World, will include a chapter on the psychology of prediction. We asked her to consider the cottage industry of professional prognostication, with a particular focus on why the public seems to cling to pundits’ predictions, which so often fail to come to fruition.

Yogi Berra, the idiosyncratic baseball player known for his seemingly unintentional witticisms, once said, “It’s tough to make predictions, especially about the future.” In your opinion, why is it so difficult for experts to predict social phenomena like elections, wars, and economic crises?

There are two basic kinds of predictions that people make: intuitive predictions, which rely on experience and intuition, and statistical predictions, which rely on data and algorithms. When meteorologists try to predict tomorrow’s weather, they’ll be able to draw upon mountains of carefully recorded data on precise atmospheric conditions and what the weather was actually like. They can look at computer models, which are constantly being honed. But predicting the outcome of events like elections is much different—and much harder—because of their uniqueness. There is no directly relevant data. You could try analyzing data from past elections, but every political election is different, with candidates who have never gone up against each other before and a different social and economic climate. Polling data is helpful, but it’s still all future-oriented guesses, not hard data cemented in the past. It consists entirely of people’s individual predictions about how they are going to vote in the future and whether they are actually going to make it to the polls, based on what they know and how they feel at the time of the poll.
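
To make the distinction concrete, here is a toy Python sketch of what a statistical prediction looks like next to an intuitive one; the temperature readings, the blending weight, and the gut guess are all invented for illustration and are not drawn from any real forecasting model.

# Toy sketch only: a "statistical" forecast built from recorded data,
# contrasted with an intuitive guess. All numbers are invented.

historical_highs = [61, 58, 64, 59, 62, 60, 63]  # past highs for this date (degrees F)
todays_high = 66                                 # today's recorded high

climatology = sum(historical_highs) / len(historical_highs)  # long-run average

# Blend today's observation toward the historical average. The 0.7 weight is an
# arbitrary choice for the example, not a tuned model parameter.
statistical_forecast = 0.7 * todays_high + 0.3 * climatology

intuitive_forecast = 70  # a gut guess with no recorded data behind it

print(f"Statistical forecast: {statistical_forecast:.1f} F")
print(f"Intuitive forecast:   {intuitive_forecast} F")

The point is not the arithmetic but the provenance: the first number can be checked, rerun, and improved as new data arrive; the second cannot.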

What are some of the keys to making accurate predictions?

In the research literature, two primary messages rise to the fore. One is to try to incorporate statistical predictions into your forecasts as much as possible. You see this in sports, with baseball teams that harness the power of data to build their rosters instead of relying solely on scouts to pick the best players. You also see it in health. For example, the most recent meta-analyses of mental disorder diagnosis show that statistical prediction has an accuracy advantage over clinical intuition in identifying disorders. The current trend, however, seems to suggest that mental health professionals will continue to depend on their clinical intuition while taking into account the statistical data.
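
As a purely hypothetical illustration of a statistical rule of the kind Kim describes, the sketch below scores a short symptom checklist against fixed weights and a cutoff; the items, weights, and cutoff are invented for this example and do not correspond to any real diagnostic instrument.

# Hypothetical actuarial screening rule: fixed weights and a fixed cutoff.
# The symptoms, weights, and cutoff are invented; this is not a real instrument.

def actuarial_score(symptoms):
    weights = {
        "low_mood": 2,
        "sleep_disturbance": 1,
        "loss_of_interest": 2,
        "impaired_concentration": 1,
    }
    return sum(weights.get(s, 0) for s in symptoms)

patient_symptoms = ["low_mood", "sleep_disturbance", "loss_of_interest"]
score = actuarial_score(patient_symptoms)
flagged = score >= 4  # the statistical rule: flag for full evaluation at or above the cutoff

print(f"Checklist score: {score}, flagged by the statistical rule: {flagged}")

In the combined approach Kim mentions, a clinician would treat that flag as one input alongside the interview and the patient’s history rather than as the final word.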

The second message is that expert predictors developed their expertise by relying on tons of corrective feedback to shape their forecasts. Weather forecasting is often considered the gold standard of prediction, because meteorologists receive so much corrective feedback, enabling them to constantly rework their algorithms. In other fields, it can be harder to get feedback. In some medical practices, for example, it might be difficult for doctors to receive feedback on the accuracy of their diagnoses, particularly if they have to rely solely on their patients to provide it.
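
One standard way to turn outcomes into the kind of corrective feedback Kim describes is to score each probabilistic forecast after the fact. The sketch below uses the Brier score, the mean squared difference between the stated probability and what actually happened, on a handful of made-up rain forecasts.

# Corrective feedback in miniature: score past probabilistic forecasts with the
# Brier score (mean squared error between forecast probability and outcome).
# The forecasts and outcomes below are invented for illustration.

forecasts = [0.9, 0.7, 0.8, 0.6, 0.95]  # stated probability of rain each day
outcomes  = [1,   0,   1,   1,   1]     # 1 = it rained, 0 = it did not

brier = sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)
print(f"Brier score: {brier:.3f} (0 is perfect; lower is better)")

A forecaster who reviews a score like this after every forecast can see whether their “90 percent” days really come true about 90 percent of the time, which is exactly the feedback loop that fields with rarer or slower outcomes lack.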

Social psychologist Philip Tetlock, who is known for holding “forecasting tournaments” to test people’s ability to predict complex events, has found that “the accuracy of an expert’s predictions actually has an inverse relationship to his or her self-confidence, renown, and depth of knowledge.” Why, then, do we continue to listen to people who appear as experts on TV, get quoted in newspapers, and participate in punditry roundtables?

Oftentimes we hope that the pundits we see on TV or read in the paper might be offering up good ideas and novel information. But their predictions could be harmful if they don’t strive to consider relevant data and make clear the limits of their ability to predict future events. This is easier said than done: TV experts want to appear confident and compelling. Expressing their uncertainty about a particular issue would be appropriate, but doesn’t make for good TV.

Having said that, we should keep in mind that incorrect predictions do not necessarily suggest faulty reasoning. Consider the case of the patient whose doctor recommends a surgery with a 98 percent success rate. Most people say, “The odds are pretty good, go have the surgery.” But if you told those same people that the patient died during the procedure, most would think the doctor’s reasoning was poor. It’s called outcome bias—an error made in evaluating the quality of a decision when the outcome of that decision is already known. Whether a pundit is right or wrong shouldn’t matter as much as it does, so long as his or her reasoning is sound given the information that was available at the time the prediction was made.
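
A small simulation, using the article’s 98 percent figure and otherwise made-up assumptions, shows why a single bad outcome is weak evidence about the quality of the recommendation.

# Outcome bias illustration: simulate many patients who all receive the same
# (sound) recommendation for a surgery with a 98 percent success rate.
# The setup is invented for illustration.
import random

random.seed(0)
success_rate = 0.98
n_patients = 10_000

failures = sum(1 for _ in range(n_patients) if random.random() > success_rate)
print(f"Failures: {failures} of {n_patients} patients (about {failures / n_patients:.1%})")

# Every patient received identical advice based on identical reasoning; roughly
# 2 in 100 still draw a bad outcome. Judging those cases as "poor reasoning"
# is outcome bias: the luck changed, not the quality of the decision.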

Laypeople make predictions every day, prognosticating on everything from sports and politics to weather and entertainment. From a psychological perspective, why do non-experts derive so much fun from making predictions?

In addition to the entertainment factor, I think there’s a psychological benefit to feeling like we have some control over what might happen to us in the future. But it’s more deep-seated than that. Every morning we wake up and make predictions about how the day will unfold based on past experience. To make these forecasts—to conjure up mental pictures of what’s ahead—is one of the most remarkable things about being human.