Consider the following. After years of study, researchers estimate with a high degree of certainty that there is a 60% chance of a particular event (call it **A**) happening. When asked to make a discrete prediction of whether or not **A** *will actually happen* at a moment in time, 100 out of 100 experts independently conclude that the event will happen.

Now consider this. After years of study, researchers estimate with a high degree of certainty that there is a 50% chance of a particular event (call it **B**) happening. When asked to make a discrete prediction of whether or not **B** *will actually happen* at a moment in time, 50 out of 100 experts independently conclude that the event will happen.

The expert predictions in both of these scenarios are perfectly rational. These independent expert predictions provide the most accurate long-run information about whether or not **A** and **B** will happen. However, in the second scenario the *aggregate* prediction (e.g., obtained by taking the average) is precisely correct, while in the first scenario the *aggregate* prediction is infinitely wrong: the average says 100% when the true probability is 60%.
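The two scenarios can be sketched in a few lines of code. The model below is an assumption for illustration: each expert votes "will happen" whenever their probability estimate exceeds 50%, and flips a coin when it is exactly 50%.

```python
import random


def aggregate_prediction(p, n_experts=100, seed=0):
    """Fraction of experts predicting the event will happen, assuming
    each expert votes 1 iff their estimate p exceeds 0.5, and flips a
    fair coin at exactly 0.5 (a modeling assumption)."""
    rng = random.Random(seed)
    votes = []
    for _ in range(n_experts):
        if p > 0.5:
            votes.append(1)
        elif p < 0.5:
            votes.append(0)
        else:
            votes.append(rng.randint(0, 1))
    return sum(votes) / n_experts


print(aggregate_prediction(0.6))  # 1.0 -- unanimous, far from the true 0.6
print(aggregate_prediction(0.5))  # close to 0.5 -- near the true probability
```

Every individual vote is rational, yet the averaged votes recover the true probability only in the second case.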

If you want to see a real world example, take a look at the predictions of 18 experts on the NHL post season for 2020:

https://www.sportsnet.ca/hockey/nhl/sportsnet-nhl-insiders-2020-stanley-cup-playoffs-predictions/

All 18 experts predict that Pittsburgh is going to win their playoff series. For each expert this prediction makes sense; by most measures, Pittsburgh is the better team. However, this information does not give me a realistic representation of the actual probability that Pittsburgh will win. As bad as Montreal is, they have a better than 0% chance of winning the series.

In contrast, if we sum the number of experts predicting New York will win and divide it by the total number of predictions, New York is given a 56% chance of winning their series. This number is probably a pretty good long-run estimate of the probability that New York will win the series. There is no consensus, and that actually yields a more realistic aggregate prediction!

What this quasi-paradox suggests is that the closer experts are to a consensus about an event, the more likely we are to get a bad *aggregate* estimate of the true probability of that event. If we combine the expert predictions, we will think that the event is more (or less) probable than it actually is.

This is a reminder of why, when consulting an expert, we should not ask *if* something will happen, but instead ask about the *probability* that something will happen. Among other things, this probability is something I can average across experts to get a sort of 'meta' prediction.
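The fix above can be sketched directly: average the experts' probability estimates rather than their yes/no calls. The expert estimates below are hypothetical numbers chosen to mirror scenario **A**.

```python
def meta_prediction(probabilities):
    """Average the experts' probability estimates -- a simple 'meta'
    prediction -- instead of tallying their discrete yes/no votes."""
    return sum(probabilities) / len(probabilities)


# 100 experts who each estimate roughly a 60% chance (hypothetical data):
# the average stays near 0.6, whereas a unanimous yes/no vote reads as 100%.
estimates = [0.6] * 100
print(meta_prediction(estimates))  # ~0.6
```

Averaging probabilities preserves the uncertainty that a unanimous discrete vote throws away.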

It is also a reminder not to mistake an expert consensus about an event as equivalent to a guarantee that the event will happen.