Unfulfilled Conditions

by F.

In a “reputation economy,” you need to maximize your reputation, but if your reputation depends on making predictions about the future, it can suffer when you turn out to be wrong. So don’t be wrong.

Easier said than done, of course. But there is a solution, as we can see by studying how pundits or experts adroitly manipulate the data so they are never really wrong—and in fact, can’t be wrong.

It turns out there is a pattern to this manipulation: experts tend to use seven basic “defenses” to do it. They are outlined in P.E. Tetlock’s book, Expert Political Judgment. They are useful to remember to get yourself out of a jam when you’re wrong and it matters. They are, of course, completely self-serving and bogus. But they will preserve your reputation if used convincingly. Because it’s not about truth; it’s about getting people to believe your bullshit. Even if you’re not a pundit, the same rules apply. If you look closely, you’ll see these patterns in any human institution—a company, an academic department, wherever.

Forecasts are typically conditional: if x is satisfied, then y will occur, but the first part of the conditional is often hidden or unstated. So, when weather forecasters say “Tomorrow it will be sunny,” this is actually a conditional: “if certain conditions are present, tomorrow will be sunny.”

This is a useful aspect of forecasts because you can go back, after the fact, and (a) invent the conditions which you (wink wink) “actually” assumed would be satisfied and then (b) say, “Well, sure, my forecast was wrong, but I assumed x, and x wasn’t satisfied.” Provided x is reasonable or confusing enough, you can preserve your reputation because you weren’t really wrong. In other words,

Challenge whether the conditions for hypothesis testing were fulfilled.
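
To see the skeleton of the move, here is a minimal logical sketch (my notation, not Tetlock's). The forecast is sold as a flat claim but, after the fact, defended as a material conditional:

\[
\text{as sold: } y \qquad\qquad \text{as defended: } x \rightarrow y
\]

When \(\neg y\) is observed, the expert asserts \(\neg x\) ("the conditions weren't met"). A material conditional with a false antecedent comes out true, so \(x \rightarrow y\) survives untouched, and so does the reputation attached to it.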

Here’s an example from Tetlock (p. 130) concerning modeling the 2000 United States presidential election:

A panel of prominent political scientists announced in August 2000 that their models of presidential elections foretold the outcome of the upcoming Bush-Gore contest. With confidence estimates ranging from 85 percent to 97 percent, they declared Gore would win—and win big with at least 52 percent and perhaps as much as 60 percent of the vote. There was no need to pay attention to campaign trivia or polling blips. The incumbent party could not lose: the economy was too strong, the country too secure, and presidential approval too high [emphasis added].

There was only one problem: the forecasters were about as wrong as they could possibly be. And when you can’t fool the public about whether your prediction came true (“Well, you may think Bush won, but actually Gore is the President…”), you go back and work the antecedent conditions:

After the election, [Tetlock et al] found that the more sympathetic forecasters were to the modelers, the more they argued that the basic premise of the [modeling] enterprise was still sound: the models had just been fed misleading macroeconomic numbers.

In other words, the conditions had not been satisfied. The condition? “Correct data was fed into the model.”

Now, think about how often they could say that that condition wasn’t satisfied. Will the input data ever be 100% correct? Even 90% correct? And couldn’t the experts always go back and say, “Oh, yes, but we didn’t know about z, and if z had been taken into account, then I would have been right.”

Sure. That’s the point. The experts can’t be wrong, and so they can preserve their reputations, allowing them to continue to sell their products—expert opinion, articles, position papers, speaking opportunities, op-eds, talk show appearances, and so on. The system works! And if you think this is too cynical, listen to a resident of a think tank who was interviewed by Tetlock: “I fight to preserve my reputation in a cutthroat adversarial culture. I woo dumb-ass reporters who want glib sound bites.” One can’t help but think these folks know exactly what they are doing: selling snake-oil.

Variation: The Historical Counterfactual

Here’s an example from the 1990s. An economist suggests that Russia institute a policy of economic “shock therapy” in order to avoid hyperinflation. Russia does institute shock therapy, but there is still hyperinflation, and the economy goes to the dogs. Uh oh. The economist’s reputation will suffer. The solution? Pull a historical counterfactual out of your ass: “if Yeltsin had practiced real shock therapy, Russia would have avoided this new bout of hyperinflation” (Tetlock 131; emphasis added). The experts can say the economist merely “failed to anticipate how maladroitly the policy would be implemented.” Problem solved.

As Tetlock says, “counterfactual history becomes a convenient graveyard for burying embarrassing conditional forecasts.”
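
The same skeleton applies, with a twist (again my own gloss, using the standard counterfactual arrow from Lewis-style conditional logic):

\[
x' \,\square\!\rightarrow\, y, \qquad \text{where } x' \text{ (“real shock therapy”) is known never to have held.}
\]

Because the antecedent never obtained, no observation can bear on the claim at all; it is not so much vacuously true as permanently untestable, which is exactly what makes the graveyard so convenient.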

Tomorrow: The Exogenous-shock Defense.
