Don’t Be Such a False Negative Nancy

Arun Solanky
11 min read · Jul 27, 2021
Yes, the photoshopping is intentionally shitty.

A couple of years ago, my sister sent me an article about a guy who said “yes” to everything for a week: He wanted to get out of his comfort zone. He then, almost immediately, landed in the hospital after severely injuring himself trying CrossFit for the first time. I daresay he escaped his comfort zone.

My sister’s joke, of course, was that I’m tough to make plans with. I will never just say “yes.” I don’t like beaches, pickup basketball, pools, chess, board games, arts and crafts, or Soul Cycle. My idea of a good time is aligning boxes on a PowerPoint slide while listening to My Beautiful Dark Twisted Fantasy for the 400,000th time. Bliss.

But, when that article came up in conversation recently, it got me thinking. Is it so bad to say “No” all the time? Isn’t it better to relax, play it safe, and minimize my risk of being run over by a car, or of stubbing my toe, or of being bitten by a rabid squirrel? Isn’t it better to err on the side of caution?

There are four possible outcomes for a scientific prediction.

In the scientific method, there are two types of “wrongness”: False Positives and False Negatives. A False Positive, also known as a “Type I Error,” is when you predict something will be the case, but it turns out not to be. For example, if you think that John’s boyfriend is tall, but he’s actually quite short, that would be a false positive.

The other type of error is a False Negative, also known as the “Type II Error”: You predict something won’t be true, but it actually is. A false negative would be predicting that John’s boyfriend won’t be tall, but when you meet the boyfriend in question, he turns out to be, in fact, tall.
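If it helps to see all four outcomes laid out, here’s a quick Python sketch. The function and its labels are mine, purely for illustration:

```python
def classify(predicted_tall: bool, actually_tall: bool) -> str:
    """Label a prediction against reality, using the boyfriend example."""
    if predicted_tall and actually_tall:
        return "True Positive: you said tall, and he is"
    if predicted_tall and not actually_tall:
        return "False Positive (Type I Error): you said tall, but he's short"
    if not predicted_tall and actually_tall:
        return "False Negative (Type II Error): you said short, but he's tall"
    return "True Negative: you said short, and he is"

# Walk through all four possible outcomes:
for prediction in (True, False):
    for reality in (True, False):
        print(classify(prediction, reality))
```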

While this example is trivial, Type II errors are very common and very costly. Humans have a structural bias toward assuming too little risk because, in the short run, we find loss to be disproportionately painful, and we struggle to conceptualize foregone gains. We can try to fight our tendency toward inaction, but it’s hard work — work that pays huge dividends in the long run.

What is a Type II Error?

Correctly guessing the height of someone whom you’ve never met doesn’t really matter. However, there are many circumstances where it’s critical to make accurate predictions about the world.

In many circumstances, it is more costly to be excessively conservative than it is to be excessively optimistic.

For example, if I were responsible for Adidas’s supply chain and consistently made false-positive errors by assuming that everyone would love every new shoe, I’d end up in hot water pretty quickly: Adidas would manufacture way too many shoes and lose money. But if I consistently made false-negative errors, assuming demand would be lower than it actually was, I’d be leaving tons of money on the table. In fact, given that the profit per shoe is often greater than the cost of manufacturing and shipping one, I’d actually lose more money by ordering too few shoes than by ordering too many. In many circumstances, it is more costly to be excessively conservative than it is to be excessively optimistic.
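Here’s a back-of-the-envelope Python sketch of that asymmetry. All of the figures are invented; the only assumption doing the work is that profit per shoe exceeds the cost of making one:

```python
# Toy model of the shoe example (all numbers invented for illustration).
unit_cost = 40      # cost to manufacture and ship one pair
unit_profit = 60    # profit earned on each pair actually sold
demand = 1000       # what customers actually want

def net_profit(ordered: int) -> int:
    """Net profit if we order `ordered` pairs against true demand."""
    sold = min(ordered, demand)
    unsold = max(ordered - demand, 0)
    return sold * unit_profit - unsold * unit_cost

print(net_profit(1200))  # over-order by 200:  1000*60 - 200*40 = 52,000
print(net_profit(800))   # under-order by 200: 800*60 = 48,000
```

Under-ordering by 200 pairs costs $12,000 in foregone profit; over-ordering by 200 costs only $8,000 in write-offs. The false negative is the pricier mistake, but only the false positive leaves a pile of unsold shoes for everyone to see.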

Of course, this isn’t always the case. There are circumstances where the risk from a screw-up is way worse than the payoffs from overperformance. If you were a plant operator at the Chernobyl nuclear plant on Saturday, April 26th, 1986, getting your job done better and faster might have gotten you a promotion a little sooner. Screwing up got you an agonizing death from radiation poisoning.

This is also true of activities like handling nuclear weapons, welding a spaceship, or holding a baby: The penalties for doing marginally worse are far greater than the payoffs for doing marginally better. But in many circumstances, the payoffs for doing well are much greater than the costs of screwing up. Nevertheless, most of the time we assume it’s safer to err on the side of caution rather than deciding on a case-by-case basis. How come?

The Trouble with Opportunity Cost

One of the core reasons for our intrinsic conservatism is our psychological tendency to discount opportunity costs. “Opportunity cost” is the principle in economics that the real cost of something is the value of the best alternative you give up for it. If I spend a dollar buying M&Ms, I can’t use that same dollar to buy Skittles. By getting the M&Ms, I’m sacrificing the Skittles, not the dollar — a dollar is a virtually worthless piece of cloth or a wholly worthless electron in a computer. If humans were perfectly rational and followed the principles of classical economics, we’d make all our decisions by quantifying and evaluating opportunity costs. In practice, we often don’t.

“Bartender! Fetch me the 4Loko, please. I’m feeling classy tonight.”

It’s annoying to think really hard about every decision, and it’s not always obvious when you’ve foregone a better alternative. When I’m at a bar, I don’t carefully consider the merits, dollar for dollar, of every single drink on the menu. That would take forever. I ask for a Vodka Redbull, so I can skip ahead to the part of the night where I start challenging my coworkers to push-up competitions. Moreover, it’s impossible to definitively know how much pleasure I would have gotten from tequila shots rather than the Vodka Redbull — uncertainty is inherent when calculating opportunity cost.

Loss Aversion and Prospect Theory

Compounding the issue, humans loathe losing more than we like winning, making Type II errors far more likely. The costs imposed by false negatives are typically foregone benefits rather than actual sacrifices, and a perfectly rational homo economicus would find losing a dollar of income and losing a dollar from their wallet indistinguishable. Unfortunately, a robust body of real-world evidence shows that humans are dramatically and irrationally more loss-averse than reward-seeking. As a result, people are willing to accept unreasonably high opportunity costs to avoid much smaller “normal” costs.

Researchers consistently find at least 70% of participants would strongly prefer the second option, even though it’s economically irrational.

Experiments consistently show people consider avoiding a loss more than twice as valuable as an equivalent gain. Consider the following scenario: Imagine that today is payday, when you normally receive $1000. You have two options: Option A, where I take $1000 out of your bank account, or Option B, where you just don’t receive your $1000 paycheck. Economically, the two options are identical, but subjects rarely see it that way: Researchers consistently find at least 70% of participants would strongly prefer the second option, even though the preference is economically irrational.

This graph demonstrates how people have asymmetric preferences for loss avoidance rather than reward-seeking. Subjects found a loss of 5 cents far more painful than a gain of 5 cents was pleasurable.

This phenomenon of “loss aversion,” first described by the legendary Princeton psychologist Daniel Kahneman and his frequent collaborator, Amos Tversky of Stanford, has dramatic implications for our wealth, security, and happiness. If we’re consistently taking too little risk, an enormous amount of progress and profit is being left on the proverbial table.

Let’s return to the Adidas store. An average, loss-averse manager would be consistently willing to forgo 500 dollars of profit to avoid 250 dollars of losses. Over time, she’d consistently lose money, harming herself, her employees, her employers, and the economy writ large. Unfortunately, it’s also pretty unlikely she’d be punished because of the aforementioned problem with opportunity costs: They’re hard to measure. It’s very obvious when someone loses money, but less obvious when someone makes less money than they should. It’s especially hard if everyone is making less money than they should.
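You can put numbers on the manager’s choice using the value function from Kahneman and Tversky’s prospect theory, with the parameters they estimated in 1992. Applying it to the shoe-store figures is my own illustration:

```python
# Prospect-theory value function with Tversky & Kahneman's 1992 estimates:
# diminishing sensitivity (alpha = 0.88) and loss aversion (lambda = 2.25).
ALPHA, LAMBDA = 0.88, 2.25

def value(x: float) -> float:
    """Subjective value of gaining (or losing) x dollars."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * (-x) ** ALPHA

# The manager's bet: $500 of potential profit vs. $250 of potential losses.
print(value(500))   # ~237 units of subjective pleasure
print(value(-250))  # ~-290 units of subjective pain
```

The potential pain outweighs the potential pleasure, so she passes on the bet, even though the dollars say she shouldn’t.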

Over time, these distortions lead to perverse internal politics. Workers recognize that failure is punished more harshly than success is rewarded, deterring risk-taking among even the most hard-nosed and inventive employees. This unfortunately common type of culture ossifies organizations, stifles innovation, and breeds complacency.

This article has been pretty theoretical so far — I’ve talked about hypothetical Adidas stores, imaginary nights out, and decades-old behavioral economics studies. But false-negativity bias is anything but a theoretical issue. It can help to explain everything from the profitability of financial institutions to the challenges faced by the vaccine rollout.

Case No. 1: The Equity Premium Puzzle

The False Positive Outcome: I think a stock will rise, but it actually remains flat or falls.

The False Negative Outcome: I think a stock won’t go up, but it actually rises.

The equity premium puzzle is one of the most persistently confusing questions in finance. Simply put, financial economics is premised on the idea that investors are rewarded for taking risks — companies and borrowers need to pay investors and lenders to assuage their fears of going broke. But, strangely enough, American investors are rewarded more than they should be, relative to the riskiness of the US stock market.

More rational, less loss-averse middle-class Americans could have claimed a bigger portion of the value generated by the American stock market

Without going into the gory technical details, Americans are over-invested in zero-risk assets like US Treasury bonds and under-invested in higher-returning assets like the stock market, driving up the gap between the two asset classes’ returns. There are various explanations for this phenomenon, but one of the most prominent comes from Daniel Kahneman, who argues that, for investors, losses “loom larger” than gains — i.e., investors are irrationally loss-averse.
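To get a feel for how much money gets left on the table, here’s a toy compounding comparison. The return figures are rough long-run ballparks I’ve assumed for illustration, not a forecast:

```python
# How the equity premium compounds (rates are rough ballparks, not data).
stock_return = 0.07   # assumed long-run real return on US stocks
bond_return = 0.02    # assumed long-run real return on Treasuries
principal = 10_000
years = 30

stocks = principal * (1 + stock_return) ** years
bonds = principal * (1 + bond_return) ** years
print(f"Stocks:     ${stocks:,.0f}")  # ~$76,000
print(f"Treasuries: ${bonds:,.0f}")   # ~$18,000
```

Over thirty years, the loss-averse saver ends up with roughly a quarter of what the stock investor does.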

“Fuck it. Arun told me to take more financial risks. All my money on naked S&P BTC calls.”

These consistently high returns have helped sophisticated, highly leveraged investors earn outsized profits for decades, spurring the growth of income and wealth inequality. To be clear, I’m not an ideologue who thinks it’s bad for rich people to make money through investing — I’m saying it’s bad for Main Street investors to throw away money. More rational, less loss-averse middle-class Americans could easily have claimed a bigger portion of the value generated by the American stock market, improving their and their families’ lives while encouraging a more efficient economy.

Case No. 2: The COVID-19 Vaccine Rollout

False Positive Scenario: COVID-19 vaccines were approved by the FDA, but didn’t actually work.

False Negative Scenario: COVID-19 vaccines were not approved by the FDA, but actually provided significant immunity to COVID.

Typically, healthcare is an industry that really ought to be very loss averse, like nuclear energy. In both, the payoff distribution is negatively skewed: The worst outcomes (e.g., Fukushima Daiichi) are catastrophically worse than the best outcomes are good (quick, name the top-performing nuclear plant in Japan). As a result, these industries have extremely low risk tolerance and are usually governed by regulatory regimes that are even more loss averse.

A negatively skewed distribution: The rare, extreme outcomes are the bad ones.

Typically, that kind of thinking makes sense. We don’t want random cowboy doctors “having a little experiment” on Grandpa Joe. House MD is a great show, but I’d prefer a doctor who does things by the book.

“Don’t worry! I’m like 60% sure this heart transplant method works!”

COVID-19, however, had a different risk distribution entirely. According to estimates, the daily cost of pandemic lockdowns ran into the hundreds of billions of dollars. Antoine Mandel of the Paris School of Economics and Vipin Veetil of IIT Madras estimated that between 7 and 23% of global GDP was destroyed by pandemic restrictions, costing the world somewhere between 6 and 20 trillion dollars. Bill Gates, lately a leading authority on developmental economics, noted that “[global poverty reduction was] set back about 25 years in about 25 weeks” due to pandemic disruptions. And, of course, millions of people died, a staggering human tragedy that will resound for a generation to come. COVID-19 therefore had a radically different risk profile from normal healthcare: The distribution flipped to a positive skew, where even marginal reductions in the length of the pandemic had such enormous payoffs that it was quite reasonable to take pretty huge risks in the hopes of making things even a little better.
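As a crude illustration of that flipped distribution, consider a toy expected-value calculation. Every number below is invented; the point is the shape of the bet, not the figures:

```python
# Crude expected-value sketch of accelerating vaccine approval.
# All inputs are invented for illustration, not estimates.
daily_cost = 200e9        # assumed global cost of the pandemic per day ($)
days_saved = 30           # assumed acceleration from faster approval
p_failure = 0.10          # assumed chance the accelerated rollout backfires
failure_cost = 1_000e9    # assumed cost of a failed rollout ($)

expected_gain = (1 - p_failure) * daily_cost * days_saved
expected_loss = p_failure * failure_cost
print(f"Expected gain: ${expected_gain / 1e12:.2f} trillion")  # $5.40 trillion
print(f"Expected loss: ${expected_loss / 1e12:.2f} trillion")  # $0.10 trillion
```

Even with a generous allowance for downside, the expected gain dwarfs the expected loss; you would need wildly pessimistic assumptions to make “wait and see” the better bet.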

Surveys of doctors, epidemiologists, and virologists revealed a high level of confidence in the United States’ vaccines long before their approval, but even limited use to save the highest-risk populations was expressly prohibited by Federal law.

Unfortunately, this was not the approach of the FDA or the medical establishment, which, either through habit or irrationality, generally stuck to a “steady as she goes” mentality when evaluating the innovative treatments that have helped suppress the pandemic in recent months.

A positively skewed distribution: The rare, extreme outcomes are the good ones.

The FDA strenuously refused to allow challenge trials for COVID-19 vaccines and took months to approve the vaccines after their Phase 3 trials concluded. Surveys of doctors, epidemiologists, and virologists revealed a high level of confidence in the United States’ vaccines long before their approval, but even limited use to protect the highest-risk populations was expressly prohibited by federal law. This level of “caution” has not been unique to the United States; regulators in India have not yet approved any American vaccines, despite their obvious effectiveness, because an insufficient number of trials were conducted in India.

There’s a very reasonable argument that regulators’ excessive caution was a fundamentally political exercise: The point of the delay was to build public confidence in the vaccines’ efficacy. But no health regulator has ever publicly made this argument, so it’s difficult to know whether it’s the true explanation for the excessive caution. In any event, challenge trials were (publicly) dismissed on ethical grounds — the FDA argued that adults could not reasonably consent to contracting COVID-19 after being given the vaccines.

It’s possible that the authorities made the right call — a premature rollout might have undermined confidence in the vaccines, resulting in even greater vaccine hesitancy. On the other hand, unnecessarily delaying critical vaccines for political reasons seems hard to justify. I’m not sure. But at the very least, I think this is a useful analytical framework for evaluating whether the FDA made the right call in delaying the vaccine rollout — and whether it is continuing to make the right call as it delays approving booster shots and granting full approval to the original vaccines.

Conclusion

At this point, I hope I’ve convinced you of a couple of things:

1) Type II Errors, also known as False Negatives, are in many circumstances just as bad as Type I Errors (False Positives), and often worse.

2) Humans have two fundamental cognitive biases: We are loss-averse, and we discount opportunity costs.

3) These biases push us to minimize downside rather than maximize net gain, resulting in a structural tendency toward Type II errors.

4) These observations matter because this bias toward Type II Errors shows up all the time in society and imposes very high costs.

If I haven’t convinced you, well, I’ve been working on this article for a month, mostly while sitting in airport terminals, and I’m really tired. I’m throwing the towel in.

So, now that we know we’re all probably making too many of this type of mistake, how can we make fewer Type II errors? There are no easy answers to this, but the most compelling solution I’ve heard is from Mark Zuckerberg, the founder of Facebook, who famously exhorted his team to “Move fast and break things” rather than spend lots of time getting to the perfect answer. A bias toward action isn’t just a Silicon Valley bro-ism. It’s practically necessary. Sometimes, we’ve got to realize, “No” is riskier than “Yes.”


Arun Solanky

Management consultant at BCG. I write about philosophy, politics, and business.