Every day you make decisions. Sometimes you can predict the outcome. Sometimes not.
Sometimes your decisions are binary—this or that—both with positive outcomes. You’re choosing between two wins. Other times, you need to decide to take action–or do nothing. Even the decision not to decide is still a decision.
Other decisions are complicated. Like this: If I need to go downtown, should I take the most direct route and risk getting stuck in construction traffic, or take the slightly longer route with no risk of delay? There are varying levels of risk and uncertainty.
Still other decisions not only involve risk and uncertainty, but they come with very high stakes. Should you take this job? Should you buy this house? Should you have kids? Where will they go to school? Should you take this medical treatment or that one?
Most of the decisions you make force you to weigh possibilities against each other:
- Should I make a risky investment, or make a low-risk investment?
- If I need to go downtown, should I take the most direct route and risk getting stuck in construction traffic, or take the slightly longer route with no risk of delay?
- Should I make coffee before I take a shower or after?
- If I hire all these new employees, how sure am I that I’ll be able to pay for them?
- If I don’t hire all these new employees, am I missing my one chance to grow my business and carve out extra market share?
- Do I want to listen to an audiobook or a podcast while I drive?
- If I pass this guy, can I get back in the right lane in time to exit?
- Is it worth paying a little extra for the free cancellation option on this hotel room?
- Should I thaw the steaks, even though there’s a chance I won’t be home early enough to make them for dinner?
- How high does the Powerball jackpot need to get before I buy a ticket?
In this post, we’re going to explore the mechanics of the decisions you make when you face uncertain outcomes.
Before we go any further, let’s clarify what this post is not. It’s not a how-to guide. It’s not a 5-part process.
Instead, what you’ll find here is an exploration into how you already (right now!) make decisions when you face uncertain outcomes. It’s a look in the mirror at your habits and assumptions. It’s descriptive, not prescriptive.
It’s also a description of how everyone else makes decisions—your clients, your competitors, your students, and That Guy In Front Of You Who Will Not Keep Right Except To Pass. It’s great to understand how you make decisions. But it’s even better to understand how other people make decisions. You’ll learn why humans—all humans—decide the way they do.
We’re going to cover this in two parts.
In the first part, you’ll learn how you approach a decision. It’s what you do before you decide—sometimes knowingly, other times below the level of your conscious awareness.
In the second part, we’ll explore how you actually make a decision when you face uncertain outcomes: how you weigh losses and gains, how you evaluate risk, why you like outcomes that are certain, and what you do when you encounter extreme outcomes–big wins or big losses–with low probabilities.
Here’s our roadmap:
Part One: How you approach a decision.
- How you frame the reference point
- How you compare the reference point on a subjective scale
Part Two: How you place a value on the deviation from the reference point
- Losses are more likely to influence your behavior than equivalent gains.
- You avoid risk when you stand to gain, but you seek risk when you stand to lose.
- The certainty effect: Outcomes that are certain are more likely to influence your behavior than outcomes that are risky
- An exception: Losses, gains, and risks for low probability, extreme outcomes
Let’s dive in.
A quick note: what you’re about to read here is highly indebted to the work of the psychologists Amos Tversky and Daniel Kahneman, who spent years researching how people make decisions when they’re faced with uncertain outcomes. The framework they developed is called Prospect Theory.
How you approach a decision
You might think the first step in deciding is taking an initial look at your options. What are the variables? What are the possible outcomes? What could go wrong? What could go right?
But this isn’t the case.
Let’s back up a little and explore what you bring to a decision before you even begin the process of deciding.
At the most basic level, a decision involves a comparison between two things: a reference point, and something you’re comparing the reference point to–a variable.
How you frame the reference point
Reference points are framed. There is no canonical, transcendent, fixed reference point. When people are asked if a hate group can hold a political rally, 85% say yes when the question is prefaced with “Given the importance of free speech.” But when you start the question with “Given the risk of violence,” only 45% say yes.[1]
What about government spending? The left tends to be more accepting of it, the right less so. When people are asked whether the government spends too little on “welfare,” only about 20% say yes. Meanwhile, 65% of Americans say not enough is spent on “assistance to the poor.” How do we reconcile such different answers to what is essentially the same question? It’s all in how the question is framed.[2]
Risky choice frames
Reference points can cause you to reverse your choice, especially when you face a risky outcome.
Here’s an example:
Problem 1:
Suppose there’s an outbreak of a rare disease, which is expected to kill 600 people. You’re in charge of crafting the government’s response.
You have two options:
- Option A will result in 200 people saved.
- Option B will result in a one-third probability that 600 people will be saved, and a two-thirds probability nobody will be saved.
This is a difficult decision. How would you respond? In a survey conducted by Tversky and Kahneman, 72% of respondents chose Option A, which guaranteed that 200 people would be saved, and 28% chose Option B.[3]
There’s something else interesting in these results: the expected outcome of both options is identical:
| | Outcome | Probability | Expected Outcome | Survey Results |
|---|---|---|---|---|
| Option A | 200 people saved | 100% | 200 × 1 = 200 saved | 72% |
| Option B | 600 people saved | 1/3 (33.3%) | 600 × 1/3 = 200 saved | 28% |
In a follow-up question, Tversky and Kahneman posed the same problem, but changed how the decision was framed: they presented the options in terms of lives lost instead of lives saved:
Problem 2:
Suppose there’s an outbreak of a rare disease, which is expected to kill 600 people. You’re in charge of crafting the government’s response.
- Option C will result in the certain death of 400 people.
- Option D will result in a one-third probability that nobody dies and a two-thirds probability that everyone dies.
How would you respond to these options?
In this scenario, only 22% chose Option C, guaranteeing the death of 400 people, while 78% chose Option D, gambling on the one-third chance that nobody dies.
Again, the two options are equivalent in expectation: 400 certain deaths equals the expected death toll of a two-thirds probability that all 600 die.
| | Outcome | Probability | Expected Outcome | Survey Results |
|---|---|---|---|---|
| Option C | 400 people die | 100% | 400 × 1 = 400 die | 22% |
| Option D | 600 people die | 2/3 (66.7%) | 600 × 2/3 = 400 die | 78% |
The only difference between the two problems is the framing: the first was framed in terms of lives saved, the second in terms of lives lost. Every option has the same expected outcome, yet people responded very differently when faced with a loss instead of a gain.
| | Outcome | Probability | Expected Outcome | Frame | Survey Results |
|---|---|---|---|---|---|
| Option A | 200 people saved | 100% | 200 × 1 = 200 saved and 400 die | Lives saved | 72% |
| Option B | 600 people saved | 1/3 (33.3%) | 600 × 1/3 = 200 saved and 400 die | Lives saved | 28% |
| Option C | 400 people die | 100% | 400 × 1 = 400 die and 200 saved | Lives lost | 22% |
| Option D | 600 people die | 2/3 (66.7%) | 600 × 2/3 = 400 die and 200 saved | Lives lost | 78% |
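If you want to check that arithmetic, here’s a quick sketch in Python. The option labels match the tables above, and the outcome lists restate each option in terms of lives saved; the same check works for the financial-planner problem below, swapping dollars for lives.

```python
# Expected outcome = sum of probability x result over every possible result.
# Each option is a list of (probability, lives_saved) pairs.
options = {
    "Option A (200 saved for sure)":           [(1.0, 200)],
    "Option B (1/3 chance all 600 are saved)": [(1/3, 600), (2/3, 0)],
    "Option C (400 die for sure)":             [(1.0, 200)],
    "Option D (1/3 chance nobody dies)":       [(1/3, 600), (2/3, 0)],
}

def expected_value(outcomes):
    """Probability-weighted average of the results."""
    return sum(p * saved for p, saved in outcomes)

for label, outcomes in options.items():
    ev = expected_value(outcomes)
    print(f"{label}: expected {ev:.0f} saved, {600 - ev:.0f} lost")
# All four options work out to 200 saved and 400 lost; only the framing differs.
```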
Financial planners behave the same way. Two researchers asked financial planners to advise an imaginary client during an economic downturn. The client had $6,000 invested, and the planners were tasked with preserving as much of it as possible while the economy tanked.
Financial planners were presented with two investment strategies:
Problem 3:
- Option A: invest in a way that guarantees $2,000 will be salvaged.
- Option B: invest in a way that entails a one-third chance that all $6,000 will be saved, but a two-thirds chance nothing will be saved.
Even though both options have the same expected outcome, 56% of the financial planners opted for Option A, salvaging a guaranteed $2,000, while 44% chose Option B and took the two-thirds chance that nothing would be saved.[4]
But when the researchers framed the outcome of each strategy as a loss instead of a gain, the planners gave the opposite answers.
Problem 4:
- Option C: invest in a way that guarantees a $4,000 loss.
- Option D: invest in a way that results in a one-third probability you won’t lose anything, but a two-thirds probability you’ll lose everything.
Once again, even though both options have the same expected outcome, only 29% of financial planners chose Option C, the certain $4,000 loss, while 71% took the risk in hopes of avoiding the loss.
| | Outcome | Probability | Expected Outcome | Frame | Survey Results |
|---|---|---|---|---|---|
| Option A | $2,000 saved | 100% | $2,000 × 1 = $2,000 saved and $4,000 lost | Money saved | 56% |
| Option B | $6,000 saved | 1/3 (33.3%) | $6,000 × 1/3 = $2,000 saved and $4,000 lost | Money saved | 44% |
| Option C | $4,000 lost | 100% | $4,000 × 1 = $4,000 lost and $2,000 saved | Money lost | 29% |
| Option D | $6,000 lost | 2/3 (66.7%) | $6,000 × 2/3 = $4,000 lost and $2,000 saved | Money lost | 71% |
As you can see, you’ll make a very different (and possibly irrational) decision depending on whether a choice is framed as a loss or a gain.
Attribute frames
An attribute frame reverses one of the properties of your reference point. The question about whether the glass is half full or half empty falls into this category. It’s the same glass and the same volume of liquid but a different attribute: full or empty.
You’re forced to make decisions where the reference point is framed this way all the time. What do you do when a medical treatment is described as 50% successful instead of 50% unsuccessful? When people were asked to make a treatment recommendation for a family member, they showed 11.6% more preference for success-framed treatments than for failure-framed treatments—even though doctors were describing exactly the same treatment with exactly the same risks.[5]
Other studies have found similar results even when the odds aren’t 50/50. For example, if a doctor says a treatment option is successful 80% of the time, people are far more likely to take it than if the doctor says the same treatment option fails 20% of the time.[6]
The way doctors frame treatment options changes the decisions their patients make—with big consequences for their future health and, in some cases, their life.
Marketers use the framing effect, too. For example, if you’re shopping for groceries, you’ll see ground beef described as 75% lean, but you would never see beef described as 25% fat. Both labels could describe the same product, but one frames the product positively, while the other frames it negatively.
Framing even affects your perception of taste. In a taste test, people ranked 75% lean ground beef a 4.67 on a scale of 1 to 5 but ranked 25% fat a 3.57.[7]
| Frame | Taste test results (1-5) |
|---|---|
| 75% lean ground beef | 4.67 |
| 25% fat ground beef | 3.57 |
Attribute framing can cause people to tolerate immoral behavior. In one study, students believed cheating happened more regularly after learning 65% of their fellow classmates cheated, compared to the students who learned 35% of their classmates had never cheated. Of course, these two numbers describe the same thing: if 65% cheat, then 35% don’t. When a neutral statistic is framed in a different way, you’re more likely to accept certain behaviors as normative, which affects the likelihood you’ll engage in those behaviors.[8]
Goal frames
We’ve already seen that framing a reference point in terms of gains and losses affects behavior, and we’ve seen that framing a reference point based on its attributes affects behavior. A third way of framing a reference point involves outcomes. Instead of focusing on the attribute, it focuses on the consequences of a decision.
Whenever you use a credit card instead of cash, your decisions are being affected by goal framing.
If you’re a credit card company, you can get people to use credit cards in a couple of ways.
One way is to show the negative consequences of not using a card. You can show that paying with cash is inconvenient. You’ll need to go to the ATM. You’ll need to plan ahead. Your cash might get stolen.
The other way is to show the positive outcomes of paying with a credit card. It’s easy. It’s safe. You’re protected.
In a study of 246 people, 54.8% of those who saw marketing materials describing the negative consequences of not using a card went on to use their credit card. But among people who saw the positive outcomes of paying with a card, only 23.6% used their credit card, and they spent half as much.[9] Framing the outcome negatively was far more effective at driving credit card use than framing it positively.
| Marketing message framing | Percentage of people who used a credit card |
|---|---|
| Negative | 54.8% |
| Positive | 23.6% |
Let’s review where we are so far. Before we even begin the process of making a decision, we start with a reference point—a baseline against which we compare something else. And this reference point is shaped by subjective experience, specifically framing. We’ve seen three kinds of framing effects: risky choice framing, attribute framing, and goal framing.
To summarize: the decisions you make are often influenced more by how the reference point is framed than by the actual content of the options you’re deciding between.
Next, let’s look at how we compare the reference point to the other option we’re deciding between.
How you compare the reference point on a subjective scale
Just as there isn’t a transcendent, absolute reference point, there also isn’t a transcendent, absolute scale of magnitude.
If you think about it, this makes sense. When you eat a piece of cheesecake, the first bite tastes better than the tenth bite. Even though the quantitative difference between not eating cheesecake and taking the first bite is identical to the quantitative difference between the ninth bite and the tenth bite, the experience–the qualitative difference–isn’t. With each successive bite, you become more desensitized to the effect.
Many studies have shown that people are willing to pay more to reduce the number of bullets in a revolver from 1 to 0 than from 6 to 5. Even though the quantitative difference between 1 and 0 and between 6 and 5 is identical, if you’re playing Russian roulette, the two differences aren’t weighted identically.[10]
This is the same reason losing $100 feels different depending on how much money you start with. Losing $100 hurts more if you’re starting with $100 instead of $1,100 or $1,000,100. The difference between $100 and $0 is the same as the difference between $1,000,100 and $1,000,000. But the quantitatively identical differences are weighted differently.[11]
These kinds of subjective magnitude scales—where quantitatively identical differences are given different weighting—are the scales we use to compare a reference point to a variable. We evaluate options against each other not by determining the transcendent, absolute, quantitative differences between the two, but by sorting through the subjective weightings of those quantities.
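To make the idea of a subjective scale concrete, here’s a small sketch. It uses a square-root curve purely as an illustrative stand-in for diminishing sensitivity (it isn’t the specific curve Tversky and Kahneman estimated) to show how the same $100 difference shrinks as the amounts grow.

```python
import math

def subjective_value(amount):
    # Any concave curve illustrates diminishing sensitivity;
    # the square root is just a convenient stand-in.
    return math.sqrt(amount)

pairs = [(0, 100), (1_000, 1_100), (1_000_000, 1_000_100)]
for low, high in pairs:
    felt_gap = subjective_value(high) - subjective_value(low)
    print(f"${low:,} vs ${high:,}: objective gap $100, subjective gap {felt_gap:.2f}")
# The identical $100 gap "feels" like 10.00 starting from $0,
# 1.54 starting from $1,000, and only 0.05 starting from $1,000,000.
```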
In fact, these weightings can be so strong they can lead us to see differences where none exist. It might not look like it, but squares A and B are the same color:
When making a decision, you shouldn’t assume you can accurately judge similarities and differences between two options. The scale you use to compare those options isn’t objective or absolute. It’s subject to your perception, and your perception isn’t always accurate.
How you place a value on the deviation from the reference point
So far, we’ve looked at framing, and we’ve looked at how you apply a subjective weighting to the scale you use to compare two options. With this background in mind, let’s now look at the process you use to make a decision.
Losses are more likely to influence your behavior than equivalent gains
We’ve already seen how losses and gains can affect your decisions. But why?
You already know you hate losing. In fact, you feel the effect of a loss a little more than twice as much as an equivalent gain. To put it another way, losing $50 feels as bad as winning $100 feels good.
This is part of the reason people aren’t willing to sell something for less than they bought it for. Selling feels like the loss of an object; buying feels like a gain. In one study, researchers split a group of people in two. One half was given mugs and became potential sellers. The other half didn’t get the mugs, so they became buyers. The sellers were told to indicate the lowest price they would sell the mugs for, and the buyers were told to indicate the highest price they would pay. Only one-sixth of the mug owners were willing to sell their mugs–mugs they had received just a few moments earlier.
In another version of the experiment, the researchers left the price tag on the mugs, so everyone–buyers and sellers–knew the mugs were worth $3.98. It didn’t matter. Even when everyone knew the real value, the owners were not able to find buyers. Parting with the mugs was a loss the mug owners were unwilling to accept.[12]
Other studies have shown similar behavior. Researchers found people were willing to pay $1.28 for a lottery ticket, but as soon as they held the ticket, they wouldn’t sell it for less than $5.18.[13] Hunters were willing to pay $31 for a hunting license, but wouldn’t sell the same license to someone else for less than $138.[14] People were given bottles of wine as a thank you gift and given the option to trade bottles with someone else. Most people wouldn’t trade.[15] People are unwilling to part with a possession even when they would be fairly compensated for it. They often need to be offered at least twice the original value before the gain makes up for the loss of that possession.
A change in reference point changes a gain into a loss
It’s worth noting that a loss is only a loss compared to something else and a gain is only a gain compared to something else. That something else is, of course, the reference point. And reference points are surprisingly easy to move around. It’s not hard for marketers, doctors, negotiators, salespeople, or anyone else to move a reference point–and affect the decision you’re about to make. Once a decision is framed as a loss instead of a gain, you behave very differently.
Three marketing professors at the University of Southern California conducted an experiment on purchase behavior when customers were asked to customize a car. In some cases, customers started with a fully loaded model and were required to subtract features in order to get the product they wanted–a loss. In other cases, customers started with a base model and added features to reach their final product–a gain.
As you might expect, customers who started with a fully loaded model spent more on average than customers who started with a base model. Customers who started with a base model and added features spent $13,651.43 on average, while customers who started with a fully loaded model and removed features spent $14,470.63 on average. When salespeople changed the reference point from a gain to a loss, they got customers to spend an additional 6% on their new car.[16]
| Starting point | Frame | Average price |
|---|---|---|
| Base model | Gain | $13,651.43 |
| Fully loaded model | Loss | $14,470.63 |
A company I once worked for had a somewhat unusual health insurance policy. The company provided health insurance for all employees, but not their families, and this generated complaints from time to time. The CEO’s response to these complaints made good, rational sense. The reasoning went something like this:
- Some people don’t have families. Other people have big families. And others have spouses whose health insurance is covered by another employer. Offering health insurance for all family members is an unfair form of compensation.
- Because health insurance is a form of compensation, the CEO preferred to compensate in cash and let each employee pay for health insurance separately.
The logic was sound. (Plus, my family’s health insurance was covered by my wife’s employer; I preferred the extra cash.)
Yet employees continued to be critical.
Why?
Because the absence of health insurance felt like a loss. Nobody thought of the extra cash as a gain–and even if they had, because losses hurt twice as much as their corresponding gain, the cash compensation would have needed to be twice as much for the complaints to go away. Or: taking away a $5,000 health insurance policy to cover a spouse requires cash compensation of $10,000 for employees to feel good about it.
Savvy marketers use loss aversion, too. When you compare two prices in terms of loss, you behave differently than if you compare them in terms of gain.
When credit cards came into widespread use in the 1970s, many stores were tempted to charge credit card customers higher prices to cover the processing fees–a surcharge. But credit card companies banned this practice. They didn’t want customers to associate a credit card transaction with an extra cost. So, as a condition of being able to process credit cards, stores were required to charge the same amount to both cash and credit card customers, effectively raising prices for everyone.
Was this fair? Congress didn’t think so and considered passing a bill to outlaw the practice. Eventually, the credit card companies bowed to public pressure, but they extracted an important concession from lawmakers: they would accept a bill only on the condition that any difference in price between a cash transaction and a credit card transaction be described as a discount, not a surcharge.
In his description of the Senate hearings, Richard Thaler notes:
In his testimony before the Senate Committee on Banking, Housing, and Urban Affairs, Jeffrey Bucher of the Federal Reserve Board argued that surcharges and discounts should be treated the same way. However, he reported that ‘critics argued that a surcharge carries the connotation of a penalty on credit card users while a discount is viewed as a bonus to cash customers. They contended that this difference in psychological impact makes it more likely that surcharge systems will discourage customers from using credit cards.’[17]
Consumers view a surcharge as a loss, but they view a cash discount as a gain–even though the two prices, and the difference between them, are identical. A surcharge (a loss) hurts more than the equivalent discount (a gain), so a surcharge would discourage credit card use roughly twice as strongly as a discount would encourage cash use.
You avoid risk when you stand to gain, but you seek risk when you stand to lose
We’ve explored a variety of instances where a loss exerts a more powerful influence on your decisions than an equivalent gain.
But what about the prospect of a loss when you’re faced with a risky decision? What about when the outcomes are uncertain?
To start, let’s use the most basic form of a risky decision: a coin toss.
Problem 5:
A coin is tossed.
- Heads: you win $10
- Tails: you lose $10
Would you take the bet?
Most people wouldn’t.
It shouldn’t be a surprise why: losses hurt more than equivalent gains. Even though the odds are 50/50, losing $10 feels about twice as bad as winning $10 feels good, so most people pass.
What if you could win $20 but only lose $10?
Problem 6:
A coin is tossed.
- Heads: you win $20
- Tails: you lose $10
Would you take the bet?
Still, most people wouldn’t.
In fact, most people won’t take this bet until the potential winnings approach $25. Because a potential loss hurts about twice as much as an equivalent gain feels good, the bet only starts to feel attractive once the payoff is better than 2-to-1–however irrational it is to turn down smaller bets that are clearly favorable on paper.[18]
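Here’s a rough sketch of that math. It assumes a bare-bones loss-averse value function (gains count at face value, losses count double), which is a simplification for illustration rather than the exact function estimated in the paper cited below.

```python
LOSS_AVERSION = 2.0  # assumption: a loss "feels" about twice as big as an equal gain

def felt_value_of_coin_flip(win_amount, lose_amount):
    # 50/50 gamble: weight the possible gain at face value, the possible loss at double.
    return 0.5 * win_amount - 0.5 * LOSS_AVERSION * lose_amount

for win in (10, 15, 20, 25):
    value = felt_value_of_coin_flip(win, 10)
    verdict = "feels worth taking" if value > 0 else "feels like a bad bet"
    print(f"Win ${win} / lose $10: felt value {value:+.1f} -> {verdict}")
# The flip doesn't feel positive until the win climbs past $20,
# roughly the better-than-2-to-1 threshold described above.
```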
Let’s return to our example of the disease outbreak. Recall that a rare disease is expected to kill 600 people, and you are tasked with crafting the response.
When people are required to choose between a plan that saves 200 people for sure and a plan with a one-third probability of saving all 600, 72% of people choose to save 200 people for sure and 28% take the risk.
But when people are required to choose between a plan where 400 people are guaranteed to die and a plan with a two-thirds probability that everyone dies, 22% choose the plan resulting in certain death for 400 people and 78% take the chance.[19]
| | Scenario | Expected outcome (lives) | Frame | Survey Results | Majority behavior |
|---|---|---|---|---|---|
| Option A | 200 people saved for sure | 200 saved, 400 die | Gain | 72% | Risk averse |
| Option B | 1/3 chance 600 saved, 2/3 chance nobody saved | 1/3 × 600 = 200 saved, 400 die | Gain | 28% | Risk averse |
| Option C | 400 people die for sure | 200 saved, 400 die | Loss | 22% | Risk seeking |
| Option D | 1/3 chance nobody dies, 2/3 chance all 600 die | 2/3 × 600 = 400 die, 200 saved | Loss | 78% | Risk seeking |
Framing the same outcome as a loss or a gain affects your willingness to take a risk.
When you face a possible gain, you’re risk averse: you lock in the sure thing. When you face a possible loss, you’re risk seeking: you gamble on the chance of avoiding the loss altogether.
To illustrate this, Kahneman and Tversky presented people with a series of problems, each offering two options with the same expected outcome.
They asked people to make a decision that involved varying levels of risk and a positive outcome:
Problem 7:
Would you prefer:
- Option A: a 90% chance of getting $3,000, or
- Option B: a 45% chance of getting $6,000?
Most people preferred the 90% chance of getting $3,000 over the 45% chance of getting $6,000, even though the expected outcome is the same for both.
| | Scenario | Probability | Expected Outcome | Frame | Survey Results |
|---|---|---|---|---|---|
| Option A | Get $3,000 | 90% | 3,000 × 0.90 = 2,700 | Gain | 86% |
| Option B | Get $6,000 | 45% | 6,000 × 0.45 = 2,700 | Gain | 14% |
Then, they asked the same question, but this time, the outcome was framed as a loss. How would this affect the level of risk people were willing to take?
Problem 8:
Would you prefer:
- Option C: a 90% chance of losing $3,000, or
- Option D: a 45% chance of losing $6,000?
Once again, the expected outcome is identical for both options. But the preferences reverse: most people choose the 45% chance of losing $6,000 over the 90% chance of losing $3,000. Faced with a loss, people gamble on the chance of escaping it entirely, even though the potential loss is larger.[20]
| | Scenario | Probability | Expected Outcome | Frame | Survey Results |
|---|---|---|---|---|---|
| Option C | Lose $3,000 | 90% | -3,000 × 0.90 = -2,700 | Loss | 8% |
| Option D | Lose $6,000 | 45% | -6,000 × 0.45 = -2,700 | Loss | 92% |
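One way to see why the preferences flip is to run both problems through a prospect-theory-style value function: concave for gains, convex and steeper for losses. The functional form and parameters below (v(x) = x^0.88 for gains, v(x) = -2.25 * |x|^0.88 for losses) are commonly cited estimates from Tversky and Kahneman’s later work, used here only as an illustration; this sketch also ignores probability weighting.

```python
ALPHA = 0.88           # curvature: diminishing sensitivity to larger amounts
LOSS_AVERSION = 2.25   # losses weigh roughly twice as much as gains

def value(x):
    """Subjective value of a gain or loss relative to the reference point."""
    if x >= 0:
        return x ** ALPHA
    return -LOSS_AVERSION * (abs(x) ** ALPHA)

def felt_value(probability, amount):
    # Simplified: probability x subjective value, with no probability weighting.
    return probability * value(amount)

gambles = {
    "A: 90% chance of +$3,000": (0.90, 3_000),
    "B: 45% chance of +$6,000": (0.45, 6_000),
    "C: 90% chance of -$3,000": (0.90, -3_000),
    "D: 45% chance of -$6,000": (0.45, -6_000),
}
for label, (p, x) in gambles.items():
    print(f"{label}: felt value {felt_value(p, x):+.0f}")
# A (+1033) beats B (+951): with gains, the safer option feels better.
# D (-2139) beats C (-2324): with losses, the long-shot gamble feels less bad.
```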
Think about the decisions you make that involve risk. How do you normally respond?
Faulty hiring processes and risk aversion
Let’s say you’re hiring for a position at your company. No hiring process is perfect, and you can’t predict exactly how each candidate might perform. There’s risk involved.
You and your colleagues bring in a group of candidates for an initial round of interviews. After the first interviews, you must decide who will remain in the hiring process and who won’t.
If your follow-up discussion is about who should be selected for a callback, you are framing the decision in terms of a gain. As we’ve seen, when a risky decision is framed in terms of a gain, you’re risk averse: you play it safe and advance only the surest bets.
But if your follow-up discussion is about which candidates should be rejected, you are framing the decision in terms of a loss. And when a risky decision is framed in terms of a loss, you’re risk seeking: you’re more willing to hang on to a candidate who might not work out.
Vandra Huber and her colleagues have shown exactly this: fewer candidates make it to the second round when the post-interview discussion focuses on who gets selected (a gain) than when it focuses on who gets rejected (a loss).[21]
Contracts framed in terms of losses or gains for risky outcomes
In another study, 170 fleet managers from the National Association of Fleet Administrators were asked to evaluate maintenance proposals from two providers.
There was a catch. There was a possible merger with another company, which would increase the fleet size. But these managers didn’t know how likely the merger would be.
One maintenance provider offered a flat rate of $375 per vehicle for between 125 and 340 vehicles.
The other maintenance provider offered a variable rate: $400 per vehicle for between 125 and 260 vehicles, $350 for between 261 and 341 vehicles.
In other words, the fleet managers were comparing the $375 rate (the reference point) against the possibility of a $350 rate (a savings of $25 per vehicle–a gain) or a $400 rate (a loss of $25 per vehicle–a loss).
When the second maintenance provider pitched the contract in terms of a possible gain–saving $25 per vehicle–the fleet managers chose the first provider’s flat rate. Facing a gain, they were risk averse.
But when the second maintenance provider pitched the contract in terms of a loss–losing $25 per vehicle–the fleet managers chose the second provider’s variable rate. Facing a loss, they were risk seeking.[22]
In fact, losses, gains, and risk are inherent in most forms of negotiation. One party is about to lose something, which means that for them the negotiation is framed as a loss, and that makes them risk seeking: they’re more likely to hold out and take a chance rather than accept a certain loss. Meanwhile, the other party frames the negotiation around what they stand to gain, which makes them risk averse: they’re more likely to settle for a sure deal.[23]
The certainty effect: Outcomes that are certain are more likely to influence your behavior than outcomes that are risky
You prefer certainty to risk.
When one option carries risk and the other doesn’t, you’ll usually take the no-risk option, even when the risky option offers a better expected outcome.
This is called the certainty effect.[24]
Problem 9
Choose between:
- Option A: Get $4,000 with an 80% probability
- Option B: Get $3,000 for sure
Here’s how people respond:
| | Scenario | Expected Outcome | Survey Results |
|---|---|---|---|
| Option A | Get $4,000 with 80% probability | 4,000 × 0.80 = 3,200 | 20% |
| Option B | Get $3,000 for sure | 3,000 × 1 = 3,000 | 80% |

Most people take the certain $3,000, even though the gamble offers the higher expected outcome ($3,200).
Problem 10
Choose between:
- Option A: Get $2,500 with a 33% probability, get $2,400 with a 66% probability, get nothing with a 1% probability
- Option B: Get $2,400 for sure
Now, most people choose Option B, the certain $2,400, even though Option A offers a slightly better expected outcome.
| | Scenario | Expected Outcome | Survey Results |
|---|---|---|---|
| Option A | Get $2,500 with a 33% probability, $2,400 with a 66% probability, nothing with a 1% probability | (2,500 × 0.33) + (2,400 × 0.66) + (0 × 0.01) = 825 + 1,584 + 0 = 2,409 | 18% |
| Option B | Get $2,400 for sure | 2,400 × 1 = 2,400 | 82% |
In both problems, the gamble offers the better expected outcome on paper. But when one of the options is certain, you take it, even if it is slightly worse. Outcomes that are predictable and certain—even if not optimal—are more likely to drive behavior.
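Prospect theory explains this with decision weights: probabilities short of 100% get discounted below their face value, while certainty does not. The sketch below combines the illustrative value function used earlier with a weighting-function form and parameters taken from Tversky and Kahneman’s later work; treat both as assumptions for illustration rather than the exact model in the 1979 paper.

```python
ALPHA, GAMMA = 0.88, 0.61   # illustrative parameter estimates for gains

def value(x):
    return x ** ALPHA

def weight(p):
    # Decision weight: overweights small probabilities, underweights
    # moderate-to-high ones, and leaves certainty alone (w(1) = 1).
    return p ** GAMMA / (p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA)

def felt_value(prospect):
    return sum(weight(p) * value(x) for p, x in prospect)

problems = {
    "9A: 80% chance of $4,000":                    [(0.80, 4_000)],
    "9B: $3,000 for sure":                         [(1.00, 3_000)],
    "10A: 33% of $2,500, 66% of $2,400, 1% of $0": [(0.33, 2_500), (0.66, 2_400), (0.01, 0)],
    "10B: $2,400 for sure":                        [(1.00, 2_400)],
}
for label, prospect in problems.items():
    print(f"{label}: felt value {felt_value(prospect):.0f}")
# 9B (1148) beats 9A (898), and 10B (943) beats 10A (807): the certain
# option wins even though its plain expected value is lower.
```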
Many people fall for the certainty effect on their daily commute. If you’re like me, your commute looks something like this:
- Option A: Take the freeway and arrive at work in 19 minutes
- Option B: Take the backroads and arrive at work in 17 minutes (on average), but go through 6 stoplights, which each have a small chance of a 1-minute wait for a red light.
If you do the math, the backroads could get me to work in as little as 14 minutes or as long as 20 minutes, depending on the timing of the lights. The freeway, however, carries no risk. Even though the freeway takes longer on average, I usually take it.
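For what it’s worth, here’s the back-of-the-envelope math on that commute, assuming (purely for illustration) a 50/50 chance that each light is red:

```python
FREEWAY_MINUTES = 19
BACKROADS_BASE = 14                    # best case: every light is green
LIGHTS, RED_DELAY, P_RED = 6, 1, 0.5   # assumed 50/50 chance of a red at each light

worst_case = BACKROADS_BASE + LIGHTS * RED_DELAY
average = BACKROADS_BASE + LIGHTS * P_RED * RED_DELAY
print(f"Backroads: best {BACKROADS_BASE}, worst {worst_case}, average {average:.0f} minutes")
print(f"Freeway: {FREEWAY_MINUTES} minutes every time")
# The backroads average 17 minutes against the freeway's certain 19,
# yet the certain 19 still wins for a lot of commuters.
```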
If you’re a marketer trying to get your customers to take an online survey, you can use the certainty effect to get more responses from your customers. Most surveys offer one of two kinds of rewards: a small gift at the end of the survey, or a chance to win a larger gift.
Take this example from Contently:
In exchange for your time, you will be entered to win a gift card. That’s a possible gain, but an uncertain one. Contently is probably seeing decent results from this email. The folks at Hubspot called this approach “actionable, explanative, and incentive-driven. Well done, Contently.”
But it could be better.
As we’ve seen, people prefer outcomes that are certain over outcomes that have any risk, even if the expected outcome for the risky decision is better.
Here’s an example from Buffer:
There’s no chance of a big gain, but there is a more certain chance of a small gain: “We’d love to help you hit your goals . . . by sharing the survey results, benchmarks, and takeaways with you.”
And here’s an example with an even more concrete, immediate, certain outcome. In exchange for taking this survey, you’ll get 20% off your next order. There’s zero risk and no waiting:
An exception: Losses, gains, and risks for low probability, extreme outcomes
We’ve already seen that people are risk averse for gains and risk seeking for losses.
There’s one exception: very low probability events.
Problem 11
Choose between:
- Option A: a 0.001 chance of getting $5,000
- Option B: get $5 for sure
In this scenario, the expected outcomes for both options are the same, but most people take the gamble to get $5,000.
| | Scenario | Probability | Expected Outcome | Survey Results |
|---|---|---|---|---|
| Option A | Get $5,000 | 0.1% | 5,000 × 0.001 = 5 | 72% |
| Option B | Get $5 for sure | 100% | 5 × 1 = 5 | 28% |
But for losses, the reverse happens:
Problem 12:
Choose between:
- Option C: a 0.001 chance of losing $5,000
- Option D: lose $5 for sure
When faced with a tiny chance of a large loss, people switch: most prefer the certain $5 loss.[25]
| | Scenario | Probability | Expected Outcome | Survey Results |
|---|---|---|---|---|
| Option C | Lose $5,000 | 0.1% | -5,000 × 0.001 = -5 | 17% |
| Option D | Lose $5 for sure | 100% | -5 × 1 = -5 | 83% |
These two scenarios explain why people both play the lottery and purchase insurance. Each involves a very small chance of an extreme outcome: a large gain (winning $1 million) or a large loss (a storm destroys your house).
When you buy a lottery ticket, you exchange a small amount of money you’re guaranteed to have (by not buying a ticket) for the very small chance of gaining a large amount of money instead.
And when you buy insurance, you accept a small, certain loss each month (the premium you pay) in exchange for avoiding the slight possibility of a major loss.
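The same decision-weight idea from the certainty-effect sketch explains the flip: tiny probabilities get overweighted. Reusing that illustrative machinery (again, the weighting and value-function parameters are estimates borrowed from Tversky and Kahneman’s later work, assumed here only for illustration):

```python
ALPHA, LAMBDA = 0.88, 2.25            # illustrative value-function parameters
GAMMA_GAIN, GAMMA_LOSS = 0.61, 0.69   # illustrative weighting parameters

def value(x):
    return x ** ALPHA if x >= 0 else -LAMBDA * (abs(x) ** ALPHA)

def weight(p, gamma):
    # Overweights tiny probabilities: w(0.001) comes out near 0.014, not 0.001.
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def felt(p, x):
    gamma = GAMMA_GAIN if x >= 0 else GAMMA_LOSS
    return weight(p, gamma) * value(x)

print(f"Lottery-style gamble (0.1% chance of +$5,000):  {felt(0.001, 5_000):+.0f}")
print(f"A sure +$5:                                     {felt(1.0, 5):+.0f}")
print(f"Disaster-style gamble (0.1% chance of -$5,000): {felt(0.001, -5_000):+.0f}")
print(f"A sure -$5 (the insurance premium):             {felt(1.0, -5):+.0f}")
# The overweighted 0.1% makes the long-shot gain feel better than a sure $5 (+26 vs +4)
# and the long-shot loss feel worse than a sure $5 loss (-34 vs -9): lottery and insurance.
```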
How to make decisions
When you face a decision with an uncertain outcome, you approach it by framing a reference point and then comparing that reference point to your alternative. This comparison is subject to your perception and isn’t based on a set of transcendent, absolute metrics.
Next, you place a value on the difference between your reference point and your variable. You give that difference greater weight when it is framed as a loss instead of a gain. You avoid risk when faced with possible gains, and you take risks when faced with possible losses. You also give greater weight to the option that offers a certain outcome—even if that outcome might be worse for you. Finally, for extreme, low-probability events—such as a tiny chance of a big windfall or a big loss—you do the opposite: you gamble on the windfall and insure against the loss.
The bottom line?
- You make irrational choices instead of rational choices.
- Your decisions are based on subjective experiences and immediate perceptions rather than objective comparison and thoughtful calculation.
- You’re easily swayed by shiny objects, such as certain outcomes and small chances of big gains or losses.
The most important thing?
Recognize it in yourself before someone else does.
[2] Rasinski, K. A. (1989). “The effect of question wording on public support for government spending.” Public Opinion Quarterly 3, 388-394.
[3] Tversky & Kahneman (1981) “The framing of decisions and the psychology of choice.” Science 211(30), 453-458.
[4] Roszkowski & Snelbecker, 1990. “Effects of ‘Framing’ on measures of risk tolerance: Financial planners are not immune.” Journal of Behavioral Economics 19(3), 237-246.
[5] Levin, I. P., Schnittjer, S. K., & Thee, S. L. (1988). “Information framing effects in social and personal decisions.” Journal of Experimental Social Psychology, 24(6), 520-529.
[6] Wilson, D. K., Kaplan, R. M., & Schneiderman, L. J. (1987). “Framing of decisions and selections of alternatives in health care.” Social Behaviour, 2(1), 51-59.
[7] Levin, I. and Gaeth, G. (1988). “How consumers are affected by the framing of attribute information before and after consuming the product.” Journal of Consumer Research 15(3) 374-378.
[8] Levin, I. P., Schnittjer, S. K., & Thee, S. L. (1988). “Information framing effects in social and personal decisions.” Journal of Experimental Social Psychology, 24(6), 520-529.
[9] Ganzach & Karsahi, 1995, “Message framing and buying behavior: A field experiment.” Journal of Business Research 32(1) 11-17.
[10] Kahneman, D. and Tversky, A. (1979), “Prospect theory: An analysis of decision under risk.” Econometrica 47(2), 263-292.
[11] Kahneman, D. and Tversky, A. (1979), “Prospect theory: An analysis of decision under risk.” Econometrica 47(2), 263-292.
[12] Kahneman, D. Knetsch, J., and Thaler, R. (1990). “Experimental Tests of the Endowment Effect and the Coase Theorem.” Journal of Political Economy 98(6) 1325-1348.
[13] Knetsch, J. & Sinden, J. (1984). “Willingness to pay the compensation demanded: Experimental evidence of an unexpected disparity in measures of value.” The Quarterly Journal of Economics 99(3), 507-521.
[14] Heberlein, T. and Bishop, R. (1985). “Assessing the validity of contingent valuation: Three field experiments.” Science of the Total Environment 56(15), 99-107.
[15] Van Dijk, E. and Knippenberg, D. (1998). “Trading wine: On the endowment effect, loss aversion, and the comparability of consumer goods.” Journal of Economic Psychology 19(4), 485-495
[16] Park, C., Jun, S., and Macinnis, J. (2000). “Choosing what I want versus rejecting what I do not want: An application of decision framing to product option choice decisions.” Journal of Marketing Research 37(2), 187-202.
[17] Thaler, R. H. (1980). “Towards a positive theory of consumer choice.” Journal of Economic Behavior and Organization, 1, 39-60.
[18] Kahneman, D. and Tversky, A. (1979). “Prospect theory: An analysis of decision under risk.” Econometrica 47(2), 263-292.
[19] Tversky, A. and Kahneman, D. (1981). “The framing of decisions and the psychology of choice.” Science 211(30), 453-458.
[20] Kahneman, D. and Tversky, A. (1979), “Prospect theory: An analysis of decision under risk.” Econometrica 47(2), 263-292.
[21] Huber, V. L., Neale, M. A., and Northcraft, G. B. (1987). “Decision bias and personnel selection strategies.” Organizational Behavior and Human Decision Processes 40, 136-147.
[22] Qualls, W. J., & Puto, C. P. (1989). “Organizational climate and decision framing: An integrated approach to analyzing industrial buying decisions.” Journal of Marketing Research, 26(2), 179-192.
[23] Neale, M. A., Huber, V. L., and Northcraft, G. B. (1986). “The framing of negotiations: Contextual versus task frames.” Organizational Behavior and Human Decision Processes 39(2), 228-241.
[24] Kahneman, D. and Tversky, A. (1979), “Prospect theory: An analysis of decision under risk.” Econometrica 47(2), 263-292.
[25] Kahneman, D. and Tversky, A. (1979), “Prospect theory: An analysis of decision under risk.” Econometrica 47(2), 263-292.