Every organization, whether small or big, early or late stage — and every individual, whether for themselves or others — makes countless decisions every day, under conditions of uncertainty. The question is, are we allowing that uncertainty to bubble to the surface, and if so, how much and when? Where do consensus, transparency, forecasting, backcasting, pre-mortems, and heck, even regret, usefully come in?

Going beyond the typical discussion of focusing on process vs. outcomes and probabilistic thinking, this episode of the a16z Podcast features Thinking in Bets author Annie Duke — one of the top poker players in the world (and World Series of Poker champ), former psychology PhD candidate, and founder of national decision education movement How I Decide — in conversation with Marc Andreessen and Sonal Chokshi. The episode covers everything from the role of narrative — hagiography or takedown? — to fighting (or embracing) evolution. How do we go from the bottom of the mountain to the top of the summit to the entire landscape… and up, down, and opposite?

The first step to understanding what really slows innovation down is understanding good decision-making — because we have conflicting interests, and are sometimes even competing against future versions of ourselves (or of our organizations). And there’s a set of possible futures that result from not making a decision as well. So why feel both pessimistic AND optimistic about all this?

Show Notes

  • Using a football thought experiment to distinguish skill and luck [0:58]
  • Balancing outcomes and process [9:49]
  • Asking the right questions, especially with a negative outcome [11:17]
  • Discussion of timing in forecasting [15:23], and other practical implications [16:59]
  • Why not making a decision is also a decision [23:40], and how to evaluate the options you didn’t take [30:15]
  • Discussion of how widely this type of decision-making will be adopted by the public [34:10]
  • How to communicate probabilistically [37:24] and how to build uncertainty into an organization [40:21]

Transcript

Sonal: Hi, everyone. Welcome to the “a16z Podcast.” I’m Sonal, and today Marc and I are doing another one of our book author episodes. We’re interviewing Annie Duke, who’s a professional poker player and World Series of Poker champ, and is the author of “Thinking in Bets,” which is just out in paperback today. The subtitle of the book is, “Making Smarter Decisions When You Don’t Have All the Facts,” which actually applies to startups and companies of all sizes and ages, quite frankly. I mean, basically, any business or new product line operating under conditions of great uncertainty — which I would argue is my definition of a startup and innovation.

Annie’s also working on her next book right now, and founded howidecide.org, which brings together various stakeholders to create a national movement around decision education, empowering students to be better decision makers. So, anyway, Marc and I interview her about all sorts of things in and beyond her book, going from investing, to business, to life. But Annie begins with a thought experiment, even though neither of us really knows that much about football.

Skill vs. luck

Annie: So what I’d love to do is, kind of, throw a thought experiment at you guys so that we can have a discussion about this. So I know you guys don’t know a lot about football, but this one’s pretty easy. You’re gonna be able to feel this one. I want you to do this thought experiment. Pete Carroll calls for Marshawn Lynch to actually run the ball.

Sonal: So we’re betting on someone who we know is really good?

Annie: Well, they’re all really good, but we’re betting on the play that everybody’s expecting.

Marc: Yeah, the default.

Annie: This is the default.

Marc: The assumed rational thing to do, right?

Annie: This is the assumed rational thing to do, right. So he has Russell Wilson hand it off to Marshawn Lynch. Marshawn Lynch goes to barrel through the line. He fails. Now they call the timeout — so now they stop the clock. They get another play now, and they hand the ball off to Marshawn Lynch — what everybody expects. Marshawn Lynch, again, attempts to get through that line and he fails. End of game, Patriots win. 

My question to you is, are the headlines the next day, “The Worst Call in Super Bowl History”? Is Cris Collinsworth saying, “I can’t believe the call, I can’t believe the call”? Or is he saying something more like, “That’s why the Patriots are so good. Their line is so great. That’s the Patriots’ line that we’ve come to see this whole season. This will seal Belichick’s place in history.” It would’ve all been about the Patriots.

So let’s, sort of, divide things into, like — we can either say the outcomes are due to skill or luck — and luck in this particular case is gonna be anything that has nothing to do with Pete Carroll. And we can agree that the Patriots’ line doesn’t have anything to do with Pete Carroll — Belichick doesn’t have anything to do with Pete Carroll — Tom Brady doesn’t have anything to do with Pete Carroll — as they’re sealing their fourth Super Bowl victory.

So what we can see is there’s two different routes to failure here. One route to failure, you get resulting. And basically what resulting is, is that retrospectively, once you have the outcome of a decision — once there’s a result — it’s really, really hard to work backwards from that single outcome to try to figure out what the decision quality is. This is just very hard for us to do. They say, “Oh my gosh, the outcome was so bad. This is clearly — I’m gonna put that right into the skill bucket. This is because of Pete Carroll’s own doing.” But in the other case, they’re like, “Oh, you know, there’s uncertainty. What could you do?” Weird, right?

Sonal: Yeah.

Annie: Okay, so you can kind of take that and you can say, “Aha, now we can, sort of, understand some things.” Like, for example, people have complained for a very long time that in the NFL they have been very, very slow to adopt what the analytics say that you should be adopting, right? And even though now we’ve got some movement on fourth-down calls, and when are you going for two-point conversions, and things like that, they’re still nowhere close to where they’re supposed to be, and why is that?

Marc: So they don’t make the plays corresponding to the statistical probabilities?

Annie: No. In fact, the analytics show that if you’re on your own one-yard line, and it’s fourth down, you should go for it no matter what. The reason for that is if you kick it, you’re only gonna be able to kick to midfield. So the other team is basically almost guaranteed three points anyway, so you’re supposed to just try to get the yards. Like, when have you ever seen a team on their own one-yard line on fourth down be like, “Yeah, let’s go for it.” That does not happen.

Okay, so we know that they’ve been super slow to do what the analytics say is correct, and so you sit here and you go, “Well, why is that?” And that thought experiment really tells you why, because we’re all human beings. We all understand that there are certain times when we don’t allow uncertainty to bubble up to the surface — that’s the explanation — and there are certain times when we do. And it seems to be that we do when we have this, kind of, consensus around the decision; there are other ways we get there, too. And so, okay, if I’m a human decision-maker, I’m gonna choose the path where I don’t get yelled at.

Sonal: Yeah, exactly.

Annie: So, basically, we can, kind of, walk back, and we can say, “Are we allowing the uncertainty to bubble to the surface?” and this is gonna be the first step to, kind of, understanding what really slows innovation down — what really slows adoption of what we might know is good decision making, because we have conflicting interests, right? Making the best decision for the long run, or making the best decision to keep us out of a room where we’re getting judged.

Marc: Yelled at, or possibly fired. So let me propose the framework that I use to think about this and see if you agree with it. So it’d be a two-by-two grid, and it’s consensus versus non-consensus, and it’s right versus wrong. And the way we think about it, at least in our business, is basically — consensus right is fine. Non-consensus right is fine. In fact, generally, you get called a genius. Consensus wrong is fine, because, you know, it’s just the same mistake everybody else made.

Sonal: You all agreed, right, it was wrong.

Marc: Non-consensus wrong is really bad.

Annie: Horrible.

Marc: It’s radioactively bad. And then as a consequence of that, and maybe this gets to the innovation stuff that you’ll be talking about — but as a consequence of that, there are only two scripts for talking about people operating in the non-consensus directions. One script is, they’re a genius because it went right — and the other is they’re a complete moron because it went wrong. Does that map?

Annie: That’s exactly it. That’s exactly right. And I think that the problem here is that, what does right and wrong mean? In your two-by-two, wrong and right is really just, did it turn out well or not?

Marc: Yeah, outcomes.

Sonal: Not the process.

Annie: And this is where we really get into this problem, because now what people are doing is they’re trying to swat the outcomes away. And they understand, just as you said, that on that consensus wrong, you will have a cloak of invisibility over you — like, you don’t have to deal with it. <Right.> So let’s think about other things besides consensus. So, consensus is one way to do that, especially when you have complicated cost-benefit analyses going into it. I don’t think that people, when they’re getting in a car, are actually doing any, kind of, calculation about what the cost-benefit analysis is to their own productivity, versus the danger of something very bad happening to them. It’s like, as a society, someone’s done this calculation, we’ve all, kind of, done this together — and so, therefore, getting in a car is totally fine. I’m gonna do that.

Marc: And nobody second-guesses anybody. If somebody dies in a car crash, you don’t say, “Wow, what a moron for getting in a car.”

Annie: No. Another way that we can get there is through transparency, if the decision is pretty transparent. Another way to get there is status quo. So a good status quo example that I like to give, because everybody can understand it, is — you have to get to a plane, and you’re with your significant other in the car, and you go the usual route.

Sonal: This is a common fight for every couple.

Annie: Yeah, so you go your usual route. Literally, this is the route that you’ve always gone and there is some sort of accident, there’s bad traffic, you miss the plane — and you’re mostly probably comforting each other in the car. It’s like, “What could we do?” You know, eh. But then you get in the car and you announce to your significant other, “I’ve got a great shortcut, so let’s take this shortcut to the airport.” And there’s the same accident, whatever — horrible traffic, you miss the flight. That’s like that status quo versus non-status quo decision.

Sonal: Right, you’re going against what’s familiar and comfortable.

Annie: Exactly. If we go back to the car example, when you look at what the reaction is to a pedestrian dying because of an autonomous vehicle, versus because of a human, we’re very, very harsh with the algorithms. For example, if you get in a car accident and you happen to hit a pedestrian, I can say something like, “Well, you know, Marc didn’t intend to do that.” Because I think that I understand — your mind is not such a black box to me. So I feel like I have some insight into what your decision might be, and so I’m more willing to let some of the uncertainty bubble up there. But if this black box algorithm makes the decision, now all of a sudden I’m like, “Get these cars off the road.”

Sonal: Never mind that the human mind is a black box itself ultimately, right?

Annie: Of course, but we have some sort of illusion that I understand, sort of, what’s going on in there, just like I have an illusion that I understand what’s going on in my own brain. And you can actually see this in some of the language around crashes on Wall Street, too. When you have a crash that comes from human beings selling, people say things like, “The market went down today.” When it’s algorithms, they say, “It’s a flash crash.” So now they’re, sort of, pointing out, like — this is clearly in the skill category. It’s the algorithm’s fault. We should really have a discussion about algorithmic trading and whether this should be allowed, when obviously the mechanism for the market going down is the same either way.

So now if we understand that, so exactly your matrix. Now we can say, “Well, okay, human beings understand what’s gonna get them in the room.” And pretty much anybody who’s, you know, living and breathing in the top levels of business at this point is gonna tell you, “Process, process, process. I don’t care about your outcomes — process, process, process.” But then the only time they ever have, like, an all-hands-on-deck meeting is when something goes wrong. Let’s say that you’re in a real estate investing group, and so you invest in a particular property based on your model, and the appraisal comes in 10% lower than what you expected. Like, everybody’s in a room, right? You’re all having a discussion. You’re all examining the model, you’re trying to figure it out. But what happens when the appraisal comes in 10% higher than expected? Is everyone in the room going, “What happened here?”

Outcomes vs. process

Marc: Now there is the obvious reality, which is, like, we don’t get paid in process, we get paid in outcomes. Poker players, you don’t get paid in process, you get paid in outcomes, and so there is a…

Sonal: Incentive alignment.

Marc: It’s not completely emotional. It’s also an actual — there’s a real component to it.

Annie: Yeah, so two things. One is, you have to make it very clear to the people who work for you that you understand that good outcomes will come from good process. That’s number one. And then number two, what you have to do is try to align the fact that, as human beings, we tend to be outcome driven — to what you want, in terms of getting an individual’s risk to align with the enterprise risk. Because otherwise you’re gonna get this CYA behavior. And the other thing is that we wanna understand if we have the right assessment of risk. So with the appraisal coming in 10% too high, it could be that your model is correct, it could be that you just got a tail result, but it certainly is a trigger for you to go look and say, “Was there risk in this decision that we didn’t know was there?” And it’s really important for deploying resources.

Sonal: I have a question about translating this to, say, a non-investing context. So in the example of Marc’s matrix, even if it’s a non-consensus wrong — you are staking money that you are responsible for. In most companies, people do not have that kind of skin in the game. <Right.> So how do you drive accountability in a process-driven environment — that the results actually do matter? You want people to be accountable, yet not overly focused on the outcome. Like, how do you calibrate that?

Annie: So let’s think about: how can we create balance across three dimensions that makes it so that the outcome you care about is the quality of the forecast? So first of all, obviously this demands that you have people making forecasts. You have to state in advance, “Here’s what I think. This is my model of the world. Here’s where I think all the pieces are gonna fall. So this is what I think.” So now you’ve stated that, and whether the outcome is “good or bad” is — how close are you to whatever that forecast is?

So now it’s not just like, oh, you won or you lost. It’s — was your forecast good? So that’s piece number one: make sure that you’re trying to be as even-handed across outcomes as you can, and focus more on forecast quality as opposed to what we would traditionally think of as outcome quality. So now the second piece is directional. So, when we have a bad outcome and everybody gets in the room, when was the last time that someone suggested, “Well, you know, we really should’ve lost more here”? Like, literally nobody’s saying that, but sometimes that’s true. Sometimes if you examine it, you’ll find out that you didn’t have a big enough position. It turned out, okay, well, maybe we should’ve actually lost more. So you wanna ask up, down, and orthogonal. So, could we have lost less? Should we have lost more? And then the question of, should we have been in this position at all?
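
A rough sketch of what grading the forecast, rather than just the win or the loss, could look like. Annie doesn’t prescribe a particular scoring rule, so the Brier-style score and all of the numbers below are made up purely for illustration.

```python
# Illustrative only -- a Brier-style score is one common way to grade stated
# probabilities against what actually happened (lower is better).

def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and realized outcomes (0 or 1)."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Two hypothetical analysts state probabilities for the same five deals in advance.
stated_a = [0.8, 0.6, 0.9, 0.3, 0.7]       # analyst A: calibrated but unsure
stated_b = [0.99, 0.99, 0.99, 0.01, 0.99]  # analyst B: always near-certain
happened = [1, 0, 1, 0, 1]                 # what actually occurred

print(f"A: {brier_score(stated_a, happened):.3f}")  # ~0.118
print(f"B: {brier_score(stated_b, happened):.3f}")  # ~0.196, overconfidence is punished
```

Judged as simple right-or-wrong calls, both hypothetical analysts go 4-for-5; judged on forecast quality, A clearly did the better job, which is the shift Annie is describing.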

Marc: So in venture capital, after a company works and exits — let’s say it sells for a lot of money, you do often say, “God, I wish we had invested more money.” You never, ever, ever, ever — I have never heard anybody say on a loss, “We should’ve invested more money.”

Annie: See, wouldn’t it be great if someone said that? Wouldn’t you love for someone to come up and say that to you? That would make you so happy.

Sonal: I actually still don’t get…

Marc: And what would be the logic of why they should say that?

Sonal: I still don’t get the point. Exactly. Why does that matter? I don’t really understand that.

Annie: Can I just give, like, a simple poker example?

Sonal: Yeah.

Annie: So let’s say that I get involved in a hand with you, and I have some idea about how you play. And I have decided that you are somebody that, if I bet X, you will continue to play with me. Let’s say this is a spot where I know that I have the best hand, but if I bet X plus C, you will fold. So if I go above X, I’m not gonna be able to keep you coming along with me, but if I bet X or below, then you will — so I bet X. You call, but you call really fast, in a way that makes me realize, “Oh, I could’ve actually bet X plus C.” You hit a very lucky card on the end, and I happen to lose the pot. I should’ve maximized at the point that I was the mathematical favorite.

Marc: Because your model of me was wrong, which is a learning independent of the win or the loss.

Annie: Exactly. So you need to be exploring those questions in a real honest way.
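
The bet-sizing example above is easy to make concrete with a little expected-value arithmetic. The specific numbers here (a 70% chance of winning when called, a bet of X = 100, an extra C = 50) are hypothetical, purely for illustration.

```python
# Hypothetical numbers to make "should we have lost more?" concrete.
p_win = 0.70  # assumed chance Annie wins the pot once her bet is called
X = 100       # the bet she actually made (and got called)
C = 50        # the extra amount she now realizes would also have been called

def ev_of_called_bet(bet, p=p_win):
    """Expected value of a bet that gets called: win `bet` with probability p, lose it otherwise."""
    return p * bet - (1 - p) * bet

print(ev_of_called_bet(X))      # 40.0, EV of the bet she made
print(ev_of_called_bet(X + C))  # 60.0, EV of the larger bet she could have made
```

On this particular run-out she loses either way, and the bigger bet would have lost more money, but its expected value is 20 units higher, which is exactly why "we should have lost more here" can be the right post-mortem conclusion.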

Marc: Right.