How You Can Predict The Future Better Than World-Famous Experts - The Art & Science of Risk with Dan Gardner
In this episode we discuss the radical mismatch between your intuitive sense of risk and the actual risks you face. We look at why most experts and forecasters are less accurate than dart-throwing monkeys. We talk about how a few simple habits can dramatically reduce your risk of most major dangers in your life. We explore the results from the “Good Judgment Project,” a study involving more than 20,000 forecasters. We talk about what superforecasters are and how they beat prediction markets, intelligence analysts with classified information, and software algorithms to make the best possible forecasts, and MUCH more with Dan Gardner.
Dan Gardner is a New York Times best-selling author and a senior fellow at the University of Ottawa’s Graduate School of Public and International Affairs. His latest book is Superforecasting: The Art and Science of Prediction, which he co-authored with Philip Tetlock. Superforecasting was chosen as one of the best books of 2015 by The Economist, Bloomberg, and Amazon. Dan is also the author of Future Babble and Risk: The Science and Politics of Fear, and previously worked as a policy advisor to the Premier of Ontario and as a journalist with the Ottawa Citizen.
How and why people make flawed judgments about risk
The radical mismatch between our intuitive sense of risk and the actual risks we face
Why we are the safest, healthiest, wealthiest people ever to live on planet Earth (and we don't realize it)
Why we focus on vivid, dramatic risks, and ignore the real dangers in our lives
How a few simple habits can dramatically reduce your risk of most major dangers in your life
The power of “metacognition,” what it is, and why it’s so important
Lessons you can learn from the mega-successful investor George Soros
Why most forecasters are less accurate than monkeys throwing darts
The difference between foxes and hedgehogs (and why you never want to be a hedgehog)
The inverse correlation between fame and prediction accuracy
What cancer diagnoses show about how averse people are to uncertainty
The universal principles of good judgment
The importance of intellectual humility and intellectual curiosity
Why certainty is an illusion and nothing is ever certain
Why everything is a question of degrees of maybe (probabilistic thinking; see the scoring sketch after this list)
The results from the “Good Judgment Project,” a study involving more than 20,000 forecasters
What superforecasters are and how they beat prediction markets, intelligence analysts with classified information, and software algorithms to make the best possible forecasts
The differences between these “superforecasters” and regular forecasters
The importance of being “actively open-minded”
Why you should unpack big questions into smaller ones & look at things like base rates
How to use “Fermi estimates” to solve tough and challenging problems (see the worked example after this list)
Why the growth mindset has a huge impact on the ability to forecast
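A quick illustration of the probabilistic-thinking point above: the episode never names the metric, but forecast accuracy in Tetlock's research was judged with Brier scores. Below is a minimal Python sketch of that idea; the forecast numbers are invented for illustration and are not data from the study.

    # Brier score: mean squared error between probabilistic forecasts
    # and what actually happened. Lower is better; always saying "50%"
    # scores 0.25. (Tetlock's research uses Brier's original two-sided
    # variant, which for yes/no questions is exactly double this value.)
    def brier_score(forecasts):
        # forecasts: list of (probability, outcome) pairs, where
        # probability is the forecast P(event) and outcome is 1 or 0.
        return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

    # Invented example: a "hedgehog" who only ever says 0% or 100%,
    # versus a "fox" who uses the full range of degrees of maybe.
    hedgehog = [(1.0, 1), (1.0, 0), (0.0, 1), (1.0, 1)]
    fox = [(0.8, 1), (0.6, 0), (0.3, 1), (0.7, 1)]

    print(brier_score(hedgehog))  # 0.5 -- confident and often wrong
    print(brier_score(fox))       # 0.245 -- hedged but better calibrated

The only point of the sketch is that well-calibrated "maybes" beat confident all-or-nothing calls once forecasts are scored against reality.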
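And for the Fermi-estimate item above, a worked example of the unpacking method Dan describes in the interview. The "piano tuners in Chicago" question is the classic textbook example, not one from the episode, and every number below is a rough assumption chosen only to show how one big question breaks into small, answerable ones.

    # Fermi estimate: unpack "How many piano tuners work in Chicago?"
    # into smaller questions you can roughly bound, then multiply through.
    # All inputs are illustrative guesses, not researched facts.
    population = 2_700_000         # people living in Chicago (roughly)
    people_per_household = 2       # average household size
    pianos_per_household = 1 / 20  # share of households with a piano
    tunings_per_piano = 1          # tunings per piano per year
    jobs_per_day = 4               # tunings one tuner can do in a day
    working_days = 250             # working days per tuner per year

    pianos = population / people_per_household * pianos_per_household
    tunings_needed = pianos * tunings_per_piano      # yearly demand
    tunings_per_tuner = jobs_per_day * working_days  # yearly supply per tuner
    tuners = tunings_needed / tunings_per_tuner

    print(round(tuners))  # ~68; getting the order of magnitude right is the goal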
Need to do some planning for next year? Listen to this episode!
Thank you so much for listening!
Please SUBSCRIBE and LEAVE US A REVIEW on iTunes! (Click here for instructions on how to do that).
SHOW NOTES, LINKS, & RESEARCH
[SOS episode] Fixed Versus Growth Mindsets
[Book] Mindset: The New Psychology of Success by Carol S. Dweck
[Book] Superforecasting: The Art and Science of Prediction by Dan Gardner and Philip E. Tetlock
[Book] Thinking, Fast and Slow by Daniel Kahneman
EPISODE TRANSCRIPT
[00:00:06.4] ANNOUNCER: Welcome to the Science of Success with your host, Matt Bodnar. [00:00:12.4] MB: Welcome to The Science of Success. I’m your host, Matt Bodnar. I’m an entrepreneur and investor in Nashville, Tennessee, and I’m obsessed with the mindset of success and the psychology of performance. I’ve read hundreds of books, conducted countless hours of research and study, and I am going to take you on a journey into the human mind and what makes peak performance tick, with the focus on always having our discussion rooted in psychological research and scientific fact, not opinion. In this episode, we discuss the radical mismatch between your intuitive sense of risk and the actual risks you face. We look at why most experts and forecasters are less accurate than dart-throwing monkeys. We talk about how to simply and dramatically reduce the risk of most of the major dangers in your life. We explore the results from the Good Judgment Project, a study involving more than 20,000 forecasters. We talk about what superforecasters are, how they beat prediction markets, how they beat intelligence analysts with classified information and software algorithms to make the best possible forecasts, and much more with Dan Gardner. The Science of Success continues to grow, with more than 650,000 downloads, listeners in over a hundred countries, hitting number one New & Noteworthy, and more. A lot of our listeners are curious how to organize and remember all this information. I get listener emails all the time asking me, “Matt, how do you keep track of everything? How do you keep track of these interviews, podcasts, books that you read, studies that you read, all this incredible information?” I’ve developed a system from reading hundreds of books, from doing all this research, from interviewing these incredible experts, and I put it all in a free PDF that you can get. All you have to do is text the word smarter to the number 44222. It’s a free guide we created called How to Organize and Remember Everything. Listeners are loving this guide. I get emails every day from people talking about how this has helped them transform their lives and keep themselves more organized. You can get it completely for free; all you have to do is text the word smarter to the number 44222 or go to scienceofsuccess.co and put in your email. In our previous episode, we discussed why the happiness movement has done us a disservice and sometimes actually makes things worse, how perfectionism creates an illusion of control and distorts your reality, how to become aware of the critical inner voice at the root of your pain and unhealthy habits, the incredible power of self-compassion, and much more with Megan Bruneau. If you’re struggling with difficult emotions, if you want to become happier, if you have a battle with perfectionism, listen to that episode. [0:02:48.7] MB: Today, we have another fascinating guest on the show, Dan Gardner. Dan is a New York Times bestselling author and a senior fellow at the University of Ottawa’s Graduate School of Public and International Affairs. His latest book is Superforecasting: The Art and Science of Prediction, which he coauthored with Philip Tetlock. Superforecasting was chosen as one of the best books of 2015 by The Economist, Bloomberg, and Amazon. Dan’s also the author of Future Babble, and Risk: The Science and Politics of Fear. He also previously worked as a policy adviser to the Premier of Ontario, and as a journalist for the Ottawa Citizen.
Dan, welcome to the Science of Success. [0:03:23.7] DG: Hello. [0:03:24.6] MB: Well, we’re very excited to have you on here today. For listeners who might not be familiar with you, tell us a little bit about you and your background. [0:03:32.1] DG: Yeah, sure. I’ve had a bit of an eclectic background. Initially, after law school, I went and worked in politics, and then I got into journalism and did a whole bunch of work in journalism, and then I happened to catch a lecture one year by a man who is a pioneer in the field of risk perception psychology, Paul Slovic, and that lecture really opened my eyes. It made me connect a lot of dots. I started to think about psychology, I started to study psychology heavily, and that’s been the course of my career ever since. It’s really been an interesting experience, because when you change your understanding of how people think, how they perceive, how they decide, you change your understanding of people generally, and it was a real watershed in my life. [0:04:21.4] MB: What is risk perception psychology? I’m really curious. [0:04:24.5] DG: Basically, it’s a field of psychology that goes back to the 1970s when, as you may know, there was large and growing controversy about the safety of nuclear power. The nuclear engineers would say, “Look at our data, it’s okay, it’s safe, don’t worry about it,” and the public was worried about it regardless. It didn’t matter how many numbers they were shown, they got more and more worried. That was the point at which psychologists got involved to say, “Well, how do people make these judgments about risk? If they’re not making them on the basis of the available data, how are they making these judgments? Why are they so much more worried than the nuclear engineers say they should be?” The bottom line on that is that risk perception is in large part intuitive; it’s felt. If you feel that something is a threat, you’ll take it seriously. If you don’t feel that, you won’t. Generally speaking, that applies to any risk. Sometimes that works; sometimes our intuitive understanding of risk, our intuitive sense of risk, is very accurate and will keep us out of danger, and sometimes it is horribly inaccurate and will not help us whatsoever. A simple example is after 9/11. Of course, we all saw the jet fly into the tower. We saw what happened afterward, and all sorts of folks became terrified of flying, thinking that they would be the next victims of deadly hijackings. They still had to get around, so what did they do? Well, they started driving instead, because that didn’t feel like a threat. Well, guess what? Driving is in fact considerably riskier than flying. As a result of this mass shift from flying to driving, by some estimates, as many as 1,500 people died who would not otherwise have died. That’s a great example of how our intuitive perception of risk can steer us, in fact, into greater danger. [0:06:23.5] MB: That’s something that I find really fascinating, and especially I feel like people who constantly watch the news or get caught up in stories about terrorism, or mass shootings, or whatever it might be, kind of miss the point that, as you’ve said in the past, today we’re actually some of the healthiest and safest people to ever live on planet Earth. [0:06:42.1] DG: Yeah, I mean, that’s just an indisputable fact. We are some of the healthiest and safest people — and wealthiest too, if you want to throw that one in — to ever live, and yet we sure don’t talk or act like it. That’s really pretty unfortunate.
Number one, we’re not appreciating the bounty that has been bestowed upon us, but it also means that, in large part, we’re missing the real risks very often when we think about what we should worry about and what we shouldn’t worry about. You’re quite right, we worry about the big, dramatic, vivid risks like terrorist attacks, even though any quick glance at the statistics will tell you that as an individual, are you likely to be killed in a terrorist attack? Almost certainly not. But simultaneously, we ignore the real risks. Sitting on the couch, watching television, eating junk food doesn’t feel like a threat, but if you do it day after day, month after month, year after year? Yeah, it is a real threat, and that’s why there’s some pretty undramatic advice that I always give people. I always say, basically, if you eat a reasonable diet, don’t smoke, obey all traffic rules, and get some exercise, you have dramatically reduced your risk of all the major killers in modern life. That’s not a terribly exciting message. It’s not exactly great for grabbing headlines. [0:08:07.0] MB: You know, it’s funny. Oftentimes, the best advice is the most simple and obvious. [0:08:12.2] DG: Yeah, I mean, this is one of those areas where that is absolutely true, but the problem, of course, is again, it goes back to how do we judge risks? As I say, sitting on your couch, watching television, eating junk food does not feel like a threat, because of our risk perception psychology. Where does that come from? It comes from the environment in which the brain evolved. It evolved in a world completely unlike the world in which we live, and so there is this radical mismatch between our intuitive sense of risk and the world in which we live. The things that we should be worried about, like not getting enough exercise, like eating too much salt, like smoking, those things don’t feel like threats. Meantime, the things that do feel like major threats, the terrorist attack that you see on television or whatever, aren’t so much. That’s why it’s so absolutely critical that people think carefully about risk judgments. To ask themselves hard questions: Does this really make sense? Is there really evidence to support this? Don’t let your gut drive the decision. [0:09:24.6] MB: When thinking about some of these major risks, for somebody who is listening now, instead of following their gut instinct, what you’re recommending is to think a little bit more deeply about it. [0:09:33.9] DG: Absolutely. Introspection is absolutely essential, and this is actually a point which I think comes out of psychology in general, comes out of decision making in general. When you ask who the people are who make good judgments and what they have in common, I would suggest to you that there are at least a couple of points that are universal, and at the top of that list is introspection. People who have good judgment tend to think a lot about their thinking. Psychologists call that metacognition. They think about their thinking. They tend to be the sorts of people who say, “Okay, this is what I think. Here’s my conclusion, but does it really make sense? Is it really supported by evidence? Am I looking at the evidence in an unbiased fashion? Have I overlooked other possible explanations?” As I say, when you look at people with good judgment, you find that they have that introspection in spades. My favorite illustration of that is George Soros.
George Soros is, of course, controversial today because of politics, but just forget that. Remember that George Soros, from the 1950s to the 1980s, was an incredibly successful investor, particularly during the 1970s. That was impressive because, of course, that was a terrible time to be an investor, and yet he was very successful during that time. The interesting thing is, when George Soros was asked, “George, why are you so good?” And when you’ve made billions and billions of dollars, you’re perfectly entitled to say it’s because I’m smarter than all you people. He never said anything at all like that. His answer was always the same thing. He always said, “I am absolutely aware that I am going to make mistakes, and so I’m constantly looking at my own thinking to try to find the mistakes that I know must be there, and as a result, I catch and correct more of my mistakes than does the other guy.” It’s that sort of very intellectually humble message which he says is the source of his success, and frankly, as I say, I think you can find that sort of deep introspection in every single person who has demonstrable good judgment. [0:11:40.1] MB: On the topic of good judgment, I think that’s a good segue into the whole discussion about forecasting. Let’s start out — I’d love to hear the story, or kind of the analogy, of monkeys throwing darts. Tell me about that. [0:11:54.1] DG: Yeah, that comes from the unfortunate punchline of the research of my coauthor Philip Tetlock. He’s a very eminent psychologist, formerly at the University of California, Berkeley, and now at the University of Pennsylvania’s Wharton School of Business. Phil, back in the 1980s, became interested in expert political judgment. You have very smart people who are observing world affairs, and they say, “Okay, I think I understand it, and I think I know what’s going to happen next.” They make forecasts. Phil asked, “Well, are they any good?” When you look at the available evidence, what you quickly realize is that while lots of people have lots of opinions about expert forecasts, that’s all they are: opinions. They hadn’t been properly scientifically tested. So Phil said to himself, well, how should they be tested? How can we do this? He developed a methodology for testing the accuracy of expert forecasts, and then he launched what was at the time one of the biggest research programs on expert political forecasting ever undertaken. He had over 280 experts, people like economists, political scientists, journalists, intelligence analysts. He had those folks make a huge number of predictions about geopolitical events over many different timeframes, and then he waited for time to pass so that he could judge the accuracy of the forecasts. Then he brought together all the data, crunched all the data, and boiled it all down, and there are vast numbers of findings that came out of this enormous research, which was published in a book called Expert Political Judgment in 2005. One conclusion that came out of this research was that the average expert was about as accurate as random guessing, or, if you want to be pejorative, the average expert was about as accurate as a dart-throwing chimpanzee. Some people really latched on to that conclusion; they really enjoyed that. These are the sorts of people who like to sneer at so-called experts. There are other people who like to say that it’s impossible to predict the future, and they always cite this as evidence for that demonstrably fallacious conclusion.
This is one of those instances where statisticians like to warn people that averages are often useful and insightful, but sometimes they obscure things, and this is one of those classic illustrations where the average actually obscured the reality. The really interesting finding from Phil’s research was not that the average expert was about as accurate as a dart-throwing chimpanzee. It was that there were two statistically distinguishable groups of experts. One group did much worse than the dart-throwing chimpanzee, which is pretty incredible when you think about it. The other group had real predictive insight. They did better than random guessing. It was still modest predictive insight; they made lots of errors, but they clearly had real foresight. The really interesting question from Phil’s original research was what distinguishes the two types of experts? What makes one type of expert a disaster, and what makes the other type of expert somebody with real foresight? He looked at all sorts of factors that you might think would be relevant: Did they have PhDs? Did they have access to classified information? Were they left wing or right wing, optimistic or pessimistic? And he showed that none of these factors made a difference. Ultimately, what made the difference was the style of thinking. The two types of forecasters had two very different styles of thinking. To sum this up, Phil used a metaphor which has been used in many different contexts: foxes and hedgehogs. There’s a scrap of ancient Greek poetry in which the poet says, “The fox knows many things, but the hedgehog knows one big thing.” One type of expert’s style of thinking is to have one big idea; that’s the hedgehog. The hedgehog has one big idea, and here that means they have one analytical tool. They have one lens, one way of looking at reality, and they think that that is sort of the secret decoder ring of the universe, so they use it over and over again to tell them what is going on, to make forecasts. That sort of expert likes to keep their analysis simple. They don’t like to clutter it up with a whole bunch of different perspectives and information. They like to push the analysis until it delivers a nice, clear answer, and of course, if you push the analysis until it delivers a clear answer, more often than not you’re going to be very confident in your conclusion. You’re going to be more likely to say that something is certain or that something is impossible. The other type of expert is the fox, and as the ancient Greek poet says, the fox knows many things. What that means in this context is that the fox doesn’t have one big analytical idea; the fox will use multiple analytical ideas. In one case the fox may use one idea, and in another case, the fox uses a different idea. Foxes are also very comfortable with going and consulting other views. Here I have my analysis, I come to a conclusion, but you have an analysis; I want to hear your analysis. If you’ve got a different way of thinking, a different analysis, a different method, then I definitely want to hear that. They want to hear from multiple information sources. They want to hear different perspectives, and they draw those perspectives together and try to make sense of all these separate sources of information and different perspectives. Now, if you do that, you will necessarily end up with an analysis that is not so elegant as the hedgehog’s analysis. It will be complex and it will be uncertain, right?
You’ll probably end up with more situations where you have, say, seven factors that point in one direction and five factors that point in another direction, and then you’ll say, “Well, you know, on balance, I think it’s maybe 65% likely to happen.” They’ll be more likely to say that sort of thing than they will be to say it’s certain to happen or it’s impossible, right? They end up being much less confident than the hedgehogs. Well, the conclusion of Phil’s research was that the hedgehogs were disastrous when it came to making accurate forecasts. As I said, they were less accurate than the dart-throwing chimpanzee. The foxes had the style of thinking that was more likely to produce an accurate forecast. But here’s the punchline. The real punchline from Phil’s research is that he also showed there was an inverse correlation between fame and accuracy, meaning the more famous the expert was, the less accurate his forecasting was, which sounds absolutely perverse when you think about it, because of course you would think that the media would flock to the accurate forecaster and ignore the inaccurate forecaster. In fact, it makes perfect sense, because remember that the hedgehog tells you a simple, clear story that comes to a definite conclusion: it will happen or it won’t happen. A confident conclusion. Whereas the fox expert says, “Well, there are some factors pointing in one direction, and other factors pointing in another direction. There’s a lot of uncertainty here, but I think it’s more likely than not that it will happen.” If you know anything about the psychology of uncertainty, we really just don’t like uncertainty, right? When you go to an expert and you get that fox-like answer that says, well, balance of probabilities, that’s psychologically unsatisfying, whereas the hedgehog is giving you what you psychologically crave, which is a nice, simple, clear story with a strong, clear conclusion. As a result, we find that the media goes to exactly the type of expert who is most likely to be wrong. That’s a really important and really unfortunate finding, and I wish it were as famous as Phil’s finding about the predictions being about as accurate as the dart-throwing chimpanzee, because it is just so much more important. Unfortunately, there it is. That was the culmination of Phil’s first enormous research program. [0:20:05.7] MB: I think it’s such an important finding that the smartest people, “the most accurate forecasters,” as you call them, the foxes, are often kind of the most humble and the least confident and certain about what’s actually going to happen. [0:20:18.3] DG: Yup. This is, again, one of those universals of good judgment you were asking about. One of the universals is a quality that I call intellectual humility. I emphasize intellectual humility because it’s not just humility. This isn’t somebody wringing his or her hands and saying, “I’m not worthy, I’m no good.” By intellectual humility, I mean it’s almost like a worldview in which you say, look, reality is immense, complex, and fundamentally uncertain in many ways. For us to understand even a little bit of it, let alone to predict what’s going to come next, is a constant struggle. What’s more, we’re fallible people, and people make mistakes, so I just know that I’m going to have to work really hard and I’m still going to make mistakes, but I can in fact slowly try to comprehend a little bit and try to do a little bit better. That attitude is absolutely fundamental for a couple of reasons.
Number one, it says you’re going to have to work really hard at this, right? Comprehending reality, let alone forecasting, is not easy. Expect to work hard if you want to do it well and accurately. Number two, it encourages introspection. You remember I mentioned earlier that introspection is universal among people with good judgment. Well, if you’re intellectually humble and you know you’re going to make mistakes, you’re going to be constantly thinking about your thinking so that you can try and find those errors, okay? So introspection flows naturally out of intellectual humility. The third element that flows out of intellectual humility is this: if you have this idea that the universe is vast and complex and we can never be sure, then you know that certainty is an illusion. You should not be chasing certainty, because human beings just can’t manage that. What does that mean? That means, don’t think of making a forecast in terms of it will happen or it won’t happen. Don’t think in terms of 100% or 0%. Think in terms of 1% to 99%. It’s all a question of degrees of maybe, right? The finer the grain with which you can distinguish between degrees of maybe, the better. What I’ve just described is something called probabilistic thinking. It too is very fundamental to people with good judgment, and unfortunately, it’s very unnatural. It’s not how people normally think. In fact, how people normally think is with what we sometimes call the three-setting mental dial. You know, you ask yourself, is this thing going to happen? And you say it will happen, or it won’t happen, or, if you really force me to acknowledge uncertainty — because I really don’t like uncertainty — I will say maybe. That’s the third setting of my mental dial. There are only those three crude settings, whereas probabilistic thinking says no. Throw out those two settings, it will happen or it won’t happen; it’s all degrees of maybe. As I say, this is not natural. This is not how people ordinarily think, but people can learn to do it, and they can make it a habit. Scientists think as probabilistic thinkers, good scientists do anyway, and so do the superforecasters that we discovered in Phil’s second research program, people with demonstrably excellent forecasting skill. They are real probabilistic thinkers. It is a habit with them. I mean, I spoke with one superforecaster, and just in a casual conversation I said, “Do you read? Do you read much?” He said, “I read lots,” and I said, “Well, do you read fiction or nonfiction?” He said, “I read both.” I said, “Well, what proportion of the two would you say that you read?” He said, “It’s about 70/30.” Then he caught himself and thought carefully, and he said, “No, it’s closer to 65/35,” right? This is in a casual conversation. Normal people just don’t think with that degree of fine-grained maybeness. People who learn to think in probabilistic terms can make it habitual, and they can think that carefully. By the way, the data is very clear that that is in fact one of the reasons why these superforecasters are super. [0:24:38.5] MB: Before we dig into that, because I do want to talk about how we can train ourselves to think more probabilistically and how we can learn from some of these superforecasters: touching back on the idea of why people dislike uncertainty so much, can you share the anecdote about cancer diagnosis? [0:24:55.8] DG: Sure. Look, when I say that people dislike uncertainty, people, I get it, okay? I dislike uncertainty.
I would prefer to have hard facts, it is or it isn’t. Okay, but I don’t think people quite appreciate just how profoundly aversive uncertainty really is, psychologically aversive, it really is. Let me illustrate, in fact, with two illustrations. One is a scientific study that was conducted in Holland, where they asked volunteers to experience electric shocks. Some of the volunteers were told, “You are about to receive 20 strong electric shocks in a sequence,” and then they were wired up to be monitored for the physiological evidence of fear, which is elevated heart rate, elevated respiration rate, perspiration, of course. Then other volunteers were told they would receive 17 mild electric shocks randomly interspersed with three strong electric shocks, and they too were monitored for the evidence of fear. Now, objectively, the first group obviously received much more pain, much more painful shocks, but guess who experienced more fear? It was the second group. Why? Because they never could know whether that next shock would be strong or mild. That uncertainty caused much more fear than the pain itself. That sort of aversion to uncertainty is very powerful stuff, and you will see it in doctors’ offices. In fact, any doctor will tell you a version of the story I’m about to tell. The patient comes in, the doctor has reason to suspect that the patient has cancer, tells the patient this, and says, “But we can’t be sure. We have to do more tests, and then we’ll see.” They do the tests, and then the patient waits. Any person who has ever been through that will tell you that the waiting is hell. Then one day, you go back to the doctor’s office, you sit down, and sometimes, unfortunately, the doctor has to say, “I’m afraid to tell you that the tests confirm that you have cancer.” Almost universally, what patients report feeling at that moment is relief. They feel better, and they almost always say the same thing: “At least I know.” That’s how powerful uncertainty is, that the possibility of a bad thing happening can be a greater psychological burden on us than is the certainty that the bad thing is happening. If that’s the case, if uncertainty is so horrible to us and we just want to get rid of it, it’s really no surprise that we will turn to sources that promise to get rid of uncertainty, even when it’s not rational to do so. [0:27:49.2] MB: Now let’s dig into the idea of superforecasting, and let’s start with: what is a superforecaster? [0:27:57.2] DG: Yeah, it’s a bit of a grandiose term, I have to admit. It actually has humble origins. A number of years ago, the Office of the Director of National Intelligence in the United States — that’s the office that oversees all 16 intelligence agencies, including the CIA — well, a number of officials in that office decided that they had to get more serious about analyzing the forecasting that the intelligence community does. I don’t know if you’re aware, but the intelligence community actually spends a lot of its time not just spying, but also analyzing information to try and figure out what’s going to happen next. If Russia is saber-rattling, they’re going to make a forecast: Will Russia try to seize the Crimea? You know, they’ll try to make forecasts on all sorts of geopolitical events, including economic events, like what’s going to happen with the Chinese economy in the fourth quarter, that sort of thing. The officials within the ODNI decided that they had to get better at this.
One of the ways that they decided they would get better at this was to sponsor what came to be called a forecasting tournament. What that meant was very simple. It sounds like a game, but it’s not a game; it’s an enormous research program. What they did was go to leading researchers in forecasting and say, “You set up a team to make forecasts, and we’ll ask questions, and they’ll be the real-world questions that we have to answer all the time. We’ll ask them in real time, as they arise. If an insurrection breaks out in Syria, we’ll ask something about how that will proceed, so you have to forecast it, and then we’ll let time pass, and then we will judge whether your forecasts are accurate or not. We’ll do this for lots and lots of questions, and you guys, you researchers, you can use any methods you want, and then at the end of this process, we will be able to analyze the accuracy of all these forecasts. We will see which methods work, which methods don’t, and then try to learn how we can improve what we’re doing.” Very sensible stuff, you would think. As I said, they went out to leading researchers, and ultimately they ended up with five university-based research teams in this forecasting tournament. One of the research teams was led by my coauthor Philip Tetlock, and that team was called the Good Judgment Project. To give you an idea of the scale of this undertaking: the Good Judgment Project, which as I say was only one of five teams, involved volunteers. They were recruited through blogs and whatnot: basically, do you want to spend a little free time making geopolitical forecasts? Then sign up here. They got huge numbers of volunteers. At any one time there were 2,800 to 3,000 people involved with the Good Judgment Project. Over the course of the four-year tournament, there were more than 20,000 people involved. That gives you an idea of the scale of this. There were many results that came out of this because, as you can imagine, the data are voluminous, but the bottom-line results were these. Number one, the Good Judgment Project won hands down. Number two, the Good Judgment Project discovered that there was a small percentage, between 1% and 2%, of the volunteer forecasters who were truly excellent forecasters. They were consistently good, and I say consistently good because that’s very important to bear in mind. Anybody can get lucky once, or twice, or three times, but if you’re consistently good, you can be pretty sure that you’re looking at skill, not luck. To give you an idea of how good they were: at the start of the tournament, the ODNI set performance benchmarks which all the researchers thought were way too ambitious; nobody could beat this. The superforecasters blew past the performance benchmarks. They beat prediction markets, which economists would say shouldn’t be possible. They even beat intelligence analysts who had access to classified information, which is particularly amazing because, remember, these are ordinary folks. These superforecasters, when they went to make their forecasts, basically had to use just whatever information they could dig up with Google. Yet they were able to beat even people who had access to all that juicy classified information. This is really impressive stuff, and then the question is, well, why are they so good? We can quickly dispatch a number of things that you might think would explain this.
Number one, you might think that they’re using some kind of arcane math, right? They’re using big data, or algorithms, some craziness that ordinary folks can’t understand. No, they didn’t. They are very numerate people, by the way; I should emphasize that point. They are well above average in numeracy. But to the extent that they used math in making their judgments, it was like high school math; it was nothing particularly dramatic. Another thing that you might say would make the difference: well, maybe they’re just geniuses, right? They’re just so off-the-charts intelligent that they’re just super. No, that’s not the case either. They were given IQ tests, and again, they scored well above average. These are not just randomly selected folks off the street, but they’re not some kind of otherworldly geniuses; they’re not so incredibly intelligent that ordinary folks can’t relate to them. The very clear conclusion that you can draw from this is basically that it’s less what they have than how they use it. The third element that you might think of is specialist knowledge, right? You might think, well, okay, these are experts in the fields that they’re trying to forecast. No, I can tell you categorically, they were not experts in the field. They’re very informed people, right? These are people who agreed to make geopolitical forecasts in their spare time. It’s no surprise that they’re smart; they follow the news, they follow international news, they’re interested in this stuff, they’re very informed, but they’re not specialists. We know this for the very simple reason that they were asked all sorts of different questions, in all sorts of different fields, and nobody’s an expert in every field. So, they’re not any of those things, and then the question is, well, what elevates them? What makes them different? I wish there were one or two simple answers, a couple of clear, crisp bullet points that answer everything, but that’s not the case. As is so often the case, the reality is complex. There’s quite a list of things that make them different. Number one, they’re intellectually curious. I think that’s very important, and it’s no surprise. These are people who like to learn; they’re constantly picking up bits and pieces of information, and no surprise, when you spend a lot of time picking up this sort of information, eventually you will have quite a number of dots in your intellectual arsenal for you to connect. Two, these are people who score very high in what psychologists call need for cognition, which simply means that they like to think. They really enjoy thinking. They’re the kinds of people who do puzzles for fun, and the harder the puzzle is, the more fun it is, which is very important, because when you look at how they actually make their forecasts, it’s a lot of hard mental effort, and so enjoying hard mental effort sure helps. Three, they’re actively open-minded. This is another term from psychology. Open-minded is pretty obvious; that means, okay, I’ve got my perspective, but I want to hear your perspective. I want to hear somebody else’s perspective. I want to hear different ways of thinking about this problem. Then they’re going to gather all these different perspectives together and try to synthesize them into their own view. Now, that’s the open-minded part, but of course, there’s an old saying about open-mindedness: don’t be so open-minded that your brain falls out.
Well, that’s the active part, the active open-mindedness, and these folks were very active in their open-mindedness, meaning that as they’re listening to all these other perspectives and gathering these other perspectives, they’re thinking critically about them. They’re saying, does that really make sense? Is that actually supported by the evidence? Is that logical? They’re doing that constantly when they draw these perspectives together and synthesize them into their own view, which, again, I would emphasize, sounds like a heck of a lot of work. It is. Fortunately, as I said, they like hard thinking. Fundamentally also, they’re intellectually humble. I mentioned intellectual humility earlier. That is absolutely true here, and all the things that flow from that are true. You know, they’re hard mental workers, they’re deeply introspective people, they’re constantly looking at their thinking, trying to find the mistakes, trying to correct and improve it. And they’re probabilistic thinkers; that also flows from intellectual humility. Another element I would add is simply this: if you ask how they actually approach a problem, how they actually make a judgment, one of the critical differences between a superforecaster and most ordinary folks is that rather than simply vaguely mulling over information and stroking your chin until an answer somehow emerges and you don’t know how (that’s a terrible way to make a forecast, by the way), what they do is methodically unpack the question. They take a big question and they unpack it, and make a whole series of smaller questions, and then they unpack those and make a series of smaller questions, and they methodically examine them, each one, step by step by step. Again, this is a very laborious method; a lot of hard mental work goes into it, but it’s demonstrably effective. There’s a famous physicist named Enrico Fermi, one of the fathers of the atomic bomb, who became famous for his ability to estimate things accurately. He actually taught this method. Fermi estimates basically involve unpacking questions so that you methodically tackle them, one after the other after another. People who work in physics or engineering will be familiar with this; Fermi estimates are actually taught in those departments. In fact, to engineers, this is almost second nature, this idea of unpacking the problem and methodically tackling it that way. This is a bit speculative, but it’s probably not a coincidence that a disproportionate number of the superforecasters have engineering backgrounds. Software engineers, computer programmers, whatever. People with engineering backgrounds sort of get this. [0:38:35.2] MB: That was fascinating, and I think one of the most important things you said is that it’s not easy. It takes a lot of hard work to make effective decisions, or in this particular context, effective forecasts. One of the things that I always say is that there’s no kind of get-rich-quick strategy to becoming a better thinker. It takes a lot of time, energy, reading, and introspection to really build a robust thought process to improve your own ability to think and make better decisions. [0:39:05.0] DG: That’s absolutely correct. It also touches on a further factor, which I didn’t mention, and which is certainly one of the most important.
That factor is that these are people who have what psychologists call the growth mindset, which is that they believe that if they think hard, and they work hard, and they practice their forecasting skill, and they look at the results of their forecasts, and they think about how they got them right or how they got them wrong, and then they try again, they will improve their forecasting skill, just as you would improve any skill that you practice carefully with good feedback over time. You might say, but isn’t that perfectly obvious? Doesn’t everybody understand that in order for you to improve a skill, you have to practice it, and the more you practice, the better it will get? Unfortunately, that’s just not true. There’s a psychologist named Carol Dweck who has done an enormous amount of research on this, and she talks about two mindsets. One is the growth mindset that I just described, but the other mindset is the fixed mindset, which is basically the idea that we’re all born with abilities and talents and skills, and that’s all we’ve got. If I try something and I fail, I’m not going to try it again, because I have demonstrated the limits of my abilities, and it would be foolish of me to waste time trying to improve those abilities. That’s why it’s very critical — and we see this clearly in the superforecasters: they have a very strong growth mindset, and more importantly, they put it into action. They were making their forecasts, they were doing post-mortems, trying to figure out what went right, what went wrong, and why. They were trying to improve on the next round, and they did; there was demonstrable improvement. It’s very clear that underlying all of this is that you have to have some belief in the ability to grow, or you won’t engage in the hard work that’s necessary to grow. [0:41:10.2] MB: Long-time listeners of the show will know that we’re huge fans of Carol Dweck and the book Mindset, and we actually have a whole episode on the difference between the growth mindset and the fixed mindset. [0:41:22.2] DG: Great. [0:41:22.1] MB: Breaking out all those things. I’ll include links to both of those things in the show notes, for people who may not have heard our previous episodes about them, so they can dig down and really understand those concepts. Yeah, I totally agree, I’m a huge fan of the growth mindset, and I think it’s critically important. [0:41:39.8] DG: Yeah, there’s no question that in Phil Tetlock’s superforecasting research, the data very clearly demonstrate that. [0:41:47.8] MB: For somebody who is listening, what are some small, concrete steps they could take right now to implement some of the best practices of superforecasters to improve their own thinking? [0:41:58.7] DG: Well, the first thing I would say is, adopt as an axiom, because of course, as humans, we all have to have axioms in our thinking. Adopt as an axiom that nothing is certain, right? It’s easy to say that in the abstract, but it’s a lot harder to apply it in our lives, because if you stop and think about your own thinking, you’ll begin to realize that you use the language of certainty constantly, which is normally fine. I’m sure in this conversation I’ve used “certainly” and that sort of thing. But remember, at a minimum, that any time you say “certain” or refer to certainty, there’s an asterisk, for all of us, right? The asterisk means “almost,” because in fact, in reality, literally nothing is certain. Not even death and taxes.
Once you start to think in those terms, you make that an axiom, and you can start to make it a habit to say, okay, it’s not certain, so how likely is it? Think in terms of probability. You know, it’s often said that the ability to distinguish between a 48% probability and a 52% probability, or even a 45% and a 55% probability, sounds like a modest thing, but if you can do that consistently, that’s the difference between going bankrupt and making a fortune in certain environments, such as Las Vegas or Wall Street. Learning to think, to make it habitual to think in terms of probability, is, I think, step number one. [0:43:32.4] MB: For listeners who want to find you or the book, what’s the best place for people to find you online? [0:43:38.3] DG: Probably dangardner.ca, that’s .ca for Canada. [0:43:45.7] MB: For listeners who might have missed it earlier, the book that we’ve primarily been talking about is Superforecasting. I highly recommend it; as you can tell from this interview, Dan is incredibly sharp about all these different topics. Dan, for somebody who’s listening, obviously they should check out Superforecasting. What are some other resources you’d recommend if they want to learn more about how to make better decisions and how to make better forecasts? [0:44:08.1] DG: That’s an easy question. The very first book — in fact, I would recommend it before my own books, which is something authors aren’t supposed to do, but here it goes. The very first book folks should read is Daniel Kahneman’s book, Thinking, Fast and Slow. Kahneman is, of course, the Nobel Prize-winning psychologist who is one of the seminal figures of our time, and fortunately, not long after I read all of his papers and learned the hard way, he finally got around to writing a popular book, and Thinking, Fast and Slow is absolutely essential reading. Anybody who makes decisions, whether it’s in business, or in government, or in the military, or anywhere else, anybody who makes decisions that matter should read Thinking, Fast and Slow. [0:44:54.3] MB: I totally agree. It’s one of my favorite books, and I think one of the deepest, most information-rich books about psychology that’s on the market today. [0:45:03.1] DG: Absolutely. [0:45:04.0] MB: Dan, this has been a great conversation, filled with a lot of fascinating insights. Thank you very much for being on the show. [0:45:11.9] DG: Thank you, it was a lot of fun. [0:45:13.4] MB: Thank you so much for listening to the Science of Success. Listeners like you are why we do this podcast. The emails and stories we receive from listeners around the globe bring us joy and fuel our mission to unleash human potential. I would love to hear from you. Shoot me an email; send me your thoughts, kind words, comments, ideas, suggestions, your story, what the podcast means to you, whatever it might be. I read and respond to every single email that I get from listeners. My email address is matt@scienceofsuccess.co. Shoot me an email; I would love to hear from you. The greatest compliment you can give us is a referral to a friend, either live or online. If you’ve enjoyed this episode, please leave us an awesome review and subscribe on iTunes. That helps more and more people discover the Science of Success. Lastly, as a thank you for being awesome listeners, I’m giving away a $100 Amazon gift card. All you have to do to be entered to win is text the word smarter to the number 44222. Thanks again, and we’ll see you on the next episode of the Science of Success.