Making Better Decisions, The Sophomore Jinx, & The Illusion of Objectivity with Dr. Richard Nisbett
In this episode we discuss the errors people make in their reasoning and how to correct them, explain a number of statistical principles to help sharpen your thinking and make you a better decision maker, examine why every $1 spent on a “scared straight” program creates $400 of cost for the criminal justice system, explore the illusion of objectivity, explain why you should NOT rely on your intuition, and much more with Dr. Richard Nisbett.
Dr. Richard Nisbett is a professor of psychology at the University of Michigan. He has been awarded the Distinguished Scientific Contribution Award of the American Psychological Association, the William James Fellow Award for Distinguished Scientific Achievements, and the Donald T. Campbell Award for Distinguished Research in Social Psychology, among others. He is the author of the recent book Mindware, as well as The Geography of Thought: How Asians and Westerners Think Differently…and Why, and Intelligence and How To Get It.
The errors people make in their reasoning and how to correct them
How to apply the lessons of statistics to making better decisions
Is your intelligence fixed and unchangeable?
How the industrial revolution massively transformed the way people think
We discuss the skills, not measured on any IQ test, that you must have to be able to function effectively in today’s age
Why job interviews are totally useless and have almost no correlation to job performance
How misunderstanding the law of large numbers can lead you to make huge mistakes
Why does the rookie of the year almost always have a worse performance the following year?
Understanding regression to the mean and how it creates extremely counterintuitive conclusions
Why Performance = Skill + Luck
Why deterministic thinking can drastically mislead you in finding the root cause of a phenomenon
We explain a number of statistical principles to help sharpen your thinking and make you a better decision maker
The concept of "base rates" and how they can transform how you think about reality
We walk through a number of concrete examples of how misunderstanding statistics can cause people to make terrible decisions
If you’re like most people, then like most people, you think you’re not like most people (but you are)
Why every $1 spent on a “scared straight” program creates $400 of crime and incarceration costs
Why the “Head Start” program is a massive failure and what we could have done about it
How you can use the experimental method to run data-driven experiments in your own life
The illusion of objectivity - Why you should NOT rely on your intuition
How we massively distort our perception of reality and why our perceptual apparatus can easily mislead us
How many of the structures we use to understand the world are highly error prone
Why we are amazing at pattern detection but horrible at “covariation detection”
Why the traditional Rorschach test is bogus and has virtually no predictive power
Why you are likely “horrendously miscalibrated” in your assessments of people’s personalities
If you want to make better decisions - listen to this episode!
Thank you so much for listening!
Please SUBSCRIBE and LEAVE US A REVIEW on iTunes! (Click here for instructions on how to do that).
SHOW NOTES, LINKS, & RESEARCH
[Book] The Bell Curve: Intelligence and Class Structure in American Life by Richard J. Herrnstein and Charles Murray
[Scholarly Article] Objectivity in the Eye of the Beholder by Emily Pronin, Lee Ross, and Thomas Gilovich
[Book] The Signal and the Noise: Why So Many Predictions Fail--but Some Don't by Nate Silver
[Book] How Not to Be Wrong: The Power of Mathematical Thinking by Jordan Ellenberg
[Book] Thought and Knowledge: An Introduction to Critical Thinking (Volume 2) by Diane F. Halpern
[Book] Thinking, Fast and Slow by Daniel Kahneman
Charlie Munger Resources:
[Book] Poor Charlie's Almanack by Peter D. Kaufman, Ed Wexler, Warren E. Buffett, and Charles T. Munger
[SOS Episode] How To Stop Living Your Life On Autopilot, Take Control, and Build a Toolbox of Mental Models to Understand Reality with Farnam Street’s Shane Parrish
[SOS Episode] The Psychology Behind Making Better Decisions with Global Financial Strategist Michael J. Mauboussin
[Farnam Street Blog] Creating a Latticework of Mental Models: An Introduction
[Safal Niveshak article] Mental Models
[Lattice Work article] Charlie Munger on Elementary Wisdom and Mental Models by Brian Hertzog
EPISODE TRANSCRIPT
[00:00:06.4] ANNOUNCER: Welcome to the Science of Success with your host, Matt Bodnar.

[00:00:12.4] MB: Welcome to The Science of Success. I’m your host, Matt Bodnar. I’m an entrepreneur and investor in Nashville, Tennessee, and I’m obsessed with the mindset of success and the psychology of performance. I’ve read hundreds of books, conducted countless hours of research and study, and I am going to take you on a journey into the human mind and what makes peak performers tick, with the focus on always having our discussion rooted in psychological research and scientific fact, not opinion. In this episode, we discuss the errors people make in their reasoning and how to correct them. We explain a number of statistical principles to help you sharpen your thinking and make you a better decision maker. We look at why every $1 spent on a Scared Straight program creates $400 in additional cost for the criminal justice system. We talk about the illusion of objectivity, why you should not rely on your intuition, and much more with Dr. Richard Nisbett. The Science of Success continues to grow with more than 675,000 downloads, listeners in over a hundred countries, hitting number one in New and Noteworthy, and more. A lot of our listeners are curious how I organize and remember all this information. I get tons of emails and comments asking me how I keep track of all the incredible knowledge I get from reading hundreds of books, interviewing amazing experts, listening to a ton of podcasts, and much more. Because of that, we’ve created an amazing free resource for you. You can get it completely free by texting the word “smarter” to the number 44222. It’s a guide we created called How to Organize and Remember Everything. Listeners are loving it. We’re getting emails all the time from people telling us how it has changed their lives and helped them stay more organized and keep track of all of the stuff that they’re learning. Again, you can get it completely free by texting the word “smarter” to the number 44222 or by going to scienceofsuccess.co and putting in your email. In our previous episode, we discussed the radical mismatch between your intuitive sense of risk and the actual risks you face. We looked at why most experts and forecasters are less accurate than dart-throwing monkeys. We talked about how to simply and easily dramatically reduce your risk for the most major dangers in your life. We explored the results from the Good Judgment Project, which is a study of more than 20,000 forecasts, and we talked about superforecasters, what they are and how they beat prediction markets, intelligence analysts with classified information, and software algorithms to make the best possible forecasts, and much more with Dan Gardner. If you’re thinking about planning for next year and you want to be able to predict the future better, listen to that episode.

[00:02:46.4] MB: Today, we have another fascinating guest on the show, Richard Nisbett. Richard is a professor of psychology at the University of Michigan. He’s been awarded the Distinguished Scientific Contribution Award of the American Psychological Association, the William James Fellow Award for Distinguished Scientific Achievements, and the Donald T. Campbell Award for Distinguished Research in Social Psychology, among others. He’s the author of the recent book Mindware, as well as The Geography of Thought: How Asians and Westerners Think Differently…and Why, and Intelligence and How to Get It. Richard, welcome to the Science of Success.
[0:03:16.0] RN: Thanks, glad to be here.

[0:03:17.3] MB: Well, we’re very excited to have you on here today. So for listeners who may not be familiar, tell us a little bit about yourself?

[0:03:22.9] RN: Well, the thrust of my career has been studying reasoning, and fairly early on I got interested in studying the errors that people make in reasoning. And after I had been doing that for a while, I began to think, “Well, can I correct these errors?” And at the time — we’re talking now about the 70’s, early 80’s — psychologists were quite convinced that there really wasn’t much you could do to change the way people think: reasoning is done at a very concrete level, and you can’t just insert abstract rules and expect that to affect reasoning. So I bought that, and I don’t know exactly why I decided to test it anyway, but I did. I started to see if I could make people more rational, make better judgments and decisions, by teaching them rules like the law of large numbers, the concept of regression, how to think probabilistically, microeconomic concepts like cost-benefit analysis, and so on. I found, first of all, that people do learn in college, and this is counter to the prevailing theory. They do learn some general rules that do improve the way they think, although it’s very spotty; different majors are learning different things. So then I decided to see whether I could, myself, teach them these rules in a brief period of time, and what I found was that I can teach these kinds of rules in 15 to 20 minutes and they stick with people at least for a few months after that. I know it because I call them in the guise of a survey researcher asking their opinions about various things, and I know that if they use the rule that they should, then that will come out in the answer. Sure enough, people do, to a very significant extent, retain those kinds of rules. So that gave rise to the book that I wrote, which is brief, breezy descriptions of rules and concrete problems that they can be applied to.

[0:05:29.1] MB: I’d love to talk a little bit more about Mindware. Tell me what inspired you to write the book?

[0:05:34.1] RN: Well, it was this discovery that people are learning something about probabilistic concepts, statistical concepts, experimental methodology concepts, microeconomic concepts, some philosophy of science concepts, logic, et cetera. They’re getting some of that in school, but they’re not getting nearly as much as they easily could if professors just spent a little more time. My joke about statistics courses is that they’re taught so as to prevent, if at all possible, the escape of statistical concepts into everyday life. If professors of statistics just gave a few concrete examples, I now know that would make a huge difference. They probably would say, “Oh no, you know, we don’t have that kind of time. This material has to be gotten through.” That’s not the way to think about it, because the concrete examples from everyday life actually feed back into an abstract understanding of the principles. So they actually could get their teaching done quicker if they gave more ordinary examples.
So my book is trying to do that: “what your statistics teacher didn’t do for you, and if you haven’t had statistics, here are some very powerful concepts that will save you a lot of grief in life.”

[0:06:54.9] MB: I’m curious, are you familiar at all with Charlie Munger and his concept of mental models, and the notion of arraying mental models on a latticework of understanding, in terms of building a much richer toolset to understand reality?

[0:07:10.1] RN: I’m not, that sounds like something I should know about.

[0:07:12.6] MB: Definitely recommend checking him out. After the show, I’ll shoot you a few links; he’s amazing, and we’ll throw some things in the show notes as well. But he is one of my favorite thinkers about a very similar concept, which he calls mental models. Which is basically the idea that, in order to accurately understand reality, we have to master the fundamental principles of all the major disciplines that govern reality, everything from the physical sciences to statistics, mathematics, economics, and especially psychology, and build a robust framework that incorporates all of those into truly understanding reality.

[0:07:44.2] RN: That sounds like a great idea.

[0:07:45.3] MB: It sounds like in many ways that’s kind of the same path that you embarked down, in terms of taking a lot of these concepts that get easily misunderstood and making them so that people can really grasp them in a simple and understandable way.

[0:07:57.0] RN: Exactly.

[0:07:59.0] MB: I’m curious, one of the things that I’ve heard you talk about in the past is how both the industrial revolution and then the information revolution changed the way that we need to think. I’d love to hear you explain that concept.

[0:08:11.8] RN: I’d be delighted. I’d say 15 or 20 years ago, there was a book written by Charles Murray and Richard Herrnstein, very famous at the time, called The Bell Curve. It’s all about intelligence, and it basically says intelligence is fixed at birth. I mean, it’s primarily a heritable thing, there’s not much you can do in the way of the environment to improve intelligence, or IQ. Oh, and incidentally, some ethnic groups have lower IQs for genetic reasons than others. Every single thing I just said is wrong, and a book I wrote called Intelligence and How to Get It shows how wrong all of that is. But I would also say that intelligence is broader than what you test on IQ tests. What I began to be aware of, doing historical studies and studies with people who have had no formal education and no experience with the modern world, is that the industrial revolution absolutely changed the way people think. I mean, profoundly. Prior to the late 18th century, people were not really capable of thinking in abstractions, they were not capable of applying logical rules to thought, they were not capable of counterfactual reasoning: this is not the way the world is, we both know, but suppose the world were that way, what would follow from that? That was impossible for them, we know, because we know that people with so little education today are unable to do those things. So the industrial revolution taught people the three R’s, reading and writing and arithmetic, and then, for free, we got all of these abstract reasoning skills. We continue to improve in those kinds of skills. Over the last 70 years, IQ has increased by more than a standard deviation.
That’s approximately the difference between an average of 100 seventy years ago and an average on that same test of 115 to 118 today. That’s the difference between somebody that we would expect to graduate high school and maybe have a year or two of junior college, versus someone we would expect to surely finish a four-year college and possibly go on to post-doctoral work. That’s the kind of difference that we get as a function of additional cultural changes, improvements in education, and so on. Even a lot of activities that are not undertaken for instructional reasons but just for fun. I Love Lucy was a great TV show, but it didn’t place many demands on your intellect. But I watch TV shows now and I can’t keep up with them. Who is that guy? What is he trying to achieve? That kind of entertainment is much more sophisticated today, and of course we have computer games. We also know that some of those are improving intellectual skills. Okay, so that’s the history of IQ and some kinds of intelligence that are related to IQ. But we live in a new era, the information age, and the IQ skills are still highly relevant, but there are a lot of skills that are not represented on IQ tests at all that you have to have to be smart enough to function in our age. I’ll give you one example. I ask people to tell me what they think would be likely to happen if you looked at the boy births versus girl births per day in two hospitals in a given town, one with about 15 births per day and one with about 45 births per day, and then you ask, “At which of these two hospitals do you think there would be more days in the year when 60% or more of the babies born were boys?” Now, half the people will tell you it makes no difference, and of the remainder, about half will say it would be the larger hospital that would have more such days, and about half say it would be the smaller hospital. In actual fact, if you think about 15, well, at 60% that would be nine boys versus six girls. That’s the kind of thing that can happen frequently. If you had one more girl birth instead of a boy, then it would be eight to seven, and you can’t do any better than that in coming close to 50/50, which we assume is the population value for the percent of boys born. With 45, however, it’s really very unlikely. You’re now looking at a 60% split, which you would see only three or four times a year. It’s because the larger your sample, the closer you come to the population value; if you have a very small sample, you can be way off. Suppose there were three births. You’re going to automatically be hugely off the population value. As long as you’re sampling randomly, which basically is the way to think about births, the more cases you have, the more it’s going to resemble the population value. So that’s a useful kind of thing to know for that kind of numerical example, and there’s lots that happens in the world that you’ll think about differently if you know it. But I apply the law of large numbers to the following kind of problem. I say to people: I have a friend who is an executive, and he told me that the other day he interviewed someone who had great recommendations from his previous companies and a great record of performance. But in the interview the guy seemed kind of lackluster; he didn’t have anything very trenchant to say about my friend’s business, so he decided not to pursue the guy any further.
If I say this, people say, “Well, yeah, okay, fine. What’s interesting about that? Happens all the time.” Well, it does happen all the time, but it’s a huge mistake, because it turns out that the interview correlates with subsequent performance in college, in graduate school, in medical school, in officer training school, and in every business and profession that’s ever been examined, to the tune of 0.1. That’s an extremely low correlation. It’s enough to take you from picking the better of two candidates at 50/50, a coin toss, up to about 53%. It’s a trivial gain, and what’s horrendous about that is that typically the folder has a huge amount of information: a grade point average, previous performance, letters from people who have known the person for hundreds or thousands of hours. It’s a huge amount of evidence that comes much closer to the population value on average than you would ever get from an interview. So it’s a mistake actually to interview at all, because the gain would only be worth anything if you could confine the interview’s contribution to the judgment about whether to hire the person or not to its appropriate place, which is essentially no more than a tiebreaker, and we’re not capable of doing that. I’m not capable of doing that. When I interview people, I have this same illusion, you know, “I really learned a lot about this person’s intellect and personality,” and it’s baloney, I haven’t. But to make matters even worse for the interview, it isn’t even a sample of the population you’re after. It isn’t a sample of job performance or school performance. It’s a sample of interview performance, and those are not at all the same thing, we know empirically. Some people, you know, extroverts, are great at interviews and introverts are not so good. But you typically are hiring for skills other than a personality trait. So that would be a typical kind of way that I teach in the book. I mean, here is a principle stated in some highly informal way, and here are some concrete examples, and for many of the things that are taught in the course, I know that this kind of instruction is powerful and affects the way people think. There are things in the book where I don’t know that, but I have a pretty good idea that the principles are sufficiently similar psychologically that everything in the book, I do believe, can have a big impact on the way people think, and that’s the kind of thing that’s necessary for the information age. 300 years ago we didn’t have the kind of information, we didn’t have the folder, that we do now. But people need to be able to collect information, analyze information, analyze arguments based on information, persuade other people based on information, and know how to generate reliable information from assessments or from interventions of various kinds. So you’re not information-age smart if you don’t know the kinds of things that I’m talking about.
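The two numerical claims above, that a small hospital will have many more 60%-boy days than a large one, and that an interview correlating 0.1 with later performance only lifts you from a coin toss to roughly 53%, are easy to check with a quick simulation. This is a minimal sketch, not anything from the episode; the function names and the use of NumPy are my own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# (a) Law of large numbers: expected number of days per year on which 60% or more
#     of births are boys, for a small hospital (~15/day) and a large one (~45/day).
def days_with_60pct_boys(births_per_day, days=365, trials=10_000):
    boys = rng.binomial(n=births_per_day, p=0.5, size=(trials, days))
    return (boys / births_per_day >= 0.60).sum(axis=1).mean()

print("small hospital:", days_with_60pct_boys(15))  # many such days
print("large hospital:", days_with_60pct_boys(45))  # far fewer such days

# (b) If interview scores correlate with later performance at r = 0.1, how often does
#     the candidate who interviews better turn out to be the better performer?
def p_better_hire(r=0.1, pairs=500_000):
    cov = [[1.0, r], [r, 1.0]]
    a = rng.multivariate_normal([0, 0], cov, size=pairs)  # candidate A: (interview, performance)
    b = rng.multivariate_normal([0, 0], cov, size=pairs)  # candidate B: (interview, performance)
    chose_a = a[:, 0] > b[:, 0]                           # hire whoever interviewed better
    hired_better = np.where(chose_a, a[:, 1] > b[:, 1], b[:, 1] > a[:, 1])
    return hired_better.mean()

print("P(hired the better performer):", p_better_hire())  # comes out near 0.53
```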
[0:17:11.9] MB: You know, one of my favorite examples of misunderstanding sample size is an example that Kahneman uses talking about, I believe it’s kidney cancer rates. He kind of starts out with this vignette about how rural counties have the lowest incidence of kidney cancer, and then he asks people to explain, “Okay, why is that the case?” And you know, they think to themselves, “Oh, maybe it’s the fresh air, there’s not as much pollution, they’re spending more time outside, et cetera.” Then he goes, “Okay, rural counties also have the highest rates of kidney cancer,” right? Different rural counties. The highest and lowest rates are both in rural counties, and then people make all these explanations in the same way, when in reality both instances are just statistical artifacts from the fact that these are small sample sizes and so they have bigger outliers in terms of the results for cancer rates.

[0:18:00.9] RN: That’s a great example. That one was new to me; I had not known about it before I read Danny’s book. But let me give you another example of something like that. I tell people a fact: “As you may or may not know, the rookie of the year in baseball, that is, the best first-year player, is rarely the best player the next year. This is sometimes called the sophomore jinx. How would you explain this phenomenon?” People who have never had statistics will always go the causal route, the deterministic route. They will say, “Well, you know, maybe the pitchers make the necessary adjustments, or maybe the guy gets too cocky and he slacks off.” But actually the principle of statistical regression tells us that it’s almost inevitable that the person who’s best in a given year is not going to be best the next year. Think about how that person got to be the best baseball player the first year. Well, certainly by virtue of having a lot of talent, much more talent than the average player, but everything else went right, too: he got just the right coaching, the first three or four games he played he did extremely well, which built his confidence, he got engaged to the girl of his dreams. The next year, the great dice thrower in the sky gave him an elbow injury, so he was out for quite a while, and, sorry to say, his fiancée jilted him. So the point being that around any observation that we make, we’re looking at something that’s been generated by what a measurement theorist would call true score, God’s own understanding of what the facts of the matter are, plus error. There’s always error for absolutely everything. Now, for some things it’s vanishingly small, but there is always error associated with every observation, and that kind of error means that if you roll the dice again for this good baseball player, you’re probably not going to get all aces. Everything’s not going to come up so great for this guy, because a lot of the performance that you’re observing is error. Another example would be: I tell people, I have a friend, she’s a foodie, but she’s discovered that when she goes back to a restaurant where she’s had a really excellent meal, subsequent meals are rarely as good. Why is that? People will give you nothing but deterministic answers for that. They’ll say, “Oh well, maybe the chefs changed, or maybe her expectations got so high that nothing could satisfy them.” This is again another case of regression. I mean, extreme values are relatively rare. If you think of the bell curve, for things that are way out there on the bell, there are not many of them out there. So another way to think about it, to massage people’s intuitions about why you should expect not to get such a great meal at a restaurant where you had a superb one before, think about this: do you think there are more restaurants in the world where you would get an excellent meal every time, or more restaurants where you would get an excellent meal only some of the time? Most people’s intuition is that it’s the second type. There are probably more restaurants where you would get an excellent meal just some of the time.
Well, if that’s the case, it has to be the case that if she has an excellent meal the first time, it’s not likely to be an excellent meal the next time, because she’s probably sampled one of those restaurants where you can only get an excellent meal some of the time. So the regression principle is crucial for understanding all kinds of things around us all the time. Extreme scores are rare. Expect extreme scores to regress to the mean. Think of the mean as some kind of magnet, dragging events from extreme and rare circumstances back to some central tendency, which is less extreme.

[0:21:54.7] MB: On the subject of regression to the mean, one of my favorite mental models for understanding that is from a book called The Success Equation by Michael Mauboussin, and he talks about envisioning that you have two jars, one called luck and one called skill, which I think you would essentially call true score and error. For any outcome, you draw from the skill jar, which is roughly a fixed quantity, and then you draw from the luck jar, which is essentially a random number, and you add them together, and that’s the result that you get. So any great streak is always a combination of tremendous skill with tremendous luck stacked on top of it.

[0:22:28.4] RN: Exactly, yeah. Great way of putting it.
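Mauboussin's two-jar picture maps directly onto a simulation. The sketch below is a hypothetical illustration, not something from the episode: each player's performance is a fixed skill draw plus a fresh luck draw each season, and the season-one champion reliably falls back toward the average the next season, with no slacking off or adjusting pitchers required.

```python
import numpy as np

rng = np.random.default_rng(1)

# Performance = skill + luck. Skill is fixed for each player; luck is redrawn every season.
def rookie_slump(n_players=500, n_leagues=2000):
    champ_year1, champ_year2 = [], []
    for _ in range(n_leagues):
        skill = rng.normal(size=n_players)   # one draw per player from the "skill jar"
        luck1 = rng.normal(size=n_players)   # season-1 draw from the "luck jar"
        luck2 = rng.normal(size=n_players)   # season-2 draw from the "luck jar"
        champ = np.argmax(skill + luck1)     # rookie of the year by season-1 performance
        champ_year1.append(skill[champ] + luck1[champ])
        champ_year2.append(skill[champ] + luck2[champ])
    return np.mean(champ_year1), np.mean(champ_year2)

year1, year2 = rookie_slump()
print(f"champion's season 1: {year1:.2f}")    # far above the league average of 0
print(f"same player, season 2: {year2:.2f}")  # still good, but pulled back toward the mean
```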
[0:22:30.9] MB: Actually, before we move on, to help listeners understand this concept a little bit better: when you talk about deterministic thinking or deterministic answers, can you explain that concept and why it’s not always the appropriate way to think about things?

[0:22:45.5] RN: It’s never wrong to model some situation, to think about what’s going on causally with it. But people who are familiar with statistics won’t give a cause for problems like the restaurant problem or the rookie of the year problem; they won’t go down the causal analysis route. For example, a single statistics course is enough to get people to say, for the rookie of the year problem, “Well, maybe it was partly by chance that he did so well.” That’s right as far as it goes. People who have had two or three statistics courses will say, “Well look, that’s an extreme score, extreme scores are rare, there’s going to be regression back to the mean.” They just never go down the causal route. But if you don’t have the concept of statistical regression, what are you going to do? You don’t have anything else other than causal notions to draw on. A lot of statistical principles are ways of thinking about the world that don’t get you involved in the effortful business of causal analysis at all, because you realize, “Look, this thing has to be true statistically. End of story.” Not that there aren’t — of course there are causal things going on, but you wouldn’t be thinking about those things if you were aware of the regression principle.

[0:24:08.0] MB: One of the other statistical concepts that you talk about, that I’m a big fan of and I think is under-utilized for explaining and understanding reality, is base rates. I’d love to hear your thoughts about that, and maybe you can explain it in a way that listeners can really simply grasp?

[0:24:23.5] RN: Right. Well, we often think about events using only the individuating information about that event, rather than thinking about the event as a type of event for which we may have base rate information that would tell us how to think about that particular case. That’s not a very clear way of putting it, so let me give a concrete example of the importance of using base rates and the kinds of things that can operate as a base rate and should be thought of in that way. If I ask undergraduates, again, who have had no statistics, I tell them, “I want to tell you about somebody. His name is David L. He’s a high school senior, and he’s going to college next year, one of two colleges which are close to his home. One is a state university where he has lots of friends, and those friends like that school very much on both intellectual grounds and personal grounds. The other one is a private college where he also has several friends, and they’re not really crazy about it; they don’t think they’re getting such a great education there and they don’t have that many friends. But David L. goes to visit each of those schools for a day, and he just doesn’t have a good feeling about the state university. I mean, a couple of professors he wanted to talk to give him the brush-off, and some students that he meets just don’t seem to be very interesting. But at the private college, a couple of professors actually take a personal interest in him, and he meets some really interesting kids there. So which place do you think David L. should go to?” You will never find an unwashed freshman who will tell you anything other than, “He’s got to go where his heart tells him to go; he’s not choosing for his friends, he’s choosing for himself.” But there are two things wrong with that. One is sample size. I mean, think about it, you go to a place for a day, that’s a small sample. Just by luck of the draw you get a professor who is rushed and doesn’t have time to talk to you or isn’t interested in you; by the luck of the draw at someplace else, you get a professor who is more willing to. There’s just a lot of randomness in any information you’re going to get from such a small sample. So if you understand the law of large numbers, you’re not going to make that judgment for David L. The other thing that’s important is to understand the base rate, because you can think of his friends’ views of these places, his friends’ experiences, as providing a base rate for the experiences to be expected at each of these schools, and again the law of large numbers plays into understanding why you ought to be paying deep attention to the base rate. They’ve got hundreds or thousands of hours of collective experience at these places, and so you should use that base rate to decide what to do. People will show resistance to that. They’ll say, “Well, you know, you’re asking me to do what other people are doing, but I have my own unique preferences and skills and so on, and I don’t know that I should just slavishly follow what other people are doing.” The social psychologist Dan Gilbert has a great expression. He says, “If you’re like most people, then like most people, you think you’re not like most people, but you are.” The base rates for human beings apply to you for most things. I’ll give an example. I just saw the musical Hamilton. I have yet to hear of anybody who didn’t absolutely love that musical. I say, “I feel with great confidence that you’re going to like that musical, whoever the heck you are.” They’ll say, “Well, I don’t like musicals.” Don’t tell me that, I don’t care whether you like musicals or not. I don’t particularly like musicals and I loved it.
They’ll say, “You know, it’s hip hop music, and I’m not crazy about hip hop music.” Well, I’m certainly not crazy about hip hop music, but I loved that thing. So you just have to pay attention to other people’s experiences, other people’s views, as generating a base rate for what to expect of your own experience, and don’t try to collect little pieces of individuating information about this particular case, like who is starring in the movie. Think about what the base rate of opinion of other people is about that thing.

[0:28:38.2] MB: So essentially, many people get caught up in the trap of thinking only about their own unique situation and trying to gather as much data as they can, when oftentimes, if you would just zoom out and look at, out of everyone who has ever been in this situation, what the predominant outcomes were and at what frequency, you can often make a much better decision.

[0:28:57.3] RN: Yeah, very well put.

[0:28:57.8] MB: As an aside, Hamilton is awesome. I haven’t seen it, but I do love the soundtrack. Anyway, changing gears. We’ve talked a lot about many of the statistical concepts that you lay out in the book that can help people make better decisions. I’d love to dig into some of the other ideas from the scientific method, or how we can apply scientific thinking to be better decision makers.

[0:29:19.4] RN: Great. So you’d like me to just give examples of how we can make use of the experimental method?

[0:29:25.2] MB: Exactly.

[0:29:26.7] RN: Well, first of all, let me say that where it’s most important is public policy matters. On 9/11, 9,000 grief counselors descended on Manhattan to work with people, and they did what seems very reasonable to me. They met with people in small groups, they asked people to tell about their experiences and their emotional reactions, and then they would assure people that their reactions are very common, there’s nothing strange or unusual about them, and in the not too distant future they’re going to be a lot better off. Sounds like a great idea. Except that it isn’t. It actually makes people worse, and there are things that social psychologists have discovered to do for grieving people that make them better. So here’s this massive investment that society has put into something that is not doing any good; it’s costly and it’s doing some harm. Another example would be, 20 years or so ago, a bunch of prisoners in New Jersey decided that maybe they could scare kids off from doing things that would put them in prison. So they brought junior high kids to the prison and told them how horrible it is: the food is terrible, it’s incredibly boring, you get beaten up all the time, sexual attacks, and so on. Again, that sounds like a great idea to me. You have a kid who is at risk for delinquency; I mean, that might make them think twice about it. But in fact it actually makes kids more likely to become delinquents. Now, don’t ask me why. I don’t have an explanation, I don’t have to have an explanation, I just know what the data are. Studies have now been done, good experimental studies that expose some kids to what are called Scared Straight programs and don’t expose others, and on average it seems to increase the likelihood of delinquency by about 13%. One estimate, looking at a meta-analysis of a number of studies, comes out with the conclusion that for every dollar spent on Scared Straight, you incur $400 of cost in terms of crimes committed and paying for incarceration.
Well, let’s take something really big: we’ve had with us for about 50 years the Head Start program. We’ve spent $200 billion on that to this point, and we don’t know whether it does any good or not. We could know; a few million dollars would have told us what kinds of early childhood programs are effective, if society were in a more experimenting mood. We do know that some forms of childcare are effective; they tend to be more ambitious and better carried out than most Head Start situations are. But people just assume that it’s got to be a good idea: you take a bunch of kids in, you show them some intellectual tasks, you get them to cooperate with each other, and probably some version of that’s correct, but we have no idea how close to that ideal the typical Head Start experience comes. So at a societal level, we need vastly more experiments than we’re getting. All of these cases, they’re obvious to people. They’re obvious to me too, but it’s a great burden being a social psychologist, because unlike everybody else, I’m constantly getting my opinions about human behavior contradicted. I mean, I never do an experiment unless I know what’s going to happen. Why would I do an experiment if I didn’t know what was going to happen, or have a pretty good idea of what would occur? I’m not just looking at things randomly. I think this is the way the world is, if I do this, this is what will happen, and half the time I’m wrong. So social psychologists are constantly having their noses rubbed in the fact that their guesses about human behavior, the way we model human behavior, are often way off, and the only remedy for that is to just do the experiment. Then at the individual level, there are all kinds of opportunities for experiments that would be informative. Am I better off if I have coffee in the morning or not? Does coffee make me more efficient, or does it make me more jittery and unpleasant? The only way I’m going to know the answer to that question is by doing a randomized controlled experiment. You come down in the morning and you flip a coin to decide whether you’re going to have coffee or not. Otherwise you’re drinking coffee in a haphazard way: oh, you know, I’m drinking it this morning because my husband made it for me, or I didn’t have it this morning because I was in a rush. So there’s a huge amount of noise that you’re exposing yourself to, and you can get pure signal if you just do the randomized experiment. Same thing for yoga: are you better off with yoga or not? Meditation or not? Flip the coin and meditate today or not. Or meditate for a month and then a month not. Or yoga for six months and no yoga for six months, and see what the answer to the empirical question is. Social psychologists have an expression that they use with each other all the time, and I think it should be an expression that’s at everybody’s disposal much more than it is, and that’s “it’s an empirical question.” I mean, instead of “I tell you my model of the world and you tell me your model of the world” and we’re talking about it, in the end it’s an empirical question. Let’s look it up, or if we can’t look it up, let’s do the experiment, or if we can’t do the experiment, let’s admit that dueling models are not necessarily the way to get any closer to the truth. And when you can do an experiment easily, it’s foolish to just assume that your plausible model allows you to have an opinion about some matter.
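The coffee experiment Nisbett describes can be analyzed with nothing fancier than a permutation test. The sketch below is a hypothetical illustration with made-up numbers, not data from the episode; the point is that the coin flip is what makes the comparison clean.

```python
import random

# Coffee mornings vs. no-coffee mornings, decided by a coin flip each day.
# The outcome numbers below are made up purely for illustration.
coffee_days    = [7, 9, 8, 10, 9, 8, 11, 9]  # e.g., tasks finished on coffee mornings
no_coffee_days = [6, 8, 7, 7, 9, 6, 8, 7]    # tasks finished on no-coffee mornings

def permutation_test(a, b, n_shuffles=20_000, seed=0):
    """How often does label-shuffling alone produce a gap at least as big as the observed one?"""
    rng = random.Random(seed)
    observed = sum(a) / len(a) - sum(b) / len(b)
    pooled = a + b
    hits = 0
    for _ in range(n_shuffles):
        rng.shuffle(pooled)
        fake_a, fake_b = pooled[:len(a)], pooled[len(a):]
        gap = sum(fake_a) / len(fake_a) - sum(fake_b) / len(fake_b)
        if abs(gap) >= abs(observed):
            hits += 1
    return observed, hits / n_shuffles

gap, p = permutation_test(coffee_days, no_coffee_days)
print(f"average gap: {gap:.2f}, p ~ {p:.3f}")
# A small p means the gap is unlikely to be pure noise. The coin flip is what makes this
# interpretation clean: it removes confounds like "I skipped coffee because I was rushed."
```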
[0:35:11.5] MB: I find it so interesting that our intuitions can often be terribly misleading, and in many cases people who haven’t studied psychology or statistics, or any of these methodologies for more deeply understanding how the world works and how the human mind works, just lean on intuition, on “I feel like this is the case, so that seems like what’s true,” and oftentimes they can be completely wrong.

[0:35:38.3] RN: Right. My friend, the social psychologist Lee Ross, has a very important concept that I would say is at the core of anything I would want to say about information-age reasoning, and that is that we have an illusion of objectivity. As I experience the world, I think I’m registering what’s out there, and I’m not, not for anything. Not even for visual things, let alone everything else. What’s being recorded on my retina is not what I am using. That’s not the information I’m using to make a judgment about, for example, distance or depth perception or estimations of size, and it’s easy to show. I mean, perceptual psychologists make a living by showing how easy it is to create illusions and make us make a wrong judgment about some illustration or some physical setup in the world. That’s because our perceptual apparatus is not set up to render what the world is in some actual sense. It’s set up to do what’s useful, so the visual processing centers wildly distort the picture of some object in the service of size constancy. That is, we add a dose of perceptual analysis that allows us to see an object that’s receding into the distance as being the same-size object, even though the way it strikes our retina is very different. Our perceptual apparatus is a very complicated, layered set of mental operations that are designed to give us some correct view of the world. But those same processes can create illusions in some circumstances. So [inaudible] the psychological tools that we use to understand reality are things like schemas, that is, representations of common situations, stereotypes, heuristics, rules of thumb for reasoning, and so on. All of these highly error-prone structures and processes are what we’re using to understand the world. We’re not registering the world, we’re interpreting it. We’re interpreting it, moreover, with structures and processes that we have no awareness of. So I think it’s helpful in all kinds of ways to recognize that we do have an illusion of objectivity, or what philosophers call naïve realism. If you understand that, it’s useful for humility. I probably shouldn’t be nearly as sure of my understanding of the world as I am most of the time, because I’m using processes which can often lead me astray.

[0:38:36.8] MB: Changing gears a little bit, I’d love to talk about the fundamental attribution error and some of the work you’ve done on how situations versus personalities impact people’s behavior.

[0:38:48.8] RN: Right. Well, there’s a story that goes back to 1968 and the publication of a book by Walter Mischel. He’s the marshmallow guy that everybody knows about, and the book was about the power of assessments of personality traits to predict behavior. His generalization was that if you’re trying to predict behavior in one situation by virtue of knowing about behavior in some other situation, which could be described by the same trait, your correlation’s going to run about 0.1.
That is, it’s a trivial gain in accuracy in knowing how honest someone is going to be, or how conscientious they’re going to be, or how extroverted they’re going to be. You can do better than that if you have a very good personality instrument, a questionnaire, or reputation. Base rate, in other words, comes from knowing a lot about many past experiences and applying that base rate to the particular circumstance that you’re looking at. Those correlations can go as high as about 0.3. Still not too impressive. That doesn’t mean that people don’t have personalities or that personalities don’t affect their behavior. They do, but you have to have a heck of a lot of information, and you’re predicting a heck of a lot of information. It takes lots and lots of observations to predict a battery of other observations. There you can get up to predictability of 0.8, 0.85. Now, why is that? Why is it that the predictability from one situation to another is so poor? Well, it has to do with error of various kinds. Why did Joe give money to the United Fund? I say, “Well, he’s a generous guy.” Well, actually, his department chairman was going to know whether he gave money to the United Fund or not, so he gave it. Why did Bill not give money? Well, because it happens that there’s one particular program of the United Fund that he’s very much opposed to. Not that he’s ungenerous or uncharitable. So situations are normally responsible for behavior to a much greater extent than we recognize, and personality traits, or other dispositions like skills or attitudes or needs, are often contributing very little. I mean, the situation’s driving the bus, for most behavior, most of the time. So this was a bombshell, actually, because he was able to show that nobody’s clinical assessments or personality trait assessments were very accurate in predicting behavior. Some things — this wasn’t his original contribution, but it all went into his book. Some things that clinicians thought were predictive were absolutely useless. The draw-a-person test predicts nothing, you know? Clinicians were thinking to themselves, “Well, this person draws a person with funny eyes, that guy could be paranoid. Or draws somebody with a big head, well, I may have worries about his intelligence. Or somebody draws a person with sexual organs, that person, there are maybe some sexual adjustment issues.” All of which undergraduates who have no clinical training at all will see in data even though it’s not there. You build the data set so that none of these things are true, but that’s what they’ll see. “Oh, funny eyes, paranoia. I see.” We’re just not that good at covariation detection. Actually, we’re shockingly bad at most kinds of covariation detection, which is strange given how very good we are at pattern detection. If there’s a pattern out there in the world, we can’t not see it. But if there’s a correlation of a given kind, for most of the kinds of things, important things even, that we really would like to have an accurate idea of, it’s just very hard to detect. Those judgments are primarily determined by what the clinical psychologist Seligman, I can’t recall his first name offhand, showed and called “preparedness.” We’re prepared to see some kinds of associations and we’re counter-prepared for others. We’re prepared to see these associations. The same thing is true for the Rorschach test.
The Rorschach test was given to hundreds of thousands of people, costing untold millions of dollars to do these assessments. What is it that people see in these ink blots, and what does that predict? For decades no one ever bothered to do the experiment, or to do the systematic observation, to ask, “Well, how well do these Rorschach signs do in predicting behavior?” And it turns out the Rorschach is virtually useless. There are one or two little things that it can predict, but it’s virtually useless. So we see a behavior in one situation and we sort of take it for granted that we’ve learned something about a person’s personality traits, and it’s easy to show, and there are dozens of demonstrations and experiments showing it, that we are way overconfident in our judgments about personality from looking at just one or two or three situations in which we’ve observed behavior. There’s a law of large numbers issue here too. One behavior is not a very large sample, but we don’t realize that; there are few arenas where we’re aware of the uncertainty of any observation. Interestingly, sports is an exception to that. People are really well calibrated on how much you can predict, let’s say, a basketball score at a particular game from basketball scores at another game, or how well you can predict spelling test performance by elementary school kids by virtue of knowing another spelling test performance. For the abilities we’ve looked at, those correlations tend to run about 0.5. From a serious, good observation, one game or one test, they tend to run at about 0.5. So they’re informative, but they’re certainly not the whole story. People with any knowledge of sports understand this perfectly well; it’s captured beautifully by the idea that on any given Sunday, any team in the NFL can defeat any other team in the NFL. That’s how much of a role luck, or error, plays in any given sports outcome. Despite the fact that people are quite good at understanding how well you can predict an event from another event, or a set of events from another large set of events, that doesn’t carry over at all onto our understanding of personality-trait-related behavior. You can show that people are horrendously miscalibrated about how much information they think they’ve gotten from observing a person in one particular situation.
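One way to see how per-situation correlations of about 0.1 can become 0.8 or so for large batteries of observations is the Spearman-Brown formula, which is my addition here rather than something named in the episode: averaging many noisy observations washes out the error term, so aggregates predict other aggregates far better than single situations predict each other.

```python
# Spearman-Brown-style aggregation (my addition, not a formula named in the episode):
# if any single pair of observations correlates at r, two averages of n observations
# each are expected to correlate much more strongly.
def aggregate_correlation(r_single: float, n_observations: int) -> float:
    return n_observations * r_single / (1 + (n_observations - 1) * r_single)

for n in (1, 5, 10, 20, 40):
    print(f"{n:>2} observations per side -> r ~ {aggregate_correlation(0.1, n):.2f}")
# 1 -> 0.10, 10 -> 0.53, 40 -> 0.82: single situations predict poorly, but large batteries
# of observations predict other large batteries at roughly the 0.8-0.85 level Nisbett cites.
```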
[0:45:50.8] MB: So obviously we’ve talked about the book, and for people who want to dig in and really understand a lot of these kinds of mental models or frameworks much more deeply, that’s a great place to start. What would some other resources be that you’d recommend listeners check out if they want to dig into some of these topics?

[0:46:05.7] RN: Well, I think Silver’s The Signal and the Noise. It’s about statistical concepts, and it’s a beautiful information-age book. I mean, it tells you how you need to think about information that you haven’t collected yourself, that somebody else has collected: how to make use of it and how to avoid making errors in interpreting it. There’s another lovely book by a mathematician called How Not to Be Wrong, and incidentally, he deals with the law of large numbers at length in his book, just like I do in my book. It’s very similar. I was kind of surprised that a mathematician would be thinking about so many everyday life situations in terms of the law of large numbers and have so many beautiful concrete examples of how we have to think, given that all of our observations have errors surrounding them. I was surprised because I don’t see statisticians doing that sort of thing. For somebody who really wants to get serious about inferential rules in a very systematic way, with formal definitions, I would recommend a book by Diane Halpern called Thought and Knowledge. It just marches you through; it’s similar to my book in a way, although she spends time on things that I don’t spend much time on. She talks a fair amount about some logical principles and some logic schemas, where I think that formal stuff is not actually something that people can make that much use of. But some people would like to know about it anyway, because some people are in jobs which sometimes require certain kinds of logical formulations, and it can be interesting, it can be fun, to look at that stuff. She covers a lot of territory in that book, which is basically a critical thinking text; that’s what it’s intended for. There’s a lot of good stuff in there. So, you know, there is plenty, and of course there is Danny Kahneman’s book, which is a near relative of my book. The title there, of course, is Thinking, Fast and Slow.

[0:48:04.8] MB: Yup, great book. Huge fan of that book and Daniel Kahneman. So, where can people find you and your book online?

[0:48:12.3] RN: Well, it’s on Amazon, and it’s in various versions: print, Kindle, and Audible.

[0:48:19.4] MB: Great. Richard, thank you so much for being on the show. It’s been a fascinating conversation. We really explained a lot of these concepts that can seem kind of daunting at first but are really critical components of building a deep understanding of how the world works, how your mind works, and how we can make better decisions. So, thank you so much for being a guest on the Science of Success. We’ve really enjoyed having you on here.

[0:48:42.7] RN: Thank you.

[00:48:42.2] MB: Thank you so much for listening to the Science of Success. Listeners like you are why we do this podcast. The emails and stories we receive from listeners around the globe bring us joy and fuel our mission to unleash human potential. I would love to hear from listeners. If you want to reach out, share your story, or just say hi, shoot me an email. My email address is matt@scienceofsuccess.co. I would love to hear from you, and I read and respond to every listener email. The greatest compliment you can give us is a referral to a friend, either live or online. If you’ve enjoyed this episode, please leave us an awesome review and subscribe on iTunes. That helps more and more people discover the Science of Success. I get a ton of listeners asking, “Matt, how do you organize and remember all this information?” Because of that, we created an amazing free guide for all of our listeners. It’s called How to Organize and Remember Everything. You can get it for free by texting the word “smarter” to the number 44222 or by going to scienceofsuccess.co and joining our email list. If you want to get all the incredible information, links, transcripts, everything we talked about in this episode and much more, be sure to check out our show notes. Go to scienceofsuccess.co, hit the show notes button at the top. We’re going to have everything that we talked about on this episode.
If there was a previous episode that you loved, you can get the show notes for every episode that we’ve done. Just go to scienceofsuccess.co, hit the show notes button at the top, and you can find everything. Thanks again, and we’ll see you on the next episode of the Science of Success.