Self Help For Smart People - How You Can Spot Bad Science & Decode Scientific Studies with Dr. Brian Nosek
In this episode, we show how you can decode scientific studies and spot bad science by digging deep into the tools and skills you need to be an educated consumer of scientific information. Are you tired of seeing seemingly outrageous studies published in the news, only to see the exact opposite published a week later? What makes scientific research useful and valid? How can you, as a non-scientist, read and understand scientific information in a simple and straightforward way that helps you get closer to the truth - and apply those lessons to your life? We discuss this and much more with Dr. Brian Nosek.
Dr. Brian Nosek is the co-founder and Executive Director of the Center for Open Science and a professor of psychology at the University of Virginia. Brian led the Reproducibility Project, coordinating some 270 of his peers in an effort to replicate 100 published psychology studies and see whether the original results held up. This work shed light on publication bias in the science of psychology and much more.
Does the science show that extrasensory perception is real?
Is there something wrong with the rules of the science or the way that we conduct science?
What makes academic research publishable is not the same thing as what makes academic research accurate
Publication is the currency of advancement in science
Novel, positive, clean
What does “Null Hypothesis Significance Testing” / a p-value less than .05 even mean?
If there were truly no relationship, you would observe evidence this strong less than 5% of the time
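To make the p-value idea concrete, here is a minimal simulation (not from the episode; all numbers are illustrative). It runs many "studies" comparing two groups drawn from the same distribution - so there is no real effect - and counts how often chance alone produces a "significant" result at the 5% threshold.

```python
import random

random.seed(42)

def fake_study(n=30):
    """Two groups drawn from the SAME distribution (no real effect)."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    mean_diff = sum(a) / n - sum(b) / n
    # Crude z-test: is the difference beyond ~1.96 standard errors?
    se = (2 / n) ** 0.5
    return abs(mean_diff) > 1.96 * se

trials = 10_000
false_positives = sum(fake_study() for _ in range(trials))
print(f"'Significant' results with no true effect: {false_positives / trials:.1%}")
```

With a 5% cutoff, roughly one in twenty null studies still comes out "significant" - which is exactly why a single surprising headline result deserves skepticism.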
The incentives for scientific publishing often skew, even without conscious intent by scientists, towards only publishing studies that support their hypothesis and conclusions
The conclusions of many scientific studies may not be reproducible and may, in fact, be wrong
How the reasoning challenges and biases of human thinking skew scientific results and create false conclusions
Confirmation bias
Outcome bias
“The Reproducibility Project” in psychology
Took a sample of 100 studies
Across those 100 studies, the original evidence was successfully reproduced only about 40% of the time
On average, the replication effect sizes were about 50% of the originals
“Effect Sizes” - how strong was the effect of the studied phenomenon
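A common way to quantify "how strong was the effect" is Cohen's d, the difference between two group means divided by their pooled standard deviation. This sketch uses made-up data purely for demonstration:

```python
import statistics

# Invented example data - for illustration only.
treatment = [5.1, 6.2, 5.8, 6.5, 5.9, 6.1, 5.4, 6.3]
control   = [4.8, 5.2, 5.0, 5.5, 4.9, 5.3, 5.1, 5.4]

def cohens_d(x, y):
    """Standardized mean difference using a pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * statistics.variance(x) +
                  (ny - 1) * statistics.variance(y)) / (nx + ny - 2)
    return (statistics.mean(x) - statistics.mean(y)) / pooled_var ** 0.5

d = cohens_d(treatment, control)
print(f"Cohen's d = {d:.2f}")
```

A replication that found an effect around half this size would mirror the kind of shrinkage the Reproducibility Project observed between original studies and their replications.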
The real challenge is that it's extremely hard to find definitive evidence of why a replication succeeds or fails
Science about science is a process of uncertainty reduction
What The Reproducibility Project spawned was not a conclusion, but a QUESTION
The scientific method is about testing our assumptions of reality with models, and recognizing that our models of the world will be wrong in some way
The way science makes progress is by finding the imperfections in our models of reality
How do we as lay consumers determine if something is scientifically valid or not?
How do we as individuals learn to consume and understand scientific information?
How can we be smarter consumers of scientific literature?
We discuss the basic keys to understanding, reading, and consuming scientific studies as a non-scientist and ask how do we determine the quality of evidence?
Watch out for any DEFINITIVE conclusions
The sample size is very important, the larger the better
Aggregation of evidence is better - “hundreds of studies show”
Meta-studies / meta-analysis are important and typically more credible
Look up the original paper
Is there doubt expressed in the story/report about the data? (how could the evidence be wrong, what needs to be proven next, etc)
What is a meta-study, and why should you look out for them when judging whether scientific data is valid? Still, meta-analyses carry risks of their own
Valid scientific research often isn’t newsworthy - it takes lots of time to reach valid scientific conclusions
It’s not just about the OUTCOME of a scientific study - the confidence in those outcomes is dependent on the PROCESS
By confronting our own ideas/models of reality, our understanding of the world gets stronger and moves towards the Truth
Where do we go from here as both individuals and scientists? How can we do better?
Transparency is key
Preregistration - commit to a design
The powerful tool of “pre-registration” and how you can use it to improve your own thinking and decision-making
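The spirit of preregistration can be illustrated with a toy sketch (the field names and date below are invented): write down your hypothesis and analysis plan before seeing any data, then fingerprint it so it can't be quietly revised after the results come in.

```python
import hashlib
import json

# Hypothetical preregistration plan - all values are made up.
plan = {
    "hypothesis": "Group A scores higher than group B",
    "primary_outcome": "mean score difference",
    "sample_size": 200,
    "analysis": "two-sample t-test, alpha = 0.05, two-tailed",
    "registered_at": "2024-01-15",
}

# Hash the serialized plan so any later change would be detectable.
serialized = json.dumps(plan, sort_keys=True)
fingerprint = hashlib.sha256(serialized.encode()).hexdigest()
print(f"Plan fingerprint: {fingerprint[:16]}...")
```

Later, anyone can re-hash the published plan and confirm it matches what was committed before the data arrived - the same discipline works for personal decisions: write down your prediction and criteria first, then check the outcome against them.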
As individuals trying to make evidence-based / science-driven decisions in light of these findings, how can we apply these lessons to ourselves?
Homework - deliberately seek out people who disagree with you, build a “team of rivals”
Thank you so much for listening!
Please SUBSCRIBE and LEAVE US A REVIEW on iTunes! (Click here for instructions on how to do that).
This week's episode is brought to you by our partners at Brilliant! Brilliant is a math and science enrichment learning platform. Learn concepts by solving fascinating, challenging problems. Brilliant explores probability, computer science, machine learning, physics of the everyday, complex algebra, and much more. Dive into an addictive interactive experience enjoyed by over 5 million students, professionals, and enthusiasts around the world.
You can get started for free right now!
If you enjoy learning these incredibly important skills, Brilliant is offering THE FIRST 200 Science of Success listeners 20% off their Annual Premium Subscription. Simply go to brilliant.org/scienceofsuccess to claim your discount!
Show Notes, Links, & Research
[Wiki Article] Reproducibility Project
[Research Article] Estimating the reproducibility of psychological science
[Study] Investigating Variation in Replicability: A “Many Labs” Replication Project
[Wiki Pages] Investigating Variation in Replicability: A “Many Labs” Replication Project
[Article] How Reliable Are Psychology Studies? By Ed Yong
[Podcast] Planet Money - Episode 677: The Experiment Experiment