Fresh Lens Podcast

Science Fictions

February 09, 2022 Hirad Motamed & Patricia Veinott Season 1 Episode 12

In our modern world, science is pegged as our guide to understanding what is true. But while the scientific method is the best way we have found to make sense of the world, scientists and scientific institutions have enormous shortcomings that have come to the fore in recent years.

In this episode, we discuss Stuart Ritchie's book, Science Fictions. The book enumerates the different ways in which science can go wrong, the replication crisis, and what we may be able to do to get it back on track. To top it off, Hirad draws some outlandish conclusions that Trish vehemently disagrees with.


Get the book on Amazon here.

[00:00:00] **Hirad:** So Trish, I have been foolish. I've been getting into arguments with people on Facebook. And I think these arguments are a very good lead-in to the book that we've been reading for a very long time now. It's been a while since we did a proper book episode. So I have been very inspired by and supportive of the Canadian truckers that have been going to Ottawa to protest the COVID measures that we've been living with for almost two years.

And of course this is a slightly polarizing topic. I've been very vocal about my support

[00:00:38] **Trish:** right in there, off the bat. Here we go.

[00:00:41] **Hirad:** So let's talk about the important stuff, right? So I've been very vocal about my support, for these guys. And naturally that has led to a few people trying to make their opposition to it clear.

And I think one of the themes that emerged from my conversations with my social network has been this sense that people are so trusting of science that they just can't fathom why anyone would have a second guess when, like, the government says you should be taking these vaccines or we need to have such and such a measure.

The common theme that kind of comes up is, well, this is science. You can't really be arguing against it because it's just factual. Right.

[00:01:24] **Trish:** facts are facts.

[00:01:25] **Hirad:** And so I think that kind of sets a nice stage for the book that we've been reading for for the last couple of weeks.

[00:01:32] **Trish:** Yeah. I think we alluded to this book in an earlier episode.

It is called Science Fictions and it's by Stuart Ritchie, who is a psychologist from King's College London. Originally I became familiar with Stuart Ritchie through another really readable, really excellent book of his called Intelligence: All That Matters. And for a while, when one of our book clubs was doing a deep dive into intelligence research and sort of nature-versus-nurture stuff, I read it.

And it was just such a great primer on here's what we know, here's what we don't know, kind of where everything stands. Anyway, I just wanted to give a short plug for that book because it's really readable and not very long, and if anyone's curious about what we do and don't know about intelligence, it's excellent. But his new book, Science Fictions, was just released in paperback this fall.

[00:02:26] **Hirad:** And so yeah, let's talk about why this book is so timely.

My impression from talking to a number of people who are very polarized today about the state of our society, with regards to say COVID vaccination and vaccine hesitancy, is that there is a sense of certainty and faith in science that they have, and it makes it very hard for them to understand why some people may have second thoughts about things like vaccine mandates or even the vaccines themselves.

Right. And it's not so much that we want to get into the details of COVID-19 vaccines and the science behind that. If you're interested in that, listeners, there's an episode right before this one that you can go listen to. But I think it's the sense of having science as a place of certainty that we want to talk about in the context of

[00:03:24] **Trish:** Yeah. I hope a lot of this will continue to build on our exploration of the epistemic crisis: how we conduct knowledge-making, how we figure out what is true, how we approach this quest for knowledge that humans are on.

[00:03:44] **Hirad:** Totally.

[00:03:46] **Trish:** So I just want to get one quick thing out of the way. I think that you could take what we're about to say in a couple of different ways. And one is that you can't trust anyone, scientists don't know what they're doing, you can kind of throw up your hands and be like, the whole system's rotten and we're just going to go with our gut and believe whatever we want to believe.

And that, to be clear, is very much not what we're going to say. The point of this book, and I think why Ritchie wrote it, is not to erode trust in the institution but to strengthen it. So Ritchie's like, listen, science and the liberal knowledge-making system is one of the best systems we've ever come up with as humans.

But there are problems, and the idea is that we can make it better.

[00:04:37] **Hirad:** I think so. What he's trying to get at in this book is that there are some really, really huge issues in the way that science is practiced. Science as we talk about it on paper is great, but the practice and that kind of naive description have diverged.

And what this book is about is the ways in which they've diverged, and the factors that have caused that divergence.

[00:05:03] **Trish:** Yeah, and how to fix it. This is one of these great books that actually has some solutions, some ideas for solutions. It's not just, here are all the problems and wow, what a mess we're in. So yeah, science is a highly idealistic system that's being undertaken by humans who are generally more practical and might have other motivations than just the pure pursuit of knowledge.

[00:05:25] **Hirad:** So let's get into it.

Yeah. So to begin with, let's talk about how science is generally done, right?

There are a few steps that, to put it very simply, every scientist, every researcher goes through in order to get a new finding out into the literature. So the first step is kind of knowing what is out there, what the current state of knowledge is. Then you come up with some kind of a hypothesis that you want to test.

Some new addition that you're going to make to our state of knowledge. And then the next step is you've got to start writing grant applications, and this by far is the most grueling part of a scientist's job. They spend an inordinate amount of their time trying to get funding for the research that they're about to do.

Once that funding is hopefully secured, then they start with collecting data, which, depending on the complexity of the topic, can take anywhere from days to decades. And once they're done with the data collection, they start doing some analysis on the data, so there's a lot of statistics involved.

And then that analysis will lead to some results, which they will write up in a paper and submit to a journal. Now what's important to note here is these journals, these peer-reviewed journals. A lot of people really think that that's kind of the key part of science, but these things weren't really around before the seventies; the 1970s was when we really started leaning into peer review.

And there are a lot of problems with peer review, which we'll get into. But I just wanted to note that peer review is not necessarily essential to this process, but

[00:07:05] **Trish:** it did happen, it just happened less formally.

[00:07:08] **Hirad:** Yes, exactly. Yep. So we've got these peer-reviewed journals. There are a lot of them out there.

There are different degrees of prestige; some of them are very niche, some of them are very broad. So the scientist will write up their findings and submit it to one of these guys, and then the editor there will oftentimes just reject it right away for one reason or another. But if they accept it, then it'll go through the peer review process, where the editor will find a couple of different experts in the field and reach out to them to try to get them to review this new submission.

And the thing is, there really isn't much in it for the reviewers in this process. So usually these editors have to go through a long list of potential reviewers before they find two or three that are willing to take on the job.

[00:07:56] **Trish:** And you're not paid for this.

[00:07:57] **Hirad:** You're not paid for this; it's time-consuming, it's laborious.

So it's not fun. And once the peer reviewers give their input, there will be some cycles of revision, and then hopefully the paper is ready for publication and it will go out in a journal. So that, in a nutshell, is how science is done.

[00:08:16] **Trish:** And then the scientist gets to put another bullet point on their CV: that they got published in such and such journal.

[00:08:23] **Hirad:** Right. And so there are two key things for the scientist as a result of this: one is the fact that they got the paper published, and two, over time, that the paper gets cited a lot by the authors of other papers.

[00:08:39] **Trish:** Yeah, the idea is you added your brick to the wall of knowledge that humans are building.

The system kind of relies on an underlying integrity of values that were nicely codified by a guy named Robert Merton, so they're known as Mertonian norms. The first of which is universalism: knowledge is knowledge. If the methods are sound, it doesn't matter who it's coming from, "who" meaning age, gender, race, socioeconomic status, you know, and also maybe bad things about their personality.

Like if they're a misogynist or racist or whatever; the only characteristic of a person that you have to care about is whether they're a liar. If they did a good study with good methodology and came out with some results, that should be good. It doesn't matter who it's coming from. The second is disinterestedness: there should be no motivations except for the pure quest for knowledge; it shouldn't be motivated by money, status, politics, et cetera.

Then communality: scientists should share knowledge freely, so you should know what's happening out there, what other people are working on, what their findings are. And organized skepticism: every claim should be tested and examined, and nothing should be taken as gospel without rigorous confirmation. So those are kind of the values of science.

[00:10:08] **Hirad:** Yep.

And so in this kind of description of how science is done, we've identified a few different actors, right? So let's get explicit about them. We've got the researcher, who's actually proposing a hypothesis and then collecting data and trying to confirm or disconfirm it. We've got the journal editors, we've got the reviewers,

[00:10:29] **Trish:** the grants people, who are giving grants

[00:10:32] **Hirad:** got the funders and we have research institutions that kind of provide some of the infrastructure for the researchers to do their jobs

[00:10:40] **Trish:** employing them

[00:10:41] **Hirad:** who are employing them.

Correct. 

So there are five different stakeholders in this picture.

All of them have a vested interest in having high-impact findings that they have participated in creating, right? So for example, the researcher obviously wants to be the one known for making a great discovery.

The journals want to be known as the journals of record, where great discoveries are published. The funders want, you know, the prestige of knowing that they have funded wonderful science, and the same thing with the institutions. So all of their incentives point in this direction of: we need to publish results, and we need to publish the most amazing results, the best, the most groundbreaking, the most high-impact findings.

Right? And this is kind of the root of the problem because as Richie notes, there's something called Goodhart's law, which says when a measure becomes a target, it ceases to be a good measure. In this case when the number of publications and the number of citations becomes a thing that people are aiming for, they are going to game the system to get the highest number of publications and the highest number of citations.

Ritchie basically talks about the four ways in which people do this, and we are now going to talk about those ways.

[00:12:04] **Trish:** yeah, guys, this book is wildly entertaining with the anecdotes of how things have gone wrong. Like it is worth reading this book just for the anecdotes. And we'll get into some of them, maybe a couple of our favorites, but we're going to try and push through.

Yeah.

So the faults and flaws, the five of them, they were hype

[00:12:24] **Hirad:** There's four. Well, I dunno, was perverse incentives one of them? That's just the underlying thing.

Yeah. There's fraud, negligence, bias, and hype. And so let's talk about fraud.

So let's say you're in a hyper-competitive, cutthroat environment, and what you're judged by is your scientific output: how many papers you put out, how crazy your findings are, how amazing your findings are. For a certain kind of psychopath, the answer to optimizing for this high output is to cheat.

It's very simple. So Ritchie recounts instances where people have sat at their dinner table and just produced their data, literally just made it up

[00:13:13] **Trish:** in order to have the outcome that they wanted. He talks about a doctor who performed all these horrible surgeries and published papers that were patently false.

It's a salacious anecdote about a surgeon called Paolo Macchiarini. Those of you who are avid podcast listeners will probably recognize him from season three of a podcast called Dr. Death. And I mean, that really says it all: the subject of a podcast called Dr. Death. He was basically publishing papers with falsified information about tracheal implants, then performing surgeries which killed a bunch of people, and publishing the results of those surgeries as if they were successful in big journals like the Lancet. He was at the Karolinska Institute in Sweden, performing a lot of these surgeries, and they give out the Nobel Prize.

So this is a very, very high level surgeon at the most prestigious institution, getting his findings published in the most prestigious journals. And he was basically butchering people. And the only reason he ended up getting caught is because he was dating this NBC producer and was supposed to get married to her.

But she found out he had a secret wife and kids, and started to do more digging into what else he was lying about. So it's actually a really good podcast to listen to, but

[00:14:43] **Hirad:** So this guy was a celebrity in his field, and he was publishing all these papers after every surgery talking about how amazing it went.

And in some cases, the patients were dying before the paper was even accepted for publication, and he would just leave out that information. He would just leave out that adverse outcome. Sometimes he would have one sentence in the paper that would say there were some complications to the surgery.

He wouldn't specify that the complication was death, and he'd move on. And notably, when a group of frontline healthcare workers who were responsible for taking care of his patients tried to blow the whistle on what he was actually doing, the Karolinska Institute threatened their careers, silenced them, and protected the celebrity doctor.

[00:15:31] **Trish:** And I think that they ruined these whistleblowers' careers. Like, I don't think they've ever really managed to come back from it.

[00:15:37] **Hirad:** Right.

[00:15:38] **Trish:** Yeah. So that's just one really sad example, but there are a lot of them. There are a lot of people who, like you were saying, made up stuff out of thin air, who would Photoshop results of slides for cell cultures and stuff.

[00:15:55] **Hirad:** Yeah, people have published results for things that they've never done. Image manipulation was a big theme. And then, yeah, my favorite one was Diederik Stapel, who literally just sat at his dinner table and produced the data that he wanted to have for his study.

And what was notable is Ritchie cites this one anonymous survey of scientists where 2% of them admitted to committing fraud at least once. And so you can only imagine that the real number is much higher than

[00:16:25] **Trish:** well, and it was interesting because then they also asked how many of your colleagues do you think have falsified data at some point?

And then the number came out to be 14% or something. So I think it's easy where everyone's like, well, I never falsified data, but I definitely think some of my colleagues have, which is interesting. And I mean, there is definitely a contingent of people who are psychopaths and just want to get their own way.

But I think there's almost a more insidious thing that happens, which is that people think they're right. They're like, my study wasn't going right, but I'm so sure this is true; someone else's study that was similar got these results. And so Ritchie is a little bit sympathetic to saying that

sometimes he thinks that good people... okay, not good people, but that

[00:17:17] **Hirad:** they're not all psychopaths like Paolo Macchiarini,

[00:17:19] **Trish:** Yeah. They're not all psychopaths; they kind of feel like the rule-breaking is a necessary evil in order to bring the attention of the world to this problem, or to push forward a narrative that they think is true or right.

And you know, you're really emotionally invested in this research, this is your entire career, so you can understand how people can make bad decisions.

[00:17:43] **Hirad:** Yeah. So fraud is only the most egregious way in which science goes wrong, and it takes especially morally questionable people like Paolo Macchiarini to commit it at these really egregious scales.

But there are more subtle forces at play as well. One of which is bias, which can be conscious or subconscious, and through which scientists can subtly nudge their data in the direction that they want it to go.

[00:18:13] **Trish:** The layers of bias go from the very bottom to the top. So you could start with, well, my results are a little bit messy, but I'm pretty sure that this one data point should be excluded for whatever reason, or, you know, I think that there was a problem here.

Maybe I'm just gonna massage the numbers a little bit to make my point more convincing,

Or maybe I didn't get the result I wanted, so I'm going to change my question a little bit, run the numbers again, and see if with this new question I get better or more compelling results. And that's what's known as data mining.

[00:18:52] **Hirad:** Yup. So there's data mining and there's p-hacking.

These are two of the most common tricks used. Essentially, we don't want to get into the details of the statistics here, but these are when you collect a bunch of data on something, you don't get the result that you want, and then you just keep massaging the data to make it look like it shows something.

[00:19:16] **Trish:** So a p-value is basically: what are the odds that I would get a result like this if it was totally just random chance? Because, you know, this is how statistics works: you take a small sample of the population and you're trying to extend that finding to the broader population.

So there's a target to be hit, which is 0.05, which means that there's less than a 5% chance I would have gotten a result like this if the effect were just random chance and not actually real.

[00:19:52] **Hirad:** And that's what scientists are generally aiming for: when they say they're trying to get a p-value of less than 0.05, it means they're aiming for less than a 5% chance that the effect they're seeing is just random noise. Right. But what happens with p-hacking is this:

They manipulate the data so that they get the p-value below 0.05, but what they're actually looking at is still just random noise. And if they were completely transparent about the way they collected the data and the way they analyzed it, anyone would see that that's actually what's happening, except they aren't that transparent about it.

And they might actually think that what they're doing is, you know, good study design or whatever. And so a lot of these studies that have p-values of less than 0.05 are actually just p-hacked studies; they're looking at random noise and their effects don't mean anything.
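
To make the mechanics being described here a bit more concrete, here is a minimal Python sketch, not from the book; the two-group setup, the twenty outcomes, and all variable names are illustrative assumptions. It shows how testing enough outcomes on pure noise will eventually hand you a p-value below 0.05 by chance alone:

```python
# A toy demonstration of p-hacking via multiple comparisons (illustrative only).
# We compare two groups that are, by construction, identical: pure random noise.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_per_group = 30
n_outcomes = 20  # imagine measuring 20 different outcomes in one study

false_positives = []
for outcome in range(n_outcomes):
    group_a = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    group_b = rng.normal(loc=0.0, scale=1.0, size=n_per_group)  # same distribution: no real effect
    _, p = stats.ttest_ind(group_a, group_b)
    if p < 0.05:
        false_positives.append((outcome, round(p, 3)))

# With 20 independent looks at noise, roughly one "significant" result is expected by chance.
# Report only the hits (and stay quiet about the other tests) and you have a p-hacked finding.
print(f"'Significant' outcomes found in pure noise: {false_positives}")
```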

[00:20:45] **Trish:** Yeah. It's kind of become this arbitrary target where you're just aiming for that number, and it's become less about making a good study, so that you're sure you've honed in on something true, than about getting this one number out of your statistics.

[00:20:58] **Hirad:** And again, this is very much like the publication bias we were talking about, because a p-value of 0.05 or less has become such a convention in scientific circles that people are just trying to get that number below 0.05 using whatever dirty trick they can, forgetting that the number on its own doesn't really say anything about how solid their study is.

So you could have a study with a p-value of 0.08, and that could be a much more solid study than a study with a p-value of 0.03, depending on a lot of other variables. So on the note of p-hacking:

in one poll of 2,000 psychologists, which covered a range of different p-hacking practices, apparently approximately 65% of them admitted to engaging in at least some of these practices.

[00:21:52] **Trish:** Well, I think some people didn't even realize it was wrong.

[00:21:55] **Hirad:** Yeah. So my favorite example is that of a guy named Brian Wansink. This guy was a researcher in consumer behavior, and by now a number of his papers have been retracted. But the way he got caught with his p-hacking practices was that he published a blog post where he said that the students in his lab analyzed a bunch of data.

They were doing a study and they found null results; basically, their hypothesis didn't pan out. And so he instructed his students to keep massaging the data until they got something with a p-value of less than 0.05. And he told the world about this in a blog post, that he told his students to do

[00:22:41] **Trish:** this is how you run a successful study.

Yeah.

[00:22:44] **Hirad:** That's how oblivious he was to the fact that p-hacking is wrong. And when people saw that, they started going through many of his papers, and over time they found a number of different issues with work he had published in the past. Eventually he ended up resigning. So this is actually a theme throughout this book: as I was reading it, it just seemed like scientists don't actually know statistics.

[00:23:07] **Trish:** Yeah. I mean, I get it. Statistics are real hard. I took one university-level statistics course; it wasn't easy. But I also understand how, especially in the social sciences, you can get so far in your career without understanding it.

Because I kid you not, the first semester of statistics for social science I took had no numbers in it, Hirad. Like, zero numbers. It was literally all about how to design survey questions that wouldn't have some bias or wouldn't be leading questions, in order to try and be as neutral as possible. Which is important, but also kind of not

[00:23:45] **Hirad:** no numbers,

[00:23:47] **Trish:** literally no numbers

[00:23:49] **Hirad:** Listeners, you can't see me, but my palm is on my forehead right now.

[00:23:54] **Trish:** Yeah. So that's a big problem. I also just wanted to touch back on data mining for a minute, because I think it's really unintuitive why it's such a bad practice. Like, you'd think that if you have all this data, then why shouldn't you look to see what's in it?

Why shouldn't you test a bunch of different hypotheses? Wouldn't that be efficient, and just a good use of resources? And Ritchie has a really good example that helps explain why it's problematic. So let's say you have a bag of coins and you suspect that they're weighted so that they will come up heads more than tails.

So you take the first coin and you flip it and you get heads five times in a row. You'd be like, wow, that is pretty good evidence that this hypothesis is true; the odds of getting five heads in a row are quite low. Well, let's say you did that, but with the first coin you got heads twice and tails three times. You could say, well, in that case I was just checking to see that the coin did indeed have a heads side and a tails

side. And then the second time you do it, you get some other result, and you're like, oh, I was checking to see if the temperature matters. And so you basically keep flipping coins until you get one that comes up heads five times in a row, and then you're like, ah, see! But instead of just saying, well, on my tenth try I got a coin that flipped heads five times in a row,

you make up a story. You're like, well, yeah, the first nine coins I flipped, I was actually testing for other things.

[00:25:28] **Hirad:** Right. And sometimes you might even just exclude that from the study. You don't have to report that this was your tenth try at flipping the coins.

[00:25:35] **Trish:** Yeah. So, I mean, long story short, statistics are complicated, it's hard, and it's easy to manipulate them to say what you want.
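
Ritchie's coin-bag story can be sketched the same way; here is a rough simulation, where the fair coins, the seed, and the five-flip run are arbitrary illustrative assumptions, showing that if you keep flipping fair coins, one of them will come up heads five times in a row sooner or later:

```python
# Flipping many fair coins: eventually one gives five heads in a row by chance alone.
import random

random.seed(7)
flips_per_coin = 5

coin = 0
while True:
    coin += 1
    flips = [random.random() < 0.5 for _ in range(flips_per_coin)]  # fair coin: P(heads) = 0.5
    if all(flips):
        break

# The chance of five heads from one fair coin is (1/2)**5, about 3%, but across many coins
# it's nearly inevitable. Reporting only the "winning" coin is the coin-bag story in miniature.
print(f"Coin #{coin} came up heads {flips_per_coin} times in a row, despite being fair.")
```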

[00:25:44] **Hirad:** Yeah. And so, going back to that publication bias we were talking about, with people trying to have the most groundbreaking, most high-impact papers, there's another bias here. In theory, science is actually much less about things like peer review and much more about things like replication, right?

So Balaji Srinivasan, who's a Silicon Valley icon, always talks about this idea that every time we are using our cell phones to communicate, we are replicating Maxwell's equations, right? Our understanding of how electromagnetic waves travel and work is being tested because we've built all these devices that rely on it.

And every time these devices work, that is a replication of that understanding of how electromagnetic waves operate. Well, replication is really the core of science. So you want other people to be able to get the same finding as you, because that's what it means to have learned something about the world, right?

Under the same conditions, you get the same results. The problem is that because everyone is biased towards publishing high-impact studies, nobody actually wants to publish cases where replication doesn't happen or where they find no effect whatsoever. Right? So in some fields that Ritchie notes, the attempts at replication are as low as 0.1%.

So most of these studies, they get peer reviewed, they get published, but nobody ever tries to replicate them. And we'll talk about what happens when they do try to replicate them a little bit later.

[00:27:23] **Trish:** It's not good.

[00:27:25] **Hirad:** It's not good at all. But it's also important that we publish the things we study that come up with no results whatsoever, because that's still a part of our canon of knowledge.

But most journals are not incentivized to publish stuff like that. And most scientists, when they don't actually find something, if they don't resort to bias or fraud to make it look like they did, will just shelve the study, because that's not really what gets them the next grant for the next study.

[00:27:53] **Trish:** Right? We should be valuing null results as much as we value actual results, but unfortunately that just doesn't make for a sexy headline.

[00:28:03] **Hirad:** That's right. And it doesn't get grants in the future. So there's one other bias that I wanted to make sure we talked about, which is the fact that because science is such a social enterprise, all the social follies of humans show up in science just as they do in any other context. So Ritchie talks about research on Alzheimer's disease. For years there has been a theory that a certain plaque formation in the brain, called amyloid plaque, is the root cause of Alzheimer's, and numerous attempted cures have been created to try to get rid of this plaque.

And by all accounts, a lot of them do get rid of the plaque, but they don't actually treat Alzheimer's. And Ritchie notes that Alzheimer's is one of the diseases where we've made the slowest progress towards a cure, and part of the reason for this is that this theory of the amyloid plaque could be completely wrong.

This hypothesis of the amyloid plaque being the root cause could be completely wrong. But within the field of Alzheimer's research, there's a cabal of high-ranking professors and researchers in important positions. And I'm going to quote Ritchie's quote from a science writer, Sharon Begley: the dissenting researchers (those that don't subscribe to the amyloid plaque hypothesis) described how proponents of the amyloid hypothesis, many of whom are powerful, well-established professors, act as a cabal, shooting down papers that question the hypothesis with nasty peer reviews and torpedoing attempts by heterodox researchers to get funding and tenure.

So we've essentially got a little mafia that is absolutely married to their explanation of where Alzheimer's comes from, and they're not entertaining alternatives. So we might actually just have to wait for this group of researchers to die off before someone can explore a new theory of where Alzheimer's comes from.

[00:30:04] **Trish:** Yeah. And that's one thing that I really enjoyed about Ritchie's book: he's a psychologist, and he definitely spends a lot of time talking about problems with psychology studies in particular, but he really runs the gamut. He talks about problems in physics, in medicine. So it really goes to show that our entire scientific edifice has got a lot of cracks.

[00:30:28] **Hirad:** Yeah. So I think throughout the course of this we'll talk about psychology most, but that's just the field we know the most about in terms of how bad the issues are.

[00:30:40] **Trish:** Yeah. And I mean, on one hand, psychology is the most alarming to talk about because of how bad the problems are, but on the other hand, some of the findings are so kind of dumb that I feel like no one cares. If you're like, oh, it turns out power posing isn't really a thing, that doesn't really change people's lives.

But in medicine we're talking about human suffering and a lot of wasted money, you know what I mean? I think it's much more grim to talk about it in a lot of other fields where this is really affecting people's lives.

[00:31:19] **Hirad:** For sure. Should we talk about the negligence?

[00:31:23] **Trish:** Oh yeah, negligence. I feel like this book goes from depressing to even more depressing with every section, but I mean, it's really, really good. So negligence is literally just humans being humans; to err is human, which is very me, and these papers are full of errors that aren't even on purpose.

So a couple of Dutch researchers made an algorithm named statcheck that checks statistics; it's sort of a spellcheck for numbers. It's not even that they had to have the whole data set. They were just looking at things like, if you're dividing a number by 20, there are only certain numbers you can come out with.

So they would look at that. And basically, in one survey they did, nearly half of the papers had some sort of numerical inconsistency.

[00:32:16] **Hirad:** Yeah. This was a survey of 30,000 papers. Nearly half had basic numerical errors. Most of them were minor, but around 13% had errors that were conclusion-changing.
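
statcheck itself is an R package; what follows is only a rough Python approximation of the kind of internal consistency check it automates, and the reported t-statistic, degrees of freedom, and p-value below are made-up illustrative numbers, not from any real paper:

```python
# Rough sketch of a statcheck-style consistency check (illustrative values only).
# Given a reported t-statistic and degrees of freedom, recompute the two-tailed p-value
# and compare it to the p-value the paper reports.
from scipy import stats

reported_t = 2.10      # hypothetical values as they might appear in a results section
reported_df = 28
reported_p = 0.01      # what the hypothetical paper claims

recomputed_p = 2 * stats.t.sf(abs(reported_t), df=reported_df)  # two-tailed p from t and df

if abs(recomputed_p - reported_p) > 0.005:
    print(f"Inconsistent: t({reported_df}) = {reported_t} implies p ~= {recomputed_p:.3f}, "
          f"not the reported p = {reported_p}")
else:
    print("Reported statistics are internally consistent.")
```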

[00:32:28] **Trish:** Yeah.

And this is the problem too, because many of these studies don't really share their data sets at all. So you actually don't know what's going on in the bigger statistical analysis; this is just checking the few numbers that are reported.

[00:32:42] **Hirad:** Can we just double click on this a little bit?

So this is something that I found out through the course of this book: scientific papers don't contain all of the data, or all of the details of how they got that data.

[00:32:58] **Trish:** And they don't even have to, like, if you ask for it, they don't even have to share it.

[00:33:01] **Hirad:** They don't have to share it, and they're not incentivized to share it, because the only thing in it for the original authors is downside.

Like, someone's poking holes in your past work. Why would you want to participate in that?

[00:33:15] **Trish:** Yeah.

[00:33:17] **Hirad:** Did you also know that the FDA doesn't get the detailed data behind clinical trials? That is something I learned about recently. This has been a precedent in the US; I don't know where it comes from, but the raw data collected during a clinical trial is considered proprietary information by the pharmaceutical companies.

And so the FDA kind of gets like a truncated report without all the raw data.

[00:33:47] **Trish:** You just have to take their word for it.

[00:33:49] **Hirad:** At some point, yeah. But we digress.

[00:33:55] **Trish:** And I mean, so that's just negligence, as in, double-check your numbers, guys. But then apparently there are a lot of problems with conducting research too.

So, cell and stem cell research is conducted with these sort of immortal cell lines. I don't actually know too much about the science behind this, but there are these cell lines and you use them to conduct research, and they can get contaminated really easily, so they're not even the cells that you thought they were.

And apparently this is such a huge problem that there are thousands of papers citing research that was conducted on incorrect or contaminated cell lines.

[00:34:31] **Hirad:** So a lot of people doing research with these cells don't actually know what they're doing research on. They might think they're doing research on, like, human lung cells, but they're actually using, I dunno, mouse cells or something else. And apparently this has been a problem for decades. There have been papers published on this problem since the fifties, I think, and people still haven't got their act together.

[00:34:56] **Trish:** Yeah. 

And so the other area that Ritchie identifies as a huge source of problems is hype, in that we all want to be breaking ground, but we're never building any houses.

[00:35:09] **Hirad:** Yeah. As he says, if you spend all your time breaking new ground, you end up with a bunch of holes. This is also a case where these bad incentives for the scientists, with their push to publish or perish, get mixed in with media and journalism and their push for catching eyeballs, and the result is not good.

[00:35:29] **Trish:** Yeah. And I think everybody understands this. If you read the news with any consistency, every week they're telling you something new that you should be eating or should not be eating, or that is definitely going to give you cancer or is maybe a little bit healthy, or some groundbreaking cancer treatment that never really seems to actually come around to treating people with cancer.

So this is really frustrating, but this is what we're incentivizing, right? To get a New York Times headline saying that you were the researcher who changed the game somehow.

[00:36:02] **Hirad:** Yeah. And here's the thing: we've talked about a bunch of different issues so far, fraud, bias, hype, and these are huge issues in science.

And if you read some of the large-scale studies that Ritchie cites, these are not trivial. These are not isolated incidents that we're blowing out of proportion. These are things where you really can't rely on scientific findings because of these factors. But this is after peer review.

This is after some checks and balances have been put in place. When you do science by press release, even those few checks and balances, which already have all these holes in them, aren't there anymore. So people will publish their research findings through the media, they'll put out these press releases, and it'll generate buzz.

It'll generate publicity. They can use the buzz and the publicity for their future grants, or they might use it to get book deals or give lucrative TED talks. And yeah, most of it is just bunk, and it disappears into the ether after a while, but the researchers make a bunch of money.

[00:37:05] **Trish:** Yeah, and if it does come out later that, oh, maybe it wasn't quite as big a deal as we thought it was,

or maybe it's more nuanced or more complex than that, well, that never sells any papers, right? So you just always end up in this system where they're hyping up the newest thing, but it never really seems to have legs.

[00:37:26] **Hirad:** Yeah. The corrections, first of all, are mostly absent, and if they are there, nobody hears about them.

[00:37:33] **Trish:** Exactly. 

[00:37:34] **Hirad:** So one example you may have heard about over the last few decades is this idea of a growth mindset instead of a fixed mindset. This was based on research done by Stanford psychologist Carol Dweck, and she published popular books on the subject and gave TED talks.

And her assertions were incorporated into school curriculums and corporate training materials: that you should always have a growth mindset, and that leads to better life outcomes. But then there was a meta-analysis of the research on the correlation between growth mindset and, in this case, school performance.

It found there was a correlation of about 1%. So there was an effect, but it was nowhere near worth all the bombastic claims that were being made about it.

[00:38:18] **Trish:** Well, just think of all the time and effort that were wasted over junk like that.

[00:38:25] **Hirad:** Yup. Similarly, you may have heard about the Mediterranean diet recently, and again, there were all sorts of issues with the paper.

And you know, it's the whole thing with, what's that saying, that a lie can travel around the world before the truth can put its shoes on? It's basically the same thing. It's not a lie, exactly, but these are scientific findings that are not really solid at all.

And they just get hyped up in the news.

[00:38:51] **Trish:** Yeah. And I get it that science is hard. Nutrition research is really brutal because it's so complex and hard to do, but that's why, maybe, every time someone finds some barely statistically significant result, we don't need to start blasting it like it is truth.

[00:39:08] **Hirad:** Yeah. So going back to my conversations on Facebook with friends,

[00:39:15] **Trish:** quote, unquote air quote, friends.

[00:39:16] **Hirad:** Possibly former friends at this point. So I was having this conversation about, you know, the fact that you need to do your own research, and one of the things that was brought up to me was the Dunning-Kruger effect. For those of you who don't know, the Dunning-Kruger effect is this finding in psychology where people who are very low in competence in a certain area are overly confident about their competence. And there's this famous chart; if you Google "Mount Stupid" you will find it.

It's a chart where one axis is competence and the other axis is confidence, and it basically starts off at very low competence and high confidence. And then it goes through a little valley, called the impostor valley, where your competence is getting higher and higher and your confidence is going down.

And eventually when your competence gets very high, your confidence also starts going up again.

Well, this was shared with me, and then literally three days later I came across someone who did their own research on the Dunning-Kruger effect. And it turns out that that chart actually has nothing to do with Dunning and Kruger's paper, because Dunning and Kruger basically just tested their hypothesis at one moment in time; it was never about your confidence changing as your competence grows. So I don't actually know where that Mount Stupid chart comes from, but it's not from their paper.

And what was interesting about their paper, as you dig into it, is that one of the things they used to gauge people's competence was that they gathered a bunch of comedians, who were considered experts in the field of comedy, and had them rank a bunch of jokes as funny or not funny.

And when there was a guy on this panel who seemed to disagree with the others, they removed him from the panel so that the panel of experts all agreed with each other. And then the competence of the participants in the study was ranked based on how funny or not funny they found the jokes, basically how much their sense of humor aligned with this panel of experts.

So I haven't seen a takedown of Dunning-Kruger, but I'm pretty sure this study is already questionable in a few different ways. And what has happened, though, is that the study is real, someone has actually performed that scientific study, and it's probably very questionable judging by this one section of the methodology, but the version of it that we popularly know about, the chart of Mount Stupid, actually has nothing to do with that paper anyway!

[00:41:57] **Trish:** But people like it, because they can basically insult people with a scientific term.

[00:42:02] **Hirad:** Yeah. I actually think people who memorize all the cognitive biases to use as a rhetorical technique are the people who suffer from them the most.

Yeah. So we've talked about fraud, negligence, bias, and hype.

And underlying all of these are the perverse incentives that we talked about at the start: you need to produce groundbreaking research and you need to maximize your citations. And the result of the things people do to game the system here is, at best, slow progress, and at worst, false findings that make their way into critical decisions that we make.

[00:42:47] **Trish:** Yeah, and then lead other researchers astray for years.

[00:42:52] **Hirad:** And the reason we're talking about this today, and the reason Ritchie wrote his book, is that over the last decade the dam has been slowly breaking on this. And the field where it all started was psychology, which is why that's where we know the most about what's wrong.

Basically, at some point people tried replication studies of some very foundational studies in psychology, going back several decades. These are things that were in every psychology textbook, that every psych student would be learning about in school, and they did not replicate.

[00:43:27] **Trish:** So what does it mean to do a replication study? What's that?

[00:43:30] **Hirad:** Basically you try to do exactly what the original author did. You set up a new experiment. You run it as closely as you possibly can to the original experiment and you hope to get the same result.

[00:43:43] **Trish:** Right? And if you get the same result, you'd be like, great, they have good findings. This is looking solid; another data point for it.

[00:43:51] **Hirad:** It's like making another phone call on your cell phone with electromagnetic waves: our understanding is confirmed. And if you can't replicate it, then that should raise some eyebrows, especially if it's a study that people are relying on in the real world.

[00:44:04] **Trish:** Well, and if you can't replicate it, it doesn't mean that it's false or wrong, but you just don't know if it's right.

[00:44:09] **Hirad:** but yeah,

[00:44:09] **Trish:** you just don't know anything

[00:44:11] **Hirad:** You have no information. The science is not helping you in that case, because the whole point of science is these repeatable, confirmed understandings. Right.

[00:44:20] **Trish:** But I just wanted to say that because it doesn't mean it's disproved.

It just means like you still don't know anything.

[00:44:25] **Hirad:** Exactly. So once this was initially discovered in psychology, people started organizing larger-scale replication efforts. And in some very large replication attempts, only 39% of the results could be replicated. So that means the majority of those papers in psychology did not replicate.

[00:44:46] **Trish:** So are there some famous examples of this...like the Stanford Prison Experiment? 

[00:44:52] **Hirad:** Yeah, I forget if that was one of the early ones that broke the dam, but was that a replication effort, or was it also that they found out the guy was lying about how he ran the experiment?

[00:45:03] **Trish:** It was all bad, but it definitely, it never replicated.

[00:45:05] **Hirad:** Yeah. So the Stanford prison experiment was this study in the 1970s testing what happens when you put people in positions of power and how they treat others. In this experiment, a bunch of test subjects were quote-unquote prisoners, and a bunch of them were quote-unquote prison guards.

And the study seemed to prove that when you put people in the position of being prison guards, they get more cruel and they treat people badly. Now, this study was so iconic that the author testified in the trials of some of the guards of the Abu Ghraib prison when they were caught mistreating the prisoners there.

And it turns out that when he performed the original study, he had basically instructed these subjects who were acting as guards on how to behave. The whole thing was completely bunk.

[00:46:03] **Trish:** Yeah. So if you're thinking, well, okay, psychology research is psychology research, I have bad news. It's also all through the quote-unquote hard sciences: geology and geophysics papers failed to replicate too. And it's actually even worse. Let's say you don't want to redo an entire study, but you just want to rerun the numbers and see if you can reproduce the same results using the exact same statistical tests on the exact same data. A bunch of those came out wrong.

[00:46:37] **Hirad:** That is just mind blowing.

Yeah. So you just take their data, you rerun the analysis, and the conclusion you get is not the conclusion they reported.

Yeah. And as we talked about briefly, in a lot of these fields that are very consequential, not only are there not a lot of replication attempts, the studies often don't even report their protocols in sufficient detail to be able to analyze them.

So where does this all leave us? We've got all these things that can corrupt the practice of science, and have corrupted the practice of science.

[00:47:18] **Trish:** So we've got all these perverse incentives where people want to make money and they want to be successful, which are normal human motivations, but which don't work in a system where we're asking people to be completely idealistic and do something for the greater moral good rather than their own want for a comfortable life and recognition amongst their peers.

This is from Ritchie:

"Perverse incentives work like an ill temper genie giving you exactly what you asked for, but not necessarily what you wanted, incentivize publication quantity, and you'll get it, but be prepared for scientists to have less time to check for mistakes and for salami slicing to become the norm.

Incentivize publication in high-impact journals and you'll get it, but be prepared for scientists to use p-hacking, publication bias, and even fraud in their attempts to get there. Incentivize competing for grant money and you'll get it, but be prepared for scientists to hype and spin their findings out of proportion in an attempt to catch the eye of the funders. On the surface, our current system of science funding and publication might seem like it promotes productivity and innovation, but instead it often rewards those who are following only the letter rather than the spirit of the endeavor." End quote.

[00:48:38] **Hirad:** So let's maybe talk about what we take away from the lessons of Science Fictions.

[00:48:44] **Trish:** Personally, and listeners are going to realize that I'm a very simple creature who is often self-conscious, this made me feel better about finding the scientific literature difficult to understand and often of poor quality, and about not understanding how to use it to make decisions in my life.

[00:49:08] **Hirad:** Well, I think that is actually a very key point to touch on. And this kind of comes back to these discussions I've been having, in, like, praise of doing your own research. Right? And I think this idea of doing your own research has been kind of mocked, in that you can't really go out there and collect data.

The only people who are doing that are scientists, and you need to trust the scientists. And first of all, like we talked about at the beginning of the episode, there's a sense of faith in science, right? And that's not actually the domain of science. The only place where you don't question things is the domain of religion.

So within science you should always be questioning, and I don't care who you are; it doesn't matter who you are. The only criterion is: as long as you can think, then you must question. I would also say it's actually your responsibility to think, instead of delegating that to a bunch of people who are just as human as you are and are maybe subject to perverse incentives. And then there's the question of what happens when these scientific findings make it into things like policy, where science is just one of many inputs, and of course you should do your research on how that process is happening. To me, the question of doing your own research is, well, first of all, you're not going out there and gathering data like a scientist would; that's actually impossible.

There are tens of thousands or hundreds of thousands of papers being published every year, and nobody can really keep track of all of them. But what you are doing is, when something is about to impact your life, you dig into where it comes from. There are practical considerations; you probably can't dig all the way down to primary sources.

But if you dig just a little bit below the surface, you may find that the thing that is about to impact your life, the scientific finding that you may be basing your decisions on, might not be as solid as you thought.

And in many instances, the people who have actually moved things forward in a given field were people who were not experts in that field. And this is actually a reason why I really wanted to make sure we talked about that Alzheimer's research finding, or this dynamic of a powerful cabal shooting down alternative hypotheses.

That kind of behavior is innate to humans. And it's also why, I think, we...

In the context of work, I had a mentor who was helping me think through how to manage a business, right? And one of the things he told me that really stuck with me was that if every time something bad happens you create a new process to prevent it from ever happening

again, pretty soon your processes will turn into an autoimmune disease, which means your processes will get in the way of your business actually doing the thing it's supposed to do. And so at some point you need to accept the risk of something bad happening, right? You need to be okay with that.

And how much of that is okay, is a bit of a judgment call, right? And whether you make the right judgment or not, is what determines if you're good at it or not.

Within the field of science, I feel like we've fallen prey to the exact same thing, because we have created all these institutions, all these formal systems within which things are supposed to work, but those systems can very much stifle innovation as much as they can help it.

So in a lot of these conversations, I hear people talking about science like it's an article of faith. And I hear people talking about credentials like lack thereof precludes one from thinking, right? And it doesn't. You may not be qualified to talk about the details of the field, but you can judge someone's understanding based on how well they can describe something, based on what their incentives are, based on what kind of bureaucracy or institution they're operating in.

Right? You can make judgment calls on all of those, and to me, that's kind of what it means to do your own research. And the other thing is, when you don't do your own research, who is interpreting the science for you? Even if the science is completely solid, by the time you hear about it as someone who is not doing their own research, it has gone through multiple systems of bad incentives, through journalism and media.

And so it's just unreliable.

[00:53:43] **Trish:** I know. And to me personally, the least compelling argument for something is: you should listen to me because X, Y, Z, this is what I do for a living. Obviously that would make you an expert, but I shouldn't listen to you just because you have skill in that field.

Like, I still think that you have to make good arguments; you have to be able to present the research. And a little hack that I have for life is: you know you're probably talking to a really good scientist, a scientist that you should listen to, if everything they say is really qualified.

If everything they say is really carefully worded, if they never make sweeping generalizations. That's how I know you're really talking to someone who knows their stuff: they'll be really nuanced, really qualified, and really convey the complexity.

[00:54:43] **Hirad:** Well, the reason for that is that science never gives you a clear answer; there are always caveats. The way we've been talking about it in the popular press lately is like, "the science says this." No, the science never says anything.

Every time science has a finding, there is a bunch of fine print on it. There are all these caveats about the context and the way the study was run and all sorts of other details you have to take into consideration, and the situation in which you are making your decision may not map exactly to that finding.

And you just have to use reasonable judgment to make your calls. And then the question is: are the people making the decisions using reasonable judgment, and are they even using the same values to decide between possible alternatives?

[00:55:32] **Trish:** Well, that's exactly it, because science will just tell you facts, but how you're going to interpret or use those facts falls into such a gray area, and that has a lot to do with our values and our morality and different judgment calls about what's going to be good for different people.

We talked about this in another episode: I get really uncomfortable when you start using science and morality in the same way, to do the same things.

[00:56:01] **Hirad:** Yeah. And I think that's exactly where we've ended up: people don't understand why science might be questioned, because they just see it as this monolithic truth. I think that's actually what it is. The words "science" and "truth" are being used interchangeably, and they're not the same thing.

[00:56:22] **Trish:** It's treated like every scientific paper is coming down off the mountain on stone tablets. And that's just not how we should be treating this institution.

[00:56:34] **Hirad:** For sure.

[00:56:35] **Trish:** I mean, good on you for taking up causes on the internet. I'm basically unwilling to interact with almost anyone on the internet.

[00:56:42] **Hirad:** There are people who are wrong on the internet. What do you expect me to do?

[00:56:46] **Trish:** I know, but one of the things that bothers me is this almost religious fervor that people use to defend their views, using science as their ammunition. And I just feel like you should not treat science or scientific institutions as your church. You're getting into dangerous territory where you're putting all your belief into it to try and feel like you have some sort of control over your life.

[00:57:14] **Hirad:** I think in the absence of religion playing the prominent role it used to play in society, people are now looking to science for certainty. And that's not the place to look. If we want to keep science being science, we should bring back religion just to make sure people can get their certainty from God.

[00:57:34] **Trish:** And with science, when someone says, "Well, I think this thing is true," the response should always be:

"Prove it." I think we've just lost a lot of that rigor. And people are treating it as if we've got a new Pope who's the expert in the field, and you just have to believe everything they say because they are supposedly the best expert.

And this is funny, because it's coming from two people who love science and who will defend the institution till the day we die. But I think that's also why we need to strengthen it and make it as bulletproof as we possibly can.

[00:58:08] **Hirad:** What I love is thinking, and that's why science is appealing. And I would actually say a lot of people who get very educated are brain dead. They cannot think. They think that if something is not peer reviewed, it's not factual.

And that's bullshit. You have to be willing to engage with the world. One example I can give: a couple of friends of mine like going fishing. You can spend many hours studying theory on what certain fish in certain areas at certain times will be eating, or you can go there with a bunch of bait and just try it. Who cares what the theory says? Just try it, and you'll probably catch more fish that way.

[00:59:09] **Trish:** But I mean, there's obviously room for both. This analogy makes me slightly uncomfortable, because I don't feel like that means none of the theory is worthwhile at all.

[00:59:09] **Hirad:** Okay, so I'm going to say something controversial here to paint an example. Within the pandemic context, there has been all this talk about ivermectin, right, and whether or not it works as a treatment for COVID.

[00:59:23] **Trish:** This is what Neil Young is all worked up about. Is that where we're going? Are we going into Spotify and Joe Rogan, or no?

[00:59:29] **Hirad:** Not yet. No, no. We're not on Spotify as well, so no disclaimers on this podcast.

So a number of people felt like ivermectin worked, and there have been a bunch of studies on it. You can pick out potential problems with the studies, but this is what we know about ivermectin: it's a popular drug that doesn't have many side effects.

Billions of doses are given to people every year, and some doctors around the world seem to observe that it helps COVID patients. Now, maybe this doesn't actually scale. Maybe there are certain contexts in which it helps. Maybe there's...

[01:00:07] **Trish:** ...something confounding...

[01:00:09] **Hirad:** Maybe there are confounders.

Yeah. Who gives a shit? Just give it to people anyway. It's cheap. It's abundant.

[01:00:15] **Trish:** Come on. No, I totally disagree with this.

[01:00:19] **Hirad:** Why? 

[01:00:20] **Trish:** I think the baseline standard for medical research is that you should do nothing unless you're pretty sure it's going to help.

[01:00:29] **Hirad:** No, I don't think you should do nothing. 

[01:00:31] **Trish:** It's "first, do no harm."

[01:00:33] **Hirad:** That doesn't mean do nothing. And you can be confident about the harm side with ivermectin, because billions of doses have been given out.

[01:00:39] **Trish:** Well, I don't know, I don't feel like that follows. I'm not confident that it's totally without risk. Haven't some people taken too much and poisoned themselves?

[01:00:51] **Hirad:** Yeah, because they're taking things into their own hands, because it's getting vilified in the media.

[01:00:56] **Trish:** No, I know. I just think that's way too loosey-goosey for medicine. If some people want to take things into their own hands and take this drug, that's fine.

But I don't think medical practitioners should ever just be like, "Well, we don't really know, so just try this thing and we'll see." I think a lot of medicine does operate that way, but I think that's totally wrong.

[01:01:15] **Hirad:** No. Looking for randomized controlled trials for everything and not taking action without the most solid findings... there's a difference between how you operate in the real world and what you claim to be able to know in theory.

Right. That's what I'm trying to get at. 

[01:01:33] **Trish:** Well, I know, but this is the thing that makes me mad. So many people go to their family doctor for antibiotics when they have some sort of viral thing, and they're like, "No, no, no, in my anecdotal experience it makes me feel better."

And I'm always really scared of antibiotic resistance, so this is the drum I always bang on about. But the family doctor is like, "Fine, you want it? Here. It makes you feel better, I don't care if it's the placebo effect, take it and get out of my office." That is not good medicine.

That is not how we should be operating. There are downsides, even though antibiotics are safe and...

[01:02:06] **Hirad:** Yeah, I mean, you're pointing at a legitimate potential downside that we do know about antibiotics.

[01:02:12] **Trish:** We didn't know about it for a long time. Right. We just figured it out eventually.

[01:02:17] **Hirad:** But again, you're right, that's a good point. But the same can be said about anything. Anything can have downsides that we don't know about.

[01:02:29] **Trish:** Yeah. And that's why I'm saying proceed with caution. Don't just throw caution to the wind and throw every single idea you have at it.

We need to progress systematically with good research. And I think the ivermectin thing has gone insane. I don't understand why we can't just do a good study on it and figure it out. That gets into politics and stuff, and you've got good points there: the way this thing is being treated by political groups and the media is kind of insane.

But I also don't think the answer is just, you know, do whatever you want.

[01:03:03] **Hirad:** No, I actually think this should be led by doctors, who may not have the backing of a randomized controlled trial, but who will have a lot of anecdotal data points.

And again, this is the drum that I'm banging here: there's a difference between a course of action and a piece of scientific knowledge, right? When you say something is scientific knowledge, that's based on the studies that have been done and the number of replications it has had. But when you act in the real world, it's about values and judgment, right?

And what I'm saying is, you need to be very explicit about that. You can fail to take judgment into account if you dismiss any possible course of action because no randomized controlled trial supports it, or if you just rely on something flawed. There are two ways you can make a mistake right there.

You can make mistakes of commission and mistakes of omission, right? A mistake of commission would be: you have a drug that you think is safe, but you think it's safe because of a study that was done in a very flawed fashion. A mistake of omission would be: you have a drug that is safe, but you don't use it because its upside has not been proven through a randomized controlled trial or whatever.

I think you can make mistakes on both sides, and there has to be room for understanding that there's judgment that goes into that.

[01:04:34] **Trish:** Okay. I think judgment can happen on a personal level. You can decide to join a drug trial or try an experimental therapy, and that can be on you. I don't think a medical practitioner should ever be making that judgment call. And this is the thing: the human body is so weird, and there's so much psychology in it.

Would you say, then... I think it's completely unethical to prescribe something because of the placebo effect. Even though people are reporting positive outcomes from something, it's just the placebo effect. Then you're like, well, if someone feels better, why not give it to them?

Anecdotally, this has been working fine. 

[01:05:16] **Hirad:** Yeah, why not?

[01:05:17] **Trish:** No, because it's not actually doing anything. It's just a placebo. You want to find things that treat illnesses, not just make people feel better.

[01:05:28] **Hirad:** Do we understand where the placebo effect comes from? 

[01:05:31] **Trish:** Somewhere in your brain.

[01:05:33] **Hirad:** I don't actually know a whole lot about the placebo effect, maybe not as much as I should, but...

[01:05:38] **Trish:** It's frightening.

[01:05:39] **Hirad:** Well, yeah, that has been my impression too: it is frightening, and it seems to do a lot of things we think it shouldn't be able to do. So what's wrong with it, if it does the thing?

[01:05:55] **Trish:** Well, basically, I just think you have to go with trusted, actual therapies, and not just "it seems to work for some people sometimes." I think there are often going to be adjacent adverse effects that you need to be careful about.

And I don't think you can always just be like, "Ah, it seems to make people feel better, have at it." That's not good medicine. That's not how we should be operating. We need to find truth in it. And I know that's not satisfying, and maybe it's not a compelling argument, but...

[01:06:38] **Hirad:** Well, to me, as an engineer, in my head, if something achieves the objective you want to achieve... Someone comes to you as a medical practitioner and says, "I have a pain somewhere." And you know that if you give them a sugar cube that they think is a treatment pill, they might feel better as a result. You've done your job.

[01:07:00] **Trish:** No, because your job as a physician isn't just to magically make people feel better by whatever means...

[01:07:08] **Hirad:** Haven't healers done this for thousands of years?

[01:07:11] **Trish:** Like a healer, right? If you want that, you can go to your Reiki practitioner.

Right, and that's fine. I think there are lots of alternative ways people can go. But you're talking about a physician, who is supposed to be able to tell you the actual science behind something.

[01:07:30] **Hirad:** That's an interesting distinction.

Hm. Well, I guess what I would say to that is, a couple of years ago I would never have considered going to someone that practices some kind of traditional medicine. But my view on that has changed. I think there is value in things that are extremely low risk. I'll give you a simple example from Iranian culture.

If you have a stomach ache, they tell you to drink mint tea with a sweetener that I don't know the name of in English. And if you drink that, it seems to cure a lot of stomach issues. Sometimes it's worked for me, sometimes it hasn't; maybe the times it worked for me were random.

I don't care. I'll drink it. It's just tea, who cares? I don't need a randomized trial for that.

[01:08:24] **Trish:** And you can make that decision on your own. Whether a doctor should be prescribing that people go to the store and buy this one certain tea and sweetener, and telling them that's going to cure them, is another matter.

[01:08:36] **Hirad:** So, yeah, you're making this distinction that the medical doctor should only be doing things that are purely scientifically proven.

[01:08:48] **Trish:** Yes. 

[01:08:49] **Hirad:** I think the medical doctor should be solving the problem, and whatever means they can use to solve that problem is fine. We're in a very unique situation now.

So I'll be curious to get your thoughts on this. Back in the day, and I forget where I read this example, let's say you had one doctor in town seeing all the patients, one person everybody would go to for medical assistance.

And there wasn't a whole lot of medical knowledge, say 200 years ago. But there might be a simple case where you see a bunch of people on one side of the river getting sick with something, and a bunch of people on the other side of the river not getting sick.

Then you have to actively start going through a process of elimination: what is it on one side but not on the other that could be causing this? You try a few different things and you find something that works. That's your job. That's still your job to this day, even though you're not as directly involved in the process of identifying causes.

And part of my problem with the medical profession specifically is I don't think they do any of that. I think what they do is get prescriptions from on high about what kinds of treatments they can give under what conditions, and they just apply them almost algorithmically.

[01:10:07] **Trish:** Right. I think your issue is with the process and not the ideology. You're saying the ideology is fine, but somewhere in the process it gets messed up.

[01:10:25] **Hirad:** Well, my thing is, you've got to engage with the real work, which is solving the problems people are coming to you for, right?

To me, and this could be my personal bias, there's no difference between an engineer and a doctor, except the machine you operate on is different and more complex in terms of...

[01:10:43] **Trish:** Way more complex. People get so many weird psychosomatic things, and mental illness is so much a part of it.

And we basically don't understand almost anything about that. It sounds nice, and I think a lot of people do treat the medical system that way: "I'm going to show up and I want them to solve this problem." They usually want it to be a magic pill or whatever, instead of being told, well, maybe you should exercise and eat a salad from time to time.

I don't think it's as simple as just fixing a problem, no matter what the cost is or...

[01:11:24] **Hirad:** No, I think it matters what the cost is, but sometimes you know the cost is low. And my thing is, again, the mistakes of omission: what are the times when you know the cost is low, but you may not have exactly the randomized controlled trial that proves something works?

If it solves a problem, or if it seems to solve a problem in this one instance, who cares about the randomized controlled trial? Yes, you might be wrong; that's the thing. But the cost is low. The cost is low.

[01:11:54] **Trish:** You might not know what the cost is, right? A lot of times, drugs come out being touted as the new thing.

And then people have a bunch of birth defects and they're like, "Oh shit, maybe we were wrong." That's the thing: I'm just saying you don't necessarily know the costs.

[01:12:10] **Hirad:** Yeah, I think you're right about things that are novel. What I'm talking about is specifically cases where the cost is low, and you have to have reason to know that the cost is low.

So you're right: if something new just popped up, then we don't know about it. But again, what we're doing here, Trish, is applying judgment. What we're saying is, if people have been drinking this mint tea for a couple of hundred years and it's been fine, it's probably fine if you drink the mint tea treatment, right?

It doesn't really matter. But if some new compound or new medical procedure comes out and nobody has taken it before, then it may not be okay to just willy-nilly give it to people. And you don't need science to tell you any of this. And the science professional may make the mistake of telling you one thing or the other.

They may tell you that this drug, not naming any names, or any medical treatment that has not been rigorously tested, is safe and effective for you to take. That person may be wrong, because they just cannot possibly know that. Similarly, the person that says this Chinese medicine that people have been drinking for a very long time may have downsides...

That's much less likely, because people have been doing it; whether it works or not is a whole different question. But yeah, we're using judgment in these cases, and I think that's perfectly fine.

[01:13:46] **Trish:** But I don't think that you can institutionalize... well, this is-

[01:13:50] **Hirad:** Well, that's the whole problem, exactly. The reason we find ourselves in this mess is because we want to institutionalize things. And you're right, you don't want to institutionalize what I'm describing, but I don't know how to get out of that problem.

[01:14:05] **Trish:** I think you're also kind of talking about the way humans think, right? We look at things around us, we pick up patterns, we use them in our everyday decision-making. We are bad scientists just by nature, which is why we had to dream up this whole system of how we can actually get around our own biases and pattern recognition where there may or may-

[01:14:26] **Hirad:** Hang on. Why are we bad scientists?

[01:14:28] **Trish:** As individuals, because we're full of biases. We often see patterns where there are none. Humans are always relying on anecdotal evidence of what they see in front of them. The average person is terrible at scientific thinking; no, everyone is bad at scientific thinking. It has to be beaten out of you, and you have to be taught to be always that skeptical, always that unsure, always that questioning.

So I just want to make sure those two systems are kept separate. Yeah, you can do what you want and go with anecdotal research, what people around you are saying and what you're seeing in front of you, connecting dots between patterns, but that is not science, and you can't pitch it as science. You've got to leave science to be science.

And when there's a double-blind, randomized controlled trial on something, then you can maybe start to say it's science.

[01:15:21] **Hirad:** I agree with that completely. 

[01:15:22] **Trish:** Maybe that's what I've been trying to get at.

[01:15:25] **Hirad:** But my problem is when science meets the real world, right?

How are we going to use science to solve our problems? What we need to acknowledge is that when we decide to do something in the real world, science is just one of many inputs. You have your values. Anything you try to do in the real world will have trade-offs, and you've got to make some kind of call about which trade-offs you can accept.

And that comes back to what values you hold, right? You also have judgment, because the scientific finding may not be directly applicable to the situation you're applying it to. And then if you're talking about some government agency that's trying to decide how to set policies,

there are political considerations too. So science is just one of many inputs when we're talking about doing things in the real world.

[01:16:19] **Trish:** Yes. Yes.

[01:16:20] **Hirad:** And we need to be very explicit about that. I think science should be kept separate from things like Chinese medicine, for sure. But when we're talking about application, it's a different question; multiple different inputs have to go into that.

[01:16:36] **Trish:** And I would say yes when you're talking about politics, yes when you're talking about social policy. I'm not sure when it comes to medical advice; I think that should be 90% science and much less of anything else, like considerations about cost.

[01:16:55] **Hirad:** Well, one of the things I've been mulling over the last couple of months is the way we talk about medicine now. I feel so woke right now.

Because I'm going to talk about the Western ways of knowing and the other ways of knowing. But there are a lot of different systems all interacting for us to come up with medical knowledge. So let's say we're talking about a particular treatment, right?

You have to take into account all the economic incentives of pharmaceutical companies: where their grant money is coming from, what patent laws are going to protect their profits, and for how long. So I disagree that that should be the only place we rely on for finding potential cures or treatments, because it has so many problems, problems so hairy that we're not going to solve them anytime soon.

And you're really narrowing your field of vision if that's the only place you're going to look for potential findings.

[01:17:59] **Trish:** Right. But again, that's a process problem. I would agree with you that there are a lot of problems in the process, or just in the existence of big pharma: how they're conducting science, how they're sharing the results, and what their incentives are for which diseases they want to treat and which they don't.

And, you know, whether you want to just feed people Lipitor for the rest of their life or tell them to maybe have a salad. But again, that's a process thing. You're saying, "Well, I don't really like the system we're in," and I agree, I don't really like it either. That doesn't mean you just throw the whole thing out. I think that's exactly what Ritchie is saying we can't do.

[01:18:39] **Hirad:** I'm not saying throw the whole thing out. This is one of many inputs, but you have to understand that this input is limited. That's the judgment part: understanding that those incentives are there in a certain way changes your judgment about the scientific output, and also about some of the other inputs. So when you're deciding how to act in the real world, you may know that the medical treatments you are presented with are a function of a system with such-and-such incentives.

And therefore, maybe in some circumstances you want to broaden your horizons and try some things that are outside of that system.

[01:19:20] **Trish:** That's well within your rights to do.

[01:19:21] **Hirad:** And if you had told me that five or six years ago, I would have totally laughed you out of the room, because of course you have to stick with the science. But I don't think that anymore. When you see how the sausage is made, you have to consider eating a salad sometimes.

[01:19:36] **Trish:** Exactly. Exactly. I know. And I think that's the thing: you just can't have blind faith in this system. It's people with a lot of different motivations doing a lot of different things, and we treat it like it's this untouchable, perfect institution.

And it's so far from that. But I guess the one thing I've been struggling with is: what is a healthy amount of skepticism to have?

[01:20:02] **Hirad:** Yeah, I think that's the million-dollar question.

[01:20:06] **Trish:** Because on one hand, just because something is hard and complicated doesn't mean we should shy away from it. Climate research is super complicated and difficult, and we haven't been collecting data for very long, but it's also hard to know what to do with a lot of that.

[01:20:24] **Hirad:** Yeah. I think we should do an episode on that in the future.

[01:20:27] **Trish:** Let's do it.

[01:20:28] **Hirad:** That's what we do with it. I said, we do an episode.

[01:20:31] **Trish:** But do you feel like you have that problem as well, of not knowing? I think everyone should have a healthy dose of skepticism, but without ending up in this crisis where you throw up your hands and say, "Well, I don't really know, I'm just going to go totally off the rails and veer into left field and do whatever I want."

[01:20:50] **Hirad:** So this is why I think values are also one of the inputs. My brain works a certain way, and I'll tell you how. I know this is not going to apply to everyone, and trying to get a whole society on board with it is going to be hard; you're going to have to bring people on board with their values, blah, blah, blah. But in my head, if you cover the downside, certain things are just no-brainers, right?

So one of the things people really go on about, say climate activists, is that they really want to get rid of fossil fuels. This is not a field I've read a whole lot about; I don't have strong opinions, and this is based on some cursory things that have made sense to me. But fossil fuels, for example, are millions of years of energy collected and condensed into these chemical compounds, which we now burn as really concentrated energy. The earth gets all of its energy from sunshine and wind, and eventually that turns into living organisms, and then those things get condensed. So anyway, there's-

[01:21:52] **Trish:** It's old plants from the Carboniferous that compressed a certain way and now-

[01:21:55] **Hirad:** Yeah. And the plants got their energy from the sun. So this has happened, and now we're trying to replace fossil fuels with something like wind and solar, which are a moment-in-time kind of thing. When you're getting solar energy, it's the sun shining in any given moment that's getting converted into energy.

It's impossible to replace something that took millions of years of sun and wind, condensed into a chemical compound that we're now burning in no time, with a slice-in-time kind of energy source. So what happens is a lot of activists come up with this idea that we need to go to sustainable sources like wind and solar.

But what they're actually missing is that our entire way of life is going to get disrupted by that. Even civilization is probably not possible without...

[01:22:50] **Trish:** No, I have the answer. Uranium.

[01:22:54] **Hirad:** Right. 

So, yeah, exactly. I don't need to be convinced that climate change is real in order to support nuclear energy, right?

Or in order to support greener, more sustainable sources of energy. All I need to know is that they're more sustainable and that their disruption to life is not going to be so great. And once I know that, you can argue about the climate science all you want; I actually don't care.

It's something where we win regardless. Whether climate change is happening or not, if we go to sustainable sources, as long as it's not in a way that disrupts our lives, then we win. And the problem with a lot of climate activists has been that, I think, they actually get a kick out of proposing things that do disrupt our lives, because they think it's a sacrifice we must be willing to make.

And they're so virtuous for pushing us to make that self-sacrifice. But I think that's the wrong way of going about it. Anyway, that's my position: I don't need to know the climate science, it doesn't matter to me at all. Again, it comes back to the values and the judgment, right?

[01:24:03] **Trish:** Yeah. I would agree with that. So on a scale of one to four,

[01:24:09] **Hirad:** I thought it was one to five? 

[01:24:10] **Trish:** I think it's one to four. One: you hated it, don't bother. Two: read it if it's free. Three: I liked it, I would recommend it, I would pay money for it. Four: it gets prominently displayed on my bookshelf, it's such a favorite. Where does Science Fictions land for you?

[01:24:31] **Hirad:** I'll say three.

[01:24:32] **Trish:** Yeah, it's a solid three, maybe a four for me.

Not because it's necessarily the type of book I want to have on display forever, but I just think literally everybody who gets to university should read it as a first-year.

[01:24:48] **Hirad:** Yup. And they should read it multiple times every year, the more degrees they collect. If you get a master's degree,

you've got to read it twice every year. If you get a PhD, you've got to read it five times every year.

Exactly. Exactly. Well, is that it for us today? We covered a lot of ground.

I think we did.

[01:25:08] **Trish:** Well, it was a pleasure as always.

[01:25:11] **Hirad:** And we'll talk to you soon.