
Episode 156 - The Scientific Method: How We Get to What We Know


Recap: The scientific method is the process through which most modern science is done. Whether or not it is followed explicitly in its formalized steps, it underlies the very basics of how we know what we know, why science is inherently a self-correcting process, and why a long-standing scientific consensus with broad support should not be dismissed as the political whim of a few motivated people.

Additional Materials:

Episode Summary

Opening Monologue: I'm going to come right out and say that this episode may be a bit unpopular with some listeners. It is a very thinly veiled response to the man who, as I record this, was sworn in a little over a week ago as the 45th president of the United States. I debated for a long time about doing this kind of episode because I really do try very hard to keep the podcast apolitical. And then it hit me: What is true - or "the truth" - is, or at least should be, apolitical. Facts are facts. If something "alternative" really were a fact, it would not be an "alternative fact"; it would simply be a "fact." Just as with "alternative medicine" -- if it worked, it would just be called "medicine."

Structure: With that in mind, looking through the episodes I've done, I realized I have never really done one on the scientific process. I've said many times on this show that the point of the podcast is to use crazy claims to show how science is really done, or "how do we know what we know?" But I've never covered the process which, in form and function, gets us to the point where we can say whether something is a fact or not.

The Scientific Method

The scientific method is the formal process by which science, in its current incarnation, works. It's really a very basic system that I'm going to talk about as 7 iterative steps.

The first step is to make an observation. All science is supposed to start with, effectively, you see something interesting. Very basic.

The next step is to think of questions that come from that observation, usually of the basic form, why does what I see happen the way it happens?

Step three is where you go a step beyond the basic question stage and try to come up with possible answers to those questions, what we call hypotheses. So, if I see an apple on the ground (step 1) and wonder why it's on the ground (step 2), I may think that perhaps there's something that the apple came from and that pulled it to the ground (step 3).

The next step is to come up with ways to test those hypotheses. If a hypothesis is not testable, the scientific method stops at step 3: you can't move on, and most would agree that the idea no longer falls in the realm of science. That's why I have a problem with string "theory," which is not currently testable.

But if you do have ways to test those hypotheses, you can move on to the next step: gathering data while carrying out those tests. This is often the longest and most tedious part of the process. One of my former officemates in grad school ran tests on simulated Martian surface material and got a lot of knitting done during late nights in the lab. During a summer project in undergrad, I became quite proficient at creating paper airplanes while I waited for code to run on my computer.

One of the things that makes this process so tedious is Step Six, which is where you iterate. You refine, alter, expand, or reject hypotheses and develop new testable predictions and gather new data to test those new predictions. If your data do not support your hypothesis, then you need a new hypothesis. If your data do support your hypothesis, you need to do it again to make sure you're not fooling yourself.

Finally, when all is said and done, you may reach Step Seven: you have your hypothesis, you have your tests of that hypothesis, all your data support it, and so you now have a model that can be used to explain your initial Step One observations. Rinse and repeat.
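The seven steps above can be sketched as an iterative loop. What follows is a minimal, purely illustrative Python sketch, not anything formal: the falling-apple scenario (fitting a gravity value to fall times), the tolerance, and every function name are my own hypothetical choices made to show the refine-or-reject cycle.

```python
# A toy sketch of the seven-step loop. The scenario and all names here are
# hypothetical illustrations: we "observe" fall distances (Step 1 data come
# from a hidden true law), propose candidate values of g as hypotheses
# (Step 3), test them against gathered data (Steps 4-5), reject mismatches
# (Step 6), and retest a surviving hypothesis before accepting it (Step 7).

def gather_data(times):
    """Step 5: 'measure' fall distances; simulated with the true g = 9.8."""
    return [(t, 0.5 * 9.8 * t ** 2) for t in times]

def supports(data, g_hypothesis, tolerance=0.05):
    """Does the hypothesis predict every measurement within tolerance?"""
    return all(abs(0.5 * g_hypothesis * t ** 2 - d) <= tolerance * d
               for t, d in data if d > 0)

def scientific_method(candidate_gs, times):
    """Steps 3-7: propose, test, reject on mismatch, retest, then accept."""
    for g in candidate_gs:                      # Step 3: each candidate g is a hypothesis
        data = gather_data(times)               # Steps 4-5: design the test, gather data
        if supports(data, g):                   # Step 6: mismatch means reject and move on
            # Even a "successful" hypothesis gets retested with fresh data
            # before we trust it -- we're the easiest people to fool.
            if supports(gather_data([2.0, 3.0]), g):
                return g                        # Step 7: an accepted, predictive model
    return None                                 # no hypothesis survived testing

print(scientific_method([5.0, 9.8, 12.0], [1.0, 1.5]))  # only 9.8 survives
```

The point of the sketch is the control flow, not the physics: failed tests send you back to a new hypothesis, and even a passing test is repeated before anything is accepted.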

Following the Scientific Method

Now that you have a basic idea of the scientific method, the question is: do we really follow it in the various disciplines of science? After his first day of classes, my freshman-year college roommate came back to the dorm room and said that in physics class, the professor said we never really follow the scientific method. But in the chemistry class right after it, the professor said that we follow it exactly.

As another anecdote for context, I've judged a science fair in Boulder, CO for almost ten years running. One year, one of the organizers asked if I could answer a question from a member of the public who had pretty much just literally walked in off the street. The man said that he had looked at the poster displays by the science fair participants, but nowhere did he see the scientific method, and he wanted to know why.

The answer I gave him is the answer I'm going to give you to the question of, do we ever REALLY follow the scientific method?

The way most scientists tend to work is that the scientific method all happens at once, or at least, multiple steps happen at once. But, all steps are at least implied within all stages of a research project.

Let me give you a round-about example from research I started a decade ago:

I was in a graduate class about Mars, and in lieu of much homework or any tests, we had to do a term research project. I went to the professor - who later became my thesis advisor and whom you heard on Episode 55 - and he asked what I wanted to work on. I had no idea, but I said that the previous term I had done a project on impact craters on Mercury, and I didn't hate them. He studies water on Mars, and he pointed out that there's this weird kind of ejecta around some kinds of craters on Mars, and no one really knows how it forms. That was an observation, Step 1 of the scientific method. Step 2 was part of that, the interesting question: How do they form?

In my class research project, I didn't even bother with Step 3, formulating hypotheses. Instead, I skipped right to Step 5, which was to gather data. My project was mainly a data-gathering project about these craters so others might use the data I gathered to do more interesting stuff.

After the class was over, we decided to turn the class project into a graduate student research proposal to NASA. For that, we had to go back to Step 1, explaining the observations and Step 2, the interesting question of why they form. We explicitly then spent most of the time on Steps 3 and 4, describing the hypotheses that other researchers had made and how to test those hypotheses. I then showed Step 5, a sample of the data I would gather.

It got funded. In the course of the project, I realized I didn't have enough of Step 5: Data. I realized that to really study these craters with all the modern data we had for Mars, I needed a background, global crater database against which to compare them. In other words, I made a Step 1 while I was doing Step 5.

And so, I skipped Steps 2, 3, and 4 and went to work continuing Step 5, but with a greatly expanded purpose and scope. My thesis ended up stealing Step 2 - the interesting questions - from past researchers, but I used all the data I had gathered to do Step 6: refine, alter, expand, or reject hypotheses made by those people back when they had worse data than I did.

Three years later, I wrote a real NASA grant to go back and really study these special kinds of craters on Mars. That grant is probably the most scientific-method-like grant I've ever written: I explicitly went through every stage of the method, explaining the observations, questions, hypotheses, how to test those hypotheses, the data we needed to gather to test them, how they would be refined based on what the data might show, and how we would draw conclusions based on our data. After being rejected one year, we resubmitted with minor changes, and it was awarded; now I'm working on it with five other people who are each doing different parts of it.

As another example, going back a few years: when I was working on my thesis, one type of data I added about each crater in my database was how deep it was. Two years after I graduated, I wrote another paper about crater depths after measuring them a different way. I have also reviewed a few papers about crater depths on Mars, and in doing so, Step 1 happened along with Step 2: I observed that there was no standard definition of crater depth, which led to people drawing different conclusions from each other because what they were measuring was actually completely different. That led to the interesting question: could there, and should there, be a standard way to measure crater depth?

There were no other steps to that question from the scientific method. But I ended up leading a massive review paper with about a dozen other people in the field, in which we reviewed all the past literature on the data people use to measure depth, the techniques for measuring it, and the results. What the paper shows is that Step 7 of the scientific method - some of the general conclusions people have drawn about crater depth on different bodies in the solar system - is actually wrong. So we did Step 1 and Step 2, and by reviewing other people's Step 5s, showed that some Step 7 conclusions were wrong!

And so, in a year or three, I'm going to write another grant proposal to go through the scientific method and try to reach a new step 7 that's consistent with everything we now know and all the different data and analysis methods that are out there.

Going from just me to something a bit more general: when researchers write proposals for funding, they tend to follow the scientific method more closely. At least, they do if they expect to be funded. If the reviewers can't tell what the proposer really wants to do, why they want to do it, or anything else, the panel is generally not going to recommend funding, especially when there are competing proposals that lay things out much more clearly. Once you get funding, it varies. Often, people will still follow the process they laid out in the grant proposal, if mixing the steps together a bit more just due to the nature of how lab work goes.

But when they write up the results, unless the journal has a specific style that requires specific sections addressing each step in a specific order, it is very hit-or-miss. The best papers lay things out in an order that roughly approximates the scientific method, with an introduction and background section that goes through Steps 1 through 3: observations, questions, and hypotheses. A methods section may also incorporate hypotheses, describing how the authors set out to test them and how they gathered their data.

Most of the paper is then usually spent explaining what the data are, in painstaking and often very boring detail.

That's because it's the data that drive all of the conclusions, and it's the conclusions that let you get to the final step of the scientific method, developing general theories that can be used to predict future observations, hence repeating the cycle of the scientific method over and over again.

What I didn't say in there but did imply and is perhaps one of the most important parts of this is that the data are often the most important part of a scientific paper, and they are often the most important part of the scientific process. If the data aren't any good, the conclusions aren't going to be any good and you can't use them to make future predictions.

It's also the data that one has to look at when reading a scientific paper. When I read a paper - or when I explain to others how to read a paper, in very much a do-as-I-say-and-not-as-I-do manner - I don't look at the conclusions at all. I look at the data. Do I trust the data-gathering process? Does it seem reasonable? Do the data gathered seem reasonable? Does it look like the researchers have accounted for everything, not only in what the data are, but in the uncertainties associated with those data?

It's only after that step and if the answer was "yes" to everything that I even bother to look at the conclusions. Because, if the data aren't valid, then neither are the conclusions.

This is in marked contrast with some pseudoscientists who pretend they are doing science. For example, the host of the "Skeptiko" podcast (spelled with a "k," not a "c"), Alex Tsakiris, is very fond of saying that a certain author in a published, peer-reviewed paper concluded something, like proof of an afterlife. What he fails to understand, repeatedly, is that the conclusions don't matter if the data themselves are invalid.

The same thing goes for a common whipping boy on this podcast, Richard Hoagland. I was listening a few days ago to a Coast to Coast AM broadcast from September 5, 2002, where he was claiming he had, yet again, undeniable proof that there were cities on Mars. Problem is, he was completely misinterpreting how spacecraft instruments work, and besides that, most have concluded since that time that he had been fed faked data by someone who wanted him to make a fool of himself. So, this is another case where you have to investigate the data before you can even move to the next step of the process.

Skepticism in Science

This brings me to the next and final step of this process: Skepticism's role in science. My dad, who you heard from in episode 114 about ethics in science, recently was given the highest award offered by the board of trustees at the hospital at which he works. In his acceptance speech, he talked about the scientific process and how he was lucky to be in an environment where failure was an option: your experiment doesn't always work out the way you planned. And when it doesn't, you have to pick up and keep going.

But he also talked about how, in science, you not only have to understand your failures, but you have to understand your successes. And you have to understand when your successes really are successful. He told the room of trustees, colleagues, donors, and a few family members that self-imposed skepticism is one of the most important parts of the scientific process. If he had an experiment turn out the way he wanted it to, he treated that with more skepticism than ones that did not. He refused to take his results at face value and to think that the outcome was real because he recognized that we are perhaps the best at fooling ourselves rather than other people, and because of that, we have to treat our own work - especially when it confirms our preconceived notions - with utmost skepticism.

Moving Forward

That brings us to my wrap-up where I'm going to address what may lose me a few listeners. As I record this, the current administration in the United States' executive branch of government is perhaps the most anti-science, anti-reality administration we have had if not ever, then at least in a long time. The man at the head of it is a conspiracy theorist. It is an objective fact, an objective, undeniable observation that the crowd at his inauguration was significantly smaller than that of his predecessor.

Instead of accepting that, he chose to contact the head of the National Park Service to dispute the photographs that showed the crowd size. When the director would not do so, he had his press secretary dispute the crowd size to the public, and his counselor, Kellyanne Conway, went on the news and defended that description of the crowd size as an "alternative fact."

This is a trivial and perhaps even trite example. But it's a simple one that shows the problem. We have objective data in not only photographs taken from the same vantage point, but bus permits and airplane records that give an idea of how many people went to the National Mall on January 20. The conclusion from that data is therefore pretty solid. And yet, he refuses to believe that.

Further, this man believes literally dozens of other conspiracy theories, including that vaccines cause autism, that California didn't suffer a drought in the last few years, and that climate change is a conspiracy invented by China. Those are just a few of the non-political ones.

Speaking as a scientist, it's incredibly disheartening to have someone at the head of federal agencies who has such a profound distrust for the most basic process of how we know what we know, one who would rather make up "alternative facts" - also known simply as lies - to justify their position.

Throughout the last more than 150 episodes, I've tried to show you, well, to sound a bit like a broken record, how we know what we know, why we have confidence in what we know rather than what a pseudoscientist claims, where there are still open questions because we don't have the data we need, and sometimes even how you can do your own simple experiment to demonstrate why something that I've explained is real as opposed to the counter by whatever claimant I'm addressing.

I fully realize that the stuff I talk about doesn't usually matter in the big scheme of things, but my hope is that the methods of analyzing claims can help you in your own life when you're confronted with something that really does matter and really could have a big effect on your life. Unfortunately, we now have a president who clearly does not listen to this podcast. I honestly worry about the decisions he will make: if he refuses to accept basic and trivial facts, what will he do with more important ones?

Anyway, this has turned into more of a disappointed rambling than something pointed.

To try to tie things together in a more cohesive way: there is a reason why we know that vaccines do not cause autism, how we know when there is a drought, and how we know what's going on with our climate. There's a reason why more than 97% of climate scientists agree that global temperatures have increased beyond normal in the last 100 years, and the "why" all comes back to the scientific method and how things are done.
