
Episode 93: The Importance, Methods, and Faults of Peer Review

Download the Episode

Recap: Peer review is one of the foundations of modern science, and yet, outside of science, it is a poorly understood process even though many think they understand it. It is also a common target of ire among pseudoscientists, mainly because they cannot pass it. This episode goes through how peer review works, not only for scholarly papers but also for the other end of the process: the grant reviews that fund the research in the first place.

Puzzler for Episode 93: There is no puzzler in episode 93.

Answer to Puzzler from Episode 92: There was no puzzler in episode 92.

Q&A: There was no Q&A in episode 93.

Additional Materials:

Transcript

Claim: Peer review is perhaps one of the most common aspects of the formal scientific process today, one of the more poorly understood processes that people think they understand, and the process most commonly railed against by the pseudoscience crowd. I thought I'd take a break from astronomy/geology/physics topics and talk today about the "how we know what we know" aspect of this podcast from a more fundamental standpoint. This will cover not only scientific papers, but also the other end of the process: grants. I'm also going to be speaking from my own experience, and while what I talk about for papers is fairly uniform across all fields, the grant process varies significantly between fields.

The Basics of Peer Review

The concept of peer review lies in its name: Your work is reviewed by your peers. What this means is that the work you do - from your data-gathering process to your data to your conclusions - is all examined and picked apart by people who know the field, ideally, as well as or better than you do.

It's true that this is predominantly a negative enterprise, in that it's much easier to write a review that finds lots of things wrong than one that finds lots of things right. For example, think of a restaurant rating on a popular service such as Yelp. It's much more common to say, "service was slow," "my food tasted like soap," "my meat was raw in the middle," "there were rats running around," or "my bread never came," and to nitpick little or even big things, than to post a review saying, "Everything was great! Service was fast, food was good, prices were reasonable." There are also only so many good things you can say - I think I just covered all of them - but there are innumerable problems you can come up with. It's an unfortunate fact of the process and just the way that we, as humans, work, and I'll talk a bit more about it later when I discuss reviewing grant proposals.

Another aspect of peer review is that it is most often anonymous. That is to protect the reviewer and let them feel free to be more honest about what they say. In some cases, you can tell the person that you reviewed their paper -- in fact, I've had some reviewers send me their review of my paper at the same time they send it to the journal because they know journals tend to be slow getting back to authors. I've also had some reviewers contact me directly with questions. Other reviewers, however, simply remain anonymous.

With that in mind, it's hard to get more specific about the process without talking about specifics, so now I'm going to talk about how this works with scientific papers, at least in my experience.

The Variations for Papers

First, there are the authors, those who write the paper they want published. A paper can have anywhere from a single author to hundreds of authors. The corresponding author, usually the first author, then chooses a journal to submit the paper to. This is something I wish I had a little more training in: how to select a journal. From my experience, there are two things you want to look at.

First is the journal's scope - what they're actually interested in. It would make no sense for me to submit a paper about galactic superclusters to a nursing journal.

Second is the journal's impact factor. This is a somewhat complicated number that indicates how often papers published in that journal are cited by other papers, where bigger is better. Science and Nature, as I've said many times on this podcast, are the two most prestigious science journals in the world. You can quantify that by looking at their impact factors: Nature has an impact factor of 38.6, and Science has an impact factor of 31.0. Those are pretty high, and there are few that are larger -- most of the ones that are higher are field-specific medical journals, such as the New England Journal of Medicine with an IF of 51.7. The typical IF also varies considerably between fields. For example, in marine engineering, the highest-impact-factor journal sits at only 0.7. In my broad field of geoscience, the highest is Nature Geoscience with an IF around 8.5, and before that journal came out a few years ago, it was probably Earth and Planetary Science Letters, with an IF around 4.
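For those curious about where that number comes from: the standard two-year impact factor for a given year is the number of citations that year to items the journal published in the previous two years, divided by the number of citable items it published in those two years. As a rough illustration with made-up numbers (not any real journal's statistics): a journal that published 400 papers across 2012 and 2013, and whose 2012-2013 papers were cited 3,200 times during 2014, would have a 2014 impact factor of 3,200 / 400 = 8.0.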

Okay, with that digression, back to peer review. You choose a journal and you submit to it. Based on keywords and topic areas, your paper is usually assigned to an associate editor. That associate editor then finds reviewers. Some journals, like all the ones I submit to, require you to recommend reviewers. You can also specify some people who you do NOT want to review your paper, though that is no guarantee. The reason for this "blacklist" is that it is inevitable that there will be someone who just does not like you or your work and from whom you don't think you could get a fair review. Ever since I reviewed someone's paper, asked them to give more information on how they did something, and was accused of not having my own ideas and needing to copy other people, I have blacklisted that particular person from reviewing any of my papers.

After the associate editor determines that the paper is appropriate for their journal - and they can always decide it isn't - then the associate editor will contact potential reviewers, either from the provided list, from knowing the field themselves, or even from just looking at who you tend to cite a lot in the paper. They'll usually send the potential reviewer the paper's abstract, the author list, and the institutions the authors are from. The associate editor usually tries not to invite someone from the same institution as the authors because of possible conflicts of interest, but it's up to the editor and the potential reviewer, and there are rarely blanket bans on such things, unlike for grants. The reviewer can accept or decline.

If the reviewer accepts, then they are typically given a few weeks and are expected to read through the paper with a fine-toothed comb.

The default way I think about this is that the paper is good and should be accepted; I then read through and look for reasons to falsify that null hypothesis. Remember grade school, when your English (or other language) teacher picked through your essays, corrected spelling and grammar, and told you that you didn't back up your ideas well enough? That's what the process is like. At least for me, I print out the paper and go through it with any color pen but red. I try to follow their methods, look at their data, and I say if they don't present enough data to justify their conclusions. I say if their conclusions don't jibe with other published papers and they haven't adequately explained why they are different, or why they reached different results. I tell them if parts of the manuscript are not understandable, or if the methods aren't standard and they haven't given enough information to justify them. I note if their figures aren't clear, or if something should be moved to a table or an appendix.

The purpose is NOT to rail against them. The entire purpose of the process is to make sure that what they're adding to our base of scientific knowledge makes sense and has been adequately justified. If I get a paper that describes a new technique for something, and their new technique takes longer, requires more data, and gives a less accurate result than old techniques, I will ask them why this should be published. If I get a paper that claims that Mars actually has no atmosphere, but they haven't discussed the decades of previous data that show it does, I'm going to write in my review that their paper does not discuss contrary evidence or explain why their new data should supersede all those previous results.

Then, I write up my review and send it back to the associate editor. Usually, journals have more than one reviewer for a paper. I submitted to one journal where the associate editor and three other reviewers reviewed my paper. I've submitted to two journals where only one reviewer reviewed my paper. But mostly I've had two reviewers, and most of the reviews I have written have been one of two solicited for that paper.

It's then the associate editor's job to read the reviews and make a decision: reject the paper; send it back to the author with major, moderate, or minor revisions requested; or accept it as-is.

The author(s) then usually have two months or so to revise their paper. And they don't have to; they can go to a different journal. Or, they can re-submit to the same journal, and the key there is to respond to every review point. I've submitted responses to reviewers that are 24 pages long. You are expected to go through every point the reviewer raised and either explain what they misunderstood and why you therefore don't have to change anything, or explain what you meant and detail what you have changed. In my limited experience with about two dozen papers, it's typically about half-and-half between the two, but this is highly variable by author, by field, and by journal.

Once done with this revision, you re-submit, just as I happened to do with a paper earlier on the evening I'm recording this. Your resubmission includes the revised manuscript and your response to the reviewers, which again the associate editor is supposed to read. If only minor revisions were required, sometimes the paper can just be accepted at that point. Otherwise, it goes back, again, to the reviewers, who go over your response and the new version of the paper.

Sometimes this back-and-forth can take a while. I first submitted one paper in late 2008, and it wasn't until mid-2010 that it was accepted, after three rounds of back-and-forth with a very nit-picky reviewer critiquing my use of commas. Typically, an associate editor who's paying attention doesn't let it go to that level -- after all, it is THEIR decision, NOT the reviewers' -- but it can happen. On the other hand, sometimes it's a very fast process. I once went from the idea for a paper, through writing it, submitting it, reviews, and revisions, to an accepted paper in the space of 10 weeks.

If I had to summarize the purpose of peer review for papers at this point: it is not meant to be a barricade. It's meant to be a process ensuring that, before a paper is published in a professional journal, reasonable attempts have been made to make sure the results are accurate, clear, and make sense, or are at least explained in the context of previous work.

The Variations for Grants

The process is different for grants. And every grant program differs from the others, so I can only speak to NASA's grant review process. Over a year ago, I wanted to do a blog post on my experience sitting on a NASA grant review panel. That was nixed by the program officer for that grant program in order to protect confidentiality, which, at least for NASA, is written into federal regulations. The review process is based on a provision in NASA Federal Acquisition Regulations Supplement Part 1870.102, and the peer review process has to be fair, impartial, and confidential not only in fact but also in appearance. Confidentiality is stressed over and over again when you're on these panels, but since I've now been on several and have been an external reviewer for others, I am reasonably comfortable discussing the experience without fear of letting on which panels I was on.

To start with, submitting a grant is not dissimilar from submitting a paper, except that it's much higher-stress because you're asking for money to put a roof over your head, as opposed to just trying to get your research out there. The process of finding reviewers, though, is much more delicate than finding reviewers for academic papers. Conflicts of interest are taken much more seriously because, again, of that provision that not only can there not be a conflict of interest - or COI - in actuality, there can't even be the appearance of one.

What a COI is - at least for NASA - is when someone is from the same organization as ANYONE who would get money from the grant; has been in an advisor or advisee role with ANYONE who would get money from the grant within the past three years; is a collaborator on the proposal, or has collaborated with the principal investigator in the last two years; is related to, a close friend of, or an adversary of anyone getting money from the proposal; OR could benefit monetarily in any way from the proposal. Again, even if you don't "feel" conflicted, in order to make sure there isn't even the appearance of a COI, if you're on the panel and that proposal comes up, you have to leave the room, and if you're an external reviewer, you can't review that proposal. It's not just to protect you, or to protect the proposal team; it's also to protect NASA, and COIs are taken VERY seriously. They can be a royal pain to deal with and figure out ... but the bottom line really is to "follow the money." If there's any way that you or your institution could in any form benefit from the success or failure of the proposal, then it's a COI.

With that in mind, a panel is selected, and panelists are expected to review several proposals. It varies considerably by program and funding agency, but I can say that in my experience, I have generally been expected to be the chief reviewer for about 3 proposals, secondary reviewer for just as many, and tertiary reviewer or "reader" for that many again. In addition to the panelists, external reviewers are asked to review some proposals as well. They can accept or decline, though I've always tried to accept, just to get more experience with the process, do my duty for the community, and be able to put it on my annual evaluation as Professional Service.

From discussions with my father, who is in the medical research field and has sat on A LOT of NIH (National Institutes of Health) panels, I know the NASA process is VERY different from the NIH process, which is why this discussion, though probably broadly applicable to multiple fields, is really specific to NASA. With NASA, we sit around a table and go through the reviews. The chief reviewer is in charge of presenting the proposal, their review of it, and a summary of the external reviews. The secondary reviewer then gives their assessment. And despite what people may think, the more reviews, the better. If there's a lot of agreement, then that makes things easy. If there is a range of opinions, then we have to go through and figure out whether the people who ranked it highly missed something, or whether the people who ranked it low are quibbling over things that should be fought over in the literature rather than doom a proposal, or whether they are adversaries. In the end, the panel votes and moves on to the next proposal.

After voting, the chief reviewer is tasked with writing up a consensus review and sanitizing it beyond the point of it being helpful - a personal pet peeve, but perhaps best left out of this podcast. It is completely anonymous. The review has to go through not only the chief reviewer, but the secondary reviewer, then the entire panel, then another panel for consistency, then back to your panel, then to the program officer, and then there are usually a few iterations between the chief reviewer and the program officer. It's a very back-and-forth process, and this is where what I mentioned 15 minutes ago - balancing the good and the bad - really comes into play.

Pretty much every panel I've been on has had an issue coming up with good things to say about a proposal. In contrast, it's very easy to write a review saying that such-and-such is wrong, or not well explained, or that there's too much travel. But you can only say so many times that the team has a publication history showing they are knowledgeable in the work they propose to do.

So as not to breach anyone else's confidentiality, I'll use a review of one of my own proposals - one that the panel really, REALLY did not like. The only strengths they listed were, "The PI and the investigator team have demonstrated expertise in [this area]," and "the proposal addresses a compelling aspect of planetary science that is essential to advancing knowledge of [this field]."

On the other hand, they identified five major weaknesses. One was, "The proposal does not justify attempting to construct globally comprehensive databases of secondary craters from large craters on [Mars, Mercury, and the Moon]." So, they really didn't like the main idea. Second one was, "The proposal did not discuss how image resolution limits would affect the derived [statistics]." So they didn't like the method. Another was, "The proposal did not clearly explain how target property differences would be determined from [the crater statistics]." So, we didn't explain how the data gathered would relate directly to the analysis we wanted to perform. Another was, "The proposal did not contain essential details on the approach for the laboratory [part]."

The point of going through this is not only to give you a flavor for how feedback is written, but also to show how hard it can be to come up with good things to say about work that has yet to be done, and how easy it is to come up with bad things to say about how things are not described well enough to justify funding your proposal instead of another - especially these days, when only about 1 in 8 proposals submitted to NASA gets funded.

With grants, the peer review process is much more of a gatekeeper than with papers, and that's very much a symptom of very limited funds.

The Problems with Peer Review

With all that in mind, and with this non-conventional episode running long, I want to briefly go over the real problems with peer review. One is that reviewers - at least for papers - are unpaid. This has been a gripe for a long time, considering that journals charge hundreds of dollars to publish your paper and then put it behind a paywall, while the people who make that possible - the reviewers - don't see any of that money. Consequently, there are prominent people in many fields who now refuse to review papers at all.

Another complaint is that peer review, while it's anonymous (or can be anonymous) for reviewers, is NOT anonymous for authors. So while they can see my face when I submit a paper, I can't see theirs. And if my paper presents great work but that reviewer happens to not like my cousin's boyfriend's former college roommate's pet, then I may not get a good review, whereas if they hadn't known who I was, it might have been a very positive one.

Finally, for grants, the two main things people point to are that there is no rebuttal and that the panel is rarely made up of experts in that exact field. For the rebuttal part: while the process of publishing papers almost always involves back-and-forth between the author(s) and the reviewer(s), you don't get that for grants. Once the decision is made, that's it, and your only chance for rebuttal is to modify your grant and submit again next year. Here's also where I've been told NIH is very different from NASA -- for NIH, if you resubmit, you have to respond to the previous year's review point-by-point, just like with a paper review. With NASA, there is no such requirement, and unless you happen to have a reviewer who saw the proposal the previous year - which can happen but tends to be rare - the people reviewing will have no idea that it was submitted previously unless you specifically state in the grant that it's a resubmission.

For the expert part, especially for relatively small fields like astronomy and geophysics, it's best explained by this thought experiment: A typical researcher is supported by several part-time grants. Each of these is usually from a VERY small subset of NASA programs, and their expiration dates are staggered. This means that pretty much every year, you will be writing a new proposal to one of the very few programs relevant to your work. So will the guy or gal in the next office who does work very similar to yours. So will the guy or gal at another institution who does work very similar to yours. Because of this, the people who are BEST able to evaluate your work, because they know the most about it, will have COIs and can't review your proposal, since they're submitting to the same program. This means that any panel will almost necessarily be handicapped: we'll usually be in a related field, but we won't have the years of experience you do to best evaluate your work.

For example, I study impact craters. Specifically, I study crater populations and statistics. When I'm on a panel, I'm usually asked to read any crater-related proposal, but that could mean someone is going to do field work on a crater on Earth to use as an analog for craters on Mars -- and I have never done field work. As another example, I once wrote a single paper on Martian volcanism. On another panel I was on, I ended up being the secondary reviewer on all the volcano-related proposals even though I have never really studied volcanism in my research.

It's unfortunate, but that's the way things are. In those cases, you have to carefully read the background information, sometimes go to the references, and draw on your knowledge of basic principles to determine feasibility. That's also where the panel really relies on external reviews from people who know the field better than it does. It's a problem, though, and while there are work-arounds, there is no perfect solution.

Why Pseudoscientists Don't Like Peer Review

One might think those are the problems pseudoscientists rely on to pick apart peer review and complain about it. One would be wrong.

The two people I've heard rail against peer review - and this is just from my own listening, since I'm sure there are many others - are Skeptiko host Alex Tsakiris and the Coast to Coast AM science advisor, Richard C. Hoagland.

For the former, Alex seems to misunderstand peer review in two ways. I pointed out in a 2009 blog post that Alex was happily trumpeting preliminary results from his psychic experiment on his podcast, claiming that the results were showing psychic stuff is real. Then we never heard anything again. If he had gone through peer review - or apparently even his own review process later on - he never would have talked about these early results as if they were real, because people would have questioned his methods and sample size.

The second way that he seems to miss the point is that he has stated: “Again, my methodology, just so you don’t think I’m stacking the deck, is really simple. I just go to Amazon and I search for anesthesia books and I just start emailing folks until one of them responds.” In other words, his way of finding experts - and reliance on individual "experts" is another issue of Alex's - is to go to popular books and e-mail people until someone bites. Peer-reviewed papers are picked apart by people who study the same thing as you do and are familiar with other work in the area. A book is not. A book is read by the publishing company’s editor(s) – unless it’s self-published, in which case it’s not even read by someone else – and then it’s printed. There is generally zero peer review for books, so Alex going to Amazon.com to find someone who’s “written” on some subject will not get an accurate sampling. And you can't even rely on the "best" rankings on Amazon, because, right now, Mike Bara's "Ancient Aliens on Mars" is somehow ranked #1 in the Astronomy > Mars section.

Published books on fringe “science” topics are generally written by people who have been wholeheartedly rejected by the scientific community for their methods, their data-gathering techniques, and/or their conclusions not being supported by the data. But they continue to believe that their interpretations/methods/etc. are correct, and hence, instead of learning from the peer-review process and tightening their methods, bringing in other results, and looking at their data in light of everything else that’s been done, they publish a book that simply bypasses the last few steps of the scientific process.

The same goes for Richard C. Hoagland. It's been a while since I had a Coast to Coast clip, and for those who were going through withdrawal, here's one to perhaps satiate you: [Clip from Coast to Coast AM, January 15, 2009, Hour 1, starting 30:46]

“You follow your curiosity, which is what science is supposed to be. It’s not supposed to be a club or a union or a pressure group that doesn’t want to get too far out of the box ’cause of what the other guys will think about you. … This concept of ‘peer review’ … is the thing which is killing science. …

“It’s not the peer review so much as the invisible, anonymous, peer-review. Basically, before a paper can get published, … you know you have to go through so many hurdles, and there’s so many chances for guys who have it ‘in for you,’ who don’t like you, or who don’t like the idea you’re trying to propose in a scientific publication, can basically … stick you in the back … and you never know!

“One of the tenets of the US Constitution … is that you have the right to confront your accuser. In the peer-review system, which has now been set up for science, … the scientist – which [sic] is basically on trial for an idea – because that’s what it is, by any other name it’s really a trial, is-is attacked by invisible accusers called ‘referees,’ who get a chance to shaft the idea, kill the idea, nix the paper, tell the editor of whatever journal, ‘Oh, this guy’s a total wacko …’ and you never have the opportunity to confront your accuser and demand that he be specific as to what he or she has found wrong with your idea.”

I don’t know what journals he’s talking about, but for all the ones I know of, his claims are wrong. Just as with the US court system, you have appeals in journals. If the first reviewer does not think your paper should get in, then you can ask the editor to get another opinion. You’re never sunk just because one reviewer doesn’t like you and/or your ideas. And, I don't know what reviews he's talking about with papers -- in every review I've written and every review I've seen, the reviewer is usually very, very, VERY specific about their issues with the paper.

As to the anonymity, while I personally don’t like it at times, it’s necessary as I discussed earlier. Without a referee having the ability to remain anonymous, they cannot always offer a candid opinion. They may be afraid of reprisals if they find errors (after all, grants are also awarded by peer-review). They may also not want to hurt someone’s feelings. They may have their own work on the subject they think you should cite but don’t want to appear narcissistic in recommending it. In short, there are many very good reasons to remain anonymous to the author(s).

However, they are not anonymous to the editor or the editorial staff. If a reviewer consistently shoots down ideas they have a vested interest in opposing, the editors will see that, and they will remove the reviewer.

Something my former officemate is fond of saying is: “Science is not a democracy, it’s a meritocracy.” Not every idea deserves equal footing. If I come up with a new idea that explains the universe as being created by a giant potato with its all-seeing eyes (and if anyone gets that reference from a (hint!) 1990s television show, let me know), then my new idea that I just made up does not deserve equal footing with ones that are backed up by centuries of separate, independent evidence. The latter have earned their place; the former has not.

That is something that most fringe researchers seem to fail to grasp: Until they have indisputable evidence for their own ideas that cannot otherwise be easily explained by the current paradigm, they should not necessarily be granted equal footing. Hoagland’s pareidolia of faces on Mars does not deserve an equal place next to descriptions of the martian atmosphere backed by telescopic, satellite, and in situ measurements from landers and rovers.

Wrap Up

By way of a wrap-up for this different kind of episode, I think it's important to again stress that peer review DOES have issues. But we try to make the best of an imperfect system because there are no better alternatives. And even after peer review happens and a paper is accepted, review continues as other work uses it and finds it to be reliable or not, just as drugs continue to be monitored after they're released and may be recalled if unforeseen problems emerge.

What peer review typically IS good at is weeding out particularly bad research and ideas that don't have solid evidence to back them up. That's one reason why pseudoscientists in particular tend to strongly dislike the process: They simply can't pass it. Their ideas are too tenuous, don't have enough good evidence to support them, can't adequately address criticisms, and/or can't explain why previous observations did not find the same thing or found the opposite, and so they aren't accepted. And, in the mark of a pseudoscientist rather than a scientist, they complain about the system and go off to late-night talk shows to spread their ideas rather than take the criticism with a thick skin and work to better their evidence.
