What Can You Do When Two Reviewers Contradict Each Other?


So you’ve received the reviews back from your NIH study section, but the feedback is not so clear. How should you address the reviewers’ comments?

In this situation we encourage you to use your best judgment. Take a look at all of the reviewers’ comments and criterion scores* and the scientific review officer’s summary of the discussion, and then decide how best to proceed from there. If the summary statement is unclear, you can always contact your program officer for clarification.

*Remember that the criterion scores and comments made by the assigned reviewers usually reflect their reading and assessment of your application before the review meeting. They are sometimes updated after the review meeting, but not always.

74 Comments

  1. Dear Sally,

    Thank you for the advice, but please explain how to deal with a situation in which, on the first submission of an R01, the summary statement indicates that the study section is “highly enthusiastic” about the submitted proposal and has only “minor concerns.” When I resubmitted the proposal with these comments in mind, the proposal was again considered especially important, but a new set of reviewers was brought in: 2 of the 3 assigned reviewers thought I had addressed the prior comments very well, while the 3rd reviewer was negative and raised a whole new set of criticisms not factually supported by published findings. Since this was a resubmission, my lab’s entire line of research was essentially terminated, and all the staff who worked with me for 10 years have to be let go. This happened because on the second submission my grant scored at the 18th percentile due to one reviewer’s new comments that had not been raised in the first review. I hope that you New Investigators out there will see just how unfair the current review system is. Essentially, one is left with no recourse, because another submission is not allowed.

    1. I have heard that the third reviewer is a problem almost all the time. The NIH should assign grants to experts as third reviewers rather than asking someone without expertise to provide a broad perspective. Since third reviewers are assigned grants in areas where they have little or no expertise, they usually do not appreciate the importance of those grants. At the same time, those same reviewers serve as primary and/or secondary reviewers for grants where they do have expertise, and tend to appreciate those grants.

    2. Yes, indeed, the review system is not only totally unfair but corrupted. I have had the same experience, and worse. I have submitted SBIRs that go nowhere. I always get contradictory statements, e.g.: 1) the investigators are highly qualified and experienced vs. the PI and team of investigators have no experience in the field; 2) the budget is accepted as presented vs. the budget is excessively high and needs to be reduced; 3) the environment and facilities are excellent and appropriate for the goals of the study vs. the facility is inadequate for the study; 4) there is no justification of the vertebrate animals vs. a justification of vertebrate animals is well presented in both the approach and the vertebrate animals section; 5) the data sharing plan was not addressed vs. the data sharing plan addressed how the results and compounds to be developed will be shared during the different stages of development; 6) the proposal is highly innovative, well written, and has the potential to make a significant change in the field vs. the proposal is speculative and, due to the lack of preliminary data, enthusiasm is dampened; 7) the target compounds are not identified and have not been validated vs. the identified compounds have been validated and the target proteins are known… Addressing the reviewers’ trivial comments does not work, and it is a waste of everyone’s time, because no matter how well you address them, the next group of reviewers will make totally different comments unrelated to the first review. Furthermore, the purpose of SBIR Phase I is to demonstrate feasibility of a well-conceived idea that is strongly supported theoretically with a sound experimental approach; yet the SBIR is not scored because it does not have preliminary studies as proof of feasibility. I can go on and on… The bottom line is that the academic reviewers do not read the proposals, provide trivial comments, and have no idea how to develop products, yet they review all the SBIRs. As a result, hundreds of excellent proposals have zero opportunity under this grant review system.

    3. Absolutely right, George. My advice to you is this: get in bed with the SRO. I am saying this only half jokingly. S/he knows who the tough and easy reviewers are. S/he knows who the prior reviewers were. S/he (if you are really chummy with him/her and buy him/her lunch, and yes, many will gladly accept a free lunch or dinner) will also tell the reviewers to give your application a good score.
      How do I know this? Because I am a chartered member of a review section. I have seen it all, and anyone who says otherwise is lying.
      How do the SROs feel about this? They love it! They (most of them have failed in science, hence find a secure govt job) love the power and attention.

      And I also agree, it’s a broken system. But the CSR/NIH will never admit to it; too many powerful and unaccountable people’s careers are tied up in this system.

      1. As an SRO I find the above statements by “insider” both insulting and untrue. I did not fail at doing science. I cannot be “bought.” I have always tried to uphold the highest standards in the review process because 1) I believe in the validity of our system of peer review, 2) I want to be fair and do my best for all applicants while at the same time promoting the best science out there, 3) power and attention are of no interest to me, and 4) personal integrity is!

        I agree that the system has its flaws…any enterprise involving human beings will have flaws, but I haven’t seen any system that does a better job. It’s always a challenge trying to recruit reviewers, especially with what seems to be ever tighter rules/regs regarding conflict of interest, and with tighter budgets many people don’t have the time to review. All their energies are focused on writing renewals or new applications for potential funding.

        1. Dear “Experienced”,

          There is a lot in the comments of “insider” that is correct. You obviously feel otherwise, and you may even be otherwise, but your denials do not apply to all SROs. I have served more than 12 years on study sections, and have felt both the favor of an SRO (because I resolved an unpleasant “problem” for them) and the wrath of one (because I refused to solve a “problem” that left some metric of their study section looking other than they would have liked).

          I am utterly, thoroughly, and incontrovertibly convinced that more than one SRO appoints reviewers to their study section whom they know to be harsh or not influential within the study section, and assigns grants to them, when the SRO knows the applicant and thinks they “deserve” harsh or otherwise unfavorable treatment. I have gotten phone calls from SROs prior to study section meetings in which I am encouraged in no uncertain terms to put either a positive or a negative spin on a grant.

          So why don’t I scream, shout, and demand their resignation??? It should be obvious! I am still submitting grants, and can’t afford enemies among that group.

          As I have written in another comment, the fix that this system desperately needs is that applicants MUST be permitted to answer every criticism. We cannot allow proposals that were months in the preparation to be trashed by someone who is not accountable for their opinions, even if that accountability is only in front of other study section members. We should be allowed to respond to any new criticisms by any reviewer, and allowed to say that they are WRONG when they are wrong, without being penalized. That isn’t possible when new criticisms are raised on the last permitted revision.

    4. George Holz’s reply is an excellent example of why eliminating A2 applications was such a mistake. The review process is simply too random in that a poor score by one out of three or more reviewers can prevent an A0 application from getting a score. An A1 application that was unscored as an A0 almost inevitably does not reach a fundable score upon review. Therefore, why bother resubmitting an unscored A0 application? Allowing A2 submissions, even if only allowed for the top 20% or so of the A1 proposals, allows for some slack in the system that can help overcome a single unjustified negative review.

      1. Allowing A2 submissions for the top A1 proposals (indeed I would say all scored applications, but the top 20% is a good start) is an excellent idea. This will help to eliminate the problems seen when the third (usually non-expert) reviewer is the only negative voice.
        In the last five years I have seen review panels shrink dramatically and reviewers asked to go far afield of their own expertise. The third reviewer is often someone who is totally unaware of the general field of the grant, let alone new developments. In my opinion this really hurts peer review and leads to the situation described above. A recent EUREKA panel I served on, where there were five reviewers per grant but only two were experts, took this problem to the extreme, in that all five reviews were treated equally.

  2. Sometimes a new reviewer does spot items missed previously, but this appears to represent a difference of opinion that may or may not have been justified. When the pay-line gets into the single digits, some really good proposals are going to be left behind. I’ve been on study sections since the early 1960s and can only conclude that current funding trends are going to lose us a generation of investigators. The NIH is making do as best they can in an adverse situation. I suppose the appeal process might be tried.

    1. I find this comment, as well as others, quite accurately reflecting the current state of NIH affairs. But it doesn’t touch bottom. I must say, the situation is actually horrifying. In my opinion, science in the US is being systematically destroyed, and this process has been going on for more than a decade already. Today the results are beginning to show, but it’s just the beginning, of course. Indeed, we are going to lose a whole generation of experienced and truly motivated investigators, and at least another generation of young and aspiring ones. I’m extremely pessimistic, because the current climate offers no solution whatsoever. Sadly, among elected representatives there is no effective group of individuals who seem to understand, even at an intellectual level, this huge national problem we now face. And there seems to be no mechanism (means) to explain the gravity and consequences of this to the general public. Well done…

  3. Please understand that a reviewer’s comments are not designed to “fix” your application into a fundable body of work. The reviews are written to give you some hints and guidelines as to why a given reviewer placed your app into a specific range of significance/impact scores. The range takes into account all the other apps being considered in a given session, and frankly, for a really tough session, your status might change (up or down) on a resubmission.

    The old review system allowed critiques to go on for pages and pages of whys and wherefores, detailing the good and bad elements of every proposal. That system was great for young investigators because they got really valuable feedback, but such reviews were very, very difficult to write. The new system allows just a few bullet points, and when this method was initiated a lot of folks questioned how valuable it would be for new investigators, because reviewers now just “skim” the highlights of perceived problems when they write their bullets, and rarely list them all. A revision that attends only to what they listed is therefore not comprehensive, and a new reviewer might easily place different emphasis on different problems. Usefully, there is a box where reviewers can add extra comments, and one hopes this is used constructively, especially for young applicants.

    If you really want to know how your app fared, look closely at the overall score, regardless of what is written in the bullets; THIS is the overall attitude toward your proposal. I always recommend taking your critique to an experienced reviewer who can perhaps help you read the real subtext of what reviewers thought was positive or negative in contributing to your score(s).

    As for your comments on reviews not taking into account “factually supported … published findings,” please also be aware that just because something is published doesn’t necessarily make it a fact. The literature is full of controversy. Peer review of manuscripts is not an exact art, nor is application review. The current system is as fair and honest as it can be, when real people are involved. I sincerely hope you come to believe this, and remember it too, when you are asked to be on the other side of the critique writing (manuscripts or grants).

    1. Those reading this thread who actually want to do everything that is in their own power to get funded–rather than engage in unfounded conspiracy theories and refusal to accept the basic arithmetic of doing away with A2s–would do well to read Dr. Palmenberg’s comment. And then read it again. It accurately describes the real dynamic of reviewer and study section behavior and–unlike the “factual errors of incompetent and biased reviewers solicited by nefarious SROs unfairly killed my application” absurdity–is a useful guide to adjusting your applicant behavior to maximize the chance of getting funded.

    2. For Ann Palmenberg,

      You mentioned: “As for your comments on reviews not taking into account ‘factually supported … published findings’ …”

      Please note that my original statement did not refer to published findings that might contradict the reviewer’s critique. What I was provided in the Summary Statement was a critique in which one reviewer invented a novel concept not supported by published findings, one that argued against my main hypothesis, which was fully supported by preliminary findings. In other words, the reviewer invented a reason to criticize my grant under conditions in which no published findings supported the reviewer’s contention. This strikes me as especially dishonest, yet my Program Officer urged me not to appeal the review since appeals have such a low success rate. In fact, I was told that no approved appeals were eventually funded after re-review by my study section. This seems to make the appeals process irrelevant.

      What is your opinion – is it legitimate for reviewers to offer negative opinions based on unpublished findings? I don’t think so. Who is keeping track of this? What safeguards exist to prevent reviewers from using “inside” or “secret” information with which to criticize grants? What safeguards exist to prevent reviewers from simply inventing contradictory sets of ideas that refute a submitted proposal, but for which there is no basis in fact, as established in the published literature?

  4. This reply to a common and important question is utterly vacuous, while Holz describes consequences that are real and highly destructive to American science. Because there is so little money available now for research, and prospects for more are bleak, reviewers have become nihilistic (“we can’t distinguish among the best 10% of proposals, so why exert yourself on those in the 10-50% range”). Yet, each individual reviewer down to the 3rd discussant is essentially vested with veto power (any nattering nabob of negativism can condemn even the best proposal into the unfunded range).

    If only one amended proposal is permitted, then applicants should only be measured on their responses to one set of criticisms. The appearance of new criticisms should automatically qualify a proposal for submission of an amendment so that an applicant gets a fair chance to answer them. After all, applicants are almost always more expert about the subject of a proposal than the reviewers. Applicants should be treated accordingly.

  5. George

    I completely agree, though my experience is a mirror image. In Y34 of a project, with a lot of people on board and the best productivity in the grant’s history, we were triaged and received a 40+ percentile. One reviewer gave it a 1; another gave it a 4; and a mail reviewer gave it a 5. The first reviewer forced a discussion. The hostile reviewers were technically (and ethically) wrong and appear to have colluded. The SRO notes reflected other faulty attacks that weren’t even mentioned in the short reviews (another innovation that I find masks poor reviewer qualifications). My solution? Down to the wire and fully committed to my work and my people, I requested that the offenders be recused. With a new panel, and technically minor scientific revisions, I moved to 6%. Will I get funded? Who knows? But the reviews were much fairer the second time around. The two-and-out game is doing far more damage than success rate statistics indicate.

    R

  6. Yes, bringing in new reviewers for a resubmission is standard practice. Technically, no information about your prior submission is meant to be retained by the SRO, so they really should have no ‘formal’ information as to who reviewed it last.

    I think that new reviewers are brought in on a percentage of A1s to avoid clustering of scores at the top end. But really, the whole review process is a joke: if you sent the same grant to multiple study sections, the SD would probably be something like 30 or 40 percentile points. The funding line is so far inside the error bars that the exercise is a waste of time; they should just do a lottery for the top half and save everyone the trouble.
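    To make that claim concrete, here is a minimal simulation sketch. This is not NIH data: the payline and the noise level are assumptions taken from this thread, and the model is deliberately crude.

    ```python
    import random

    # Toy model: each grant has a "true" quality percentile, and each panel's
    # score adds Gaussian noise. NOISE_SD = 15 gives roughly the 30-40
    # percentile-point panel-to-panel spread the comment above describes.
    random.seed(1)
    PAYLINE = 10        # fundable if the scored percentile is <= 10 (assumed)
    NOISE_SD = 15       # assumed review noise, in percentile points
    N_GRANTS = 100_000

    flips = 0
    for _ in range(N_GRANTS):
        true_quality = random.uniform(0, 100)
        score_a = true_quality + random.gauss(0, NOISE_SD)  # panel A
        score_b = true_quality + random.gauss(0, NOISE_SD)  # panel B
        # Count grants that one panel would fund and the other would not.
        if (score_a <= PAYLINE) != (score_b <= PAYLINE):
            flips += 1

    print(f"{100 * flips / N_GRANTS:.1f}% of grants change funded/unfunded"
          " status between two hypothetical panels")
    ```

    Under these assumptions, on the order of one grant in ten lands on opposite sides of the payline depending purely on which panel scored it, which is the commenter’s point in miniature.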

  7. This is great!

    Q: How do I deal with conflicting reviews?

    A: Use your judgment.

    If that is how I answered concerns on my applications, they’d get triaged.

  8. First, let me point out that this blog entry does not constitute advice, per se. “We encourage you to use your best judgment” is advice in the same vein as “we encourage you to write a good proposal” and “we encourage you to respond in a manner most likely to result in a funded grant.” It’s not actually advice. Please stop writing blog entries along these lines.

    Second, to address the question posed by the entry itself: what can you actually do? Your most realistic option is to exhaustively address the criticisms offered by the critical reviewer, while thanking the positive reviewer. You can argue with the critic, which will convince precisely no one, particularly the critic, if s/he receives your resubmission. If the critic gets the resubmission, this is your only fundable option: address each and every criticism leveled by the critic to the extent possible. It is exceedingly unlikely that your critic will give a thoughtful “huh, hadn’t considered that, yup,” reconsider his/her comments, and give you 1’s across the board after you illustrate the error of his/her ways. If you get a new reviewer, well, who knows? Sometimes they check to see whether previous concerns were addressed – in which case you’ll want to address them – and sometimes they don’t – in which case it’s a crap shoot. Good luck.

  9. Most reviewers are either incompetent or don’t take the time to understand the science proposed in the application. In addition, it is my personal experience (I used to participate in the review process) that most reviewers favor their friends’/collaborators’ grant applications. For the review process to be fair, NIH needs to institute a blinded review process. In the absence of such a review process, NIH should stop wasting US taxpayers’ money, because the current system is grossly unfair and flawed.
    It is ridiculous that Program Directors trash a grant when only one out of three reviewers gives a detrimental critique, without evaluating or having the ability to evaluate such a critique. It is unfortunate that the bad comments of one reviewer, justified or not, overcome the good comments of the two other reviewers. Sadly, the bad comments often have no scientific basis. I am interested to know if a Program Director has ever asked for scientific evidence to support the unfavorable comments of a reviewer.

  10. Dr. Holz’s experience is a classic example of why the recent change to a “two-strike” rule is so wrong. While most members of Review Panels are excellent, one bad apple hitting a resubmitted Application becomes a tragedy. The old admonition “Measure three times, cut once” should apply to the Review Process….

  11. As a former NIH Extramural Branch Chief, I have seen this occur during an SRG meeting. However, a good chairman and SRA should have this “resolved” before the final vote is taken. Each reviewer can make their case to the committee. When the final vote is taken, if the overwhelming majority of the SRG sides with one reviewer, the vote of the recalcitrant reviewer can be eliminated as an “outlier.” He/she has not made a case that can convince an impartial panel. Their vote should not be counted. At least that’s the way it used to be done.

  12. I agree with George Holz’s comment. I have seen this happen several times on a second review (including with one of my R21s), essentially terminating a proposal that had received an excellent score but just missed the payline. There is no chance to respond to a new reviewer with new criticisms (whether valid or not).

  13. This is something that the NIH administration should deal with. The scientists who are part of peer review and who apply for grants have petitioned to allow the A2 for cases such as this, but it is not in the country’s best interest to have outstanding grants discarded because of problems such as those described.

  14. There is no formula for success in grant submissions when the payline is this low. The vast majority of grants are going to fail. There is frustration with the review process, which is imperfect, but that’s not the primary problem that prevents worthwhile grants from achieving fundable scores. Blaming the reviewers is like killing the messenger. To be successful a grant must be seen as high impact, low risk, and it must fit the cultural norms of the study section to which it is assigned.

  15. I believe it is really the SRO’s job to ensure we get a good review. Unfortunately, this seems to happen very rarely. If NIH tried to ensure that the same reviewers were used for the A1, then I believe the system would improve. TBF, I was helped by an SRO once. He called me up during the afternoon of a review to tell me that my grant had been pulled from a particular study section due to negative bias and assured me that I would get a fair review, which I did. My project was funded. I think NIH needs to develop sterner guidelines on how these reviews should be conducted and hold the reviewers accountable to them. I do not think NIH is biased, but reviewers, like all humans, are not perfect.

  16. I assisted a research resident in the submission of an NRSA postdoctoral fellowship application. One review stated, “However, the project is considered ambitious for a two year period of time.” The other reviewer concluded, “It should be possible to complete the experiments outlined in far less than two years.” The application was not funded, and the resident did not gain much respect for the NIH peer review process with these conflicting critiques. Clearly there is no agency control over the reviewers.

    1. That’s classic, Richard, simply classic. And NIH wants to encourage young investigators? They sure aren’t doing it that way.

    2. IME, this is exactly the sort of discrepancy that gets hammered out during discussion. The fact that the reviewers didn’t bother to edit their critiques afterward has no bearing on this. Something like this is very, very rarely the difference between triage and discussion, btw.

      1. I disagree with you about these things necessarily getting hammered out in discussion. Reviewers assign an overall score that is not necessarily reflective of the individual criterion scores. If a given reviewer is left with an overall impression of the proposal that is colored by “too ambitious” or “not ambitious enough,” along with a few other minor comments, an overall score of 3 or 4 can be enough to pull a proposal out of the discussion pile.

        1. I did not say “necessarily”; I said IME, BugDoc. More importantly, we have to assume from the comment that this was, allegedly, the only issue. My point is that if the disposition of the score hinges on this issue, that proposal is going to be discussed. If there are many other issues dragging down the score, this minor disagreement is irrelevant.

  17. The biggest problem here is the NIH cutoff of submissions at 1 revision. Supposedly this was done to limit submission of “not ready” proposals, lower reviewer workloads, and prevent reviewers from “writing” the proposal. But this idea is flawed. If the idea is to fund the best science then why are we throwing out proposals that scored in the 12th percentile? In “normal” times this proposal would have been funded, now if it was an A1 it is prevented from being funded. This makes no sense. Sure, if a grant is triaged twice, then the reviewers should be able to say they don’t want to see it again. But otherwise, if it is being improved and is evaluated as good science, why should the NIH block good ideas from funding?

  18. I have participated in many study sections and concluded that the biggest flaw in the peer review system is that, as far as any one application is concerned, the study section is actually composed of only 3 reviewers (primary, secondary, and reader). Unless you are personally known to one of the reviewers, they are unlikely to go out of their way to challenge a bad reviewer or a louder reviewer. The remainder of the study section will hence vote in accordance with the views of the louder reviewer. This results in a significant funding bias toward applicants who are known to the study section panel. Your application could also get reviewed by a study section member who is your professional competitor, and then you’re toast. Given the low funding rate, it really only takes a few minor comments to drag an application into the unfunded range. This all results in a bit of a lottery. Is there a better system that is practical? Maybe if submitted grants were reviewed by more than just 2+1 reviewers, and the burden on reviewers were reduced by somehow limiting the number of applications one can submit per year, i.e., forcing applicants to write their best proposals, which in turn get the best reviews.

    1. Exactly right. You need more reviewers to reduce random biases due to differences in backgrounds and interests, let alone competition.

  19. One solution to problems like George Holz’s is to ensure that the same three reviewers evaluate the revision. Once you review a grant, you are on the hook to review the revision, even if you are rotating out. Do not bring in new reviewers the second time at all. If one of the reviewers is absolutely not available for some reason, just use the remaining two instead of bringing in someone new.

    If you bring in someone new, the applicant has to be given a chance to respond to their concerns. This opens a Pandora’s box, because it is hard to tell which concern is “new” and which is not. It is best not to do that.

  20. Absolutely right Observer. The fate of a grant should not depend on just 2-3 reviewers. The whole idea of a meeting is that EVERY one of the reviewers gets an equal chance to critique and score EVERY application.

    And what is the CSR’s solution to this? Only the 2-3 assigned reviewers are allowed to score the application in IAR. Moreover, before each SS meeting begins, the SRO instructs all members that if they wish to deviate more than 0.1 points from any of the scores given by the assigned reviewers (I forget exactly what the threshold is nowadays with the new scoring system), they have to clearly “justify” their score. Result: no one wants to rock the boat, and everyone plays the game; legitimate challenges to the scores of the assigned reviewers are extremely rare. Unless of course the application under consideration is from a friend (or enemy) of yours (I won’t even go into how totally useless the system the CSR has in place for “identifying” conflicts is; according to the CSR, simply sign a piece of paper and you have no conflicts!).

    1. That sure wasn’t true in the IAR I participated in a few weeks ago. It was run exactly like an in-person meeting, and everyone got into the discussion and debated things actively. Everyone scored each grant at the end, just as in a normal study section meeting. Not only that, you were able to consider your score much more carefully, because you had the whole discussion in front of you when you finalized it (plus all the time you needed to do additional literature searching to make sure you understood the background on things you weren’t sure about). All the reviewers did their jobs well, and the SRO was active in moderating the discussions as needed.

  21. Elimination of A2s was a terrible idea because it removed an opportunity for clarification and for implementing proposed changes. A2s would allow applicants like George Holz to get a fair hearing despite one rotten apple.

    I’ve found the bullets to be easy on the reviewer but often unhelpful to the applicant. The reviewer can use a bullet statement to mask his/her real problems with the work – by not having to spell them out, they cover their back. It is a real sop to lazy and biased reviewers.

    Also, shortening the SS meetings to one day has made things worse. At the end of the day many reviewers (including the SRA) are in a hurry and because the applications reviewed are the ones with lower scores (<40 percentile), it is tacitly assumed that the likelihood of funding is low; sometimes there is a palpable rush to finish and go for dinner, especially if 'too much' time had been spent on more 'worthy' applications.

  22. I listened to the podcast on appealing your review, and the naivete displayed was remarkable. Statements like “a single reviewer can’t bring down a grant” or “the program officer was likely at the review” are laughable. They seemed to think that a single bad review will be offset during the discussion and the good reviews will win. This, of course, assumes the bad review did not pull the grant into triage. Since some review panels have a triage line at 3, this is easy to do. Also, I have NEVER had a program officer present or phoning in to hear ANY of my grants reviewed. Is it just me, or is this a common occurrence?

    1. You’re right on, Peanuts. Just like many SROs, POs don’t give a rat’s behind about how fairly or unfairly the review process runs.
      All it takes is ONE bad review (which could be from a reviewer with a hidden agenda or conflict) to kill an excellent grant. No one cares… not the study section chairman (unless he is a buddy of the applicant), not the SRO, not the PO.

  23. The review system is like the system of laws: it works better than any other over the long term, but can be frustratingly inaccurate in the short term. It relies on deficiencies becoming egregious and consistent before there is a reaction and a modification is made in the protocol to address them. I don’t think it is fair to cite any one case of review like those cited above. Instead, one needs a view of how it works across this very large system. Unfortunately, there will always be victims, like the many African-Americans and women who had no say (no vote) in the policies of this country before the law was changed. But that’s how it works: the permanent change improves the system going forward only after it has worked through a rigorous process of debate to show that it is indeed an improvement. Is the A2 change an improvement? Trust the system-wide data, not anecdotes.

    1. Then let’s change it but not wait another 50 years to do so.

      Let’s demand that our tax dollars be allocated in a more transparent, fair, and accountable fashion.

      Let’s weed out the corrupt and self-serving reviewers, SROs and POs.

  24. Dear “Experienced”,

    Don’t take it personally. Perhaps I didn’t put it as diplomatically as I could have, but as Bert Singlestone testifies, people like him and me, who have been in the system and served on many study sections for many years, know exactly how it works. Perhaps you are a diligent and honest SRO, but there are many who are not. It only takes one bad apple…

    I have had exactly the same experiences as Bert: SROs who will tell me, in no uncertain terms, that this application must be triaged and this one not; SROs who have asked me for the names of reviewers to whom they should assign my application (when I am in his/her good books). When I have tried to be honest and resist the pressures of an SRO, I end up not being invited to serve on study sections.

    Using the legitimate and up-front means published by the NIH never works. For example, courteously asking an SRO (with whom my relationship is rocky) to exclude a reviewer who I know has a personal or professional conflict with me usually evokes the response “it can’t be done.” So I go through back channels.

    What impression does one come away with after such experiences? The system is CORRUPT, and if you try to change the system, you’ll get burned.

    Am I ashamed of what I have done? Sure I am. But it’s a matter of survival.

  25. I have been continuously funded through NIH R01s for over three decades. I have served on NIH study sections (R01, R21, program projects, etc.) for over 25 years, including a stint as a regular member of a chartered study section for 4 years, from 1993 to 1997. During that time the payline for NIAID was at <10%, pretty much as it is now. However, there was nowhere near the frustration level expressed herein during that time. I think the NIH review system at present is very problematic for a number of reasons. They include, but are not limited to:

    1. The inability to submit more than two times. (Hell, even in baseball you get three strikes before you are out at the plate, and you are still not even out of the game!) As mentioned above, it only takes one misguided reviewer to sink an outstanding research program. That is INSANE and frankly does not bode well for the health of medical research in the US. Most people would not come close to believing what I have seen transpire at study section meetings from supposedly qualified reviewers who clearly trashed a grant for completely unfounded, even unethical, reasons. In this regard, I wonder whether NIH has any reliable data about the impact of submitting a grant application three or more times. In the past I saw grants that had been submitted more than 5 times, and that had very little, if any, impact on the flow of the study section meeting. We dealt with them, and some of them even got fundable scores. Who came up with the idea of only two submissions? I think they need their head(s) examined!

    2. Along those lines, I do not think that reviewers are as well qualified as they have been in the past. There are many reasons for this, including the increased complexity of science, but also a more arrogant attitude among inexperienced reviewers about what they think they know versus what they really know. They do not even pay attention to the qualifications of a highly experienced investigator when they make flawed critical remarks about something they know very little about. My sense is that there are more junior faculty members on study sections than in the past, which contributes to this problem more than many realize. In the past we rarely saw an Assistant Professor on any study section. I personally know of a recent situation where an inexperienced reviewer took the place of a highly experienced one due to personal circumstances that arose just before the study section met. In that case, the inexperienced reviewer who took over the review trashed the grant based on highly flawed reasoning, when I know it would have been highly lauded by the experienced reviewer, who later told me so. I had not seen that type of behavior in the past. There are simply no checks and balances for it. How can that be good for US science?

    In general, I have a sense that the NIH review process is very nearly broken, and this has clearly been exacerbated by recent poorly-thought-out decisions pertaining to: (i) the number of grant submissions allowed, (ii) the very poor new scoring system (i.e., fewer numbers that can reasonably be used to score an excellent application), (iii) the increased inexperience of study section members, and (iv) their increased unwillingness to take the time to understand the science they are supposed to be evaluating. The review process has clearly become more arbitrary and capricious, which is not the direction it should take no matter what the payline becomes.

    1. Michael Vasil’s comments really get to the heart of the matter. When they were introduced, the new review format and the elimination of A2 submissions struck me as an unmitigated disaster in the making, designed simply to make life easier for reviewers and the NIH bureaucracy in the face of the rising number of applications/resubmissions as paylines have gone lower. Nothing I have seen, either as an applicant or as a study section member, has caused me to revise this opinion. It is a sad statement for our country that funding levels have reached the single digits, but equally sad that the response of the NIH has been to dispose of applications more rapidly by reducing the quality of the review process. Vasil’s comments concerning reviewers are equally valid. Many less experienced reviewers tend to be narrowly focused “hotshots” who lack a broader perspective on their respective fields and thus often fail to understand important merits or failings of the proposals they are charged to evaluate.

    2. Funny, my experience has been that older/experienced, mid-career, and young newbie investigators are approximately equally likely to be out of their depth, ride hobby horses, and/or exhibit entirely predictable biases. I also find that when experienced investigators complain about younger reviewers, it is generally because 1) they are being held to the same standard as junior investigators, particularly when it comes to writing a coherent proposal, 2) they can’t grasp that their same-old, same-old science isn’t competitive anymore, and 3) confirmation bias.

      Given that *all* reviewers exhibit biases, the best solution is the competition of biases by endeavoring to constitute review panels that are as representative as possible.

  26. Learning the SBIR process is difficult. Meeting all of the submission requirements, including putting together a solid team, is even more difficult when you are starting a new business based on the proposal’s success. I was recently awarded a Phase I on my first try, successfully completed it, and submitted a Phase II.

    I, like many others, had one bad reviewer. S/he gave poor (numerically high) scores on the Phase II proposal, even where no justification was given in some areas. How do you respond to that?

    I do appreciate what all the reviewers were trying to tell me and will work hard to make sure our resubmission addresses the negatives. For me, I still want to believe in the system and believe that there are fair and impartial individuals who just want to help you succeed. My company’s success moving forward depends on it.

    My major concern is the long delay between proposal submissions. You have a functional team for Phase I and are put in the position of formulating another team for Phase II, because people go elsewhere during the down time. This is especially true if you have to do a resubmission.

    I am finding out quickly that a lot of grant writing success has to do with knowing how to play the game or hiring someone who does know.

  27. What I find intriguing is that NO ONE in the science community appears to support the new A2-less policy. A recent petition to reinstate A2s, signed by thousands of NIH-funded PIs, was off-handedly dismissed by the NIH.

    Do they actually listen to their constituency?

    Obviously, elimination of A2 proposals has made things easier for the feds. But when it is overwhelmingly seen as damaging to research, isn’t it time to re-review the decision?

  28. I share many of the opinions expressed above, but we must realize that arguments from personal, anecdotal experiences with reviewers will always be turned into “sour grapes” stories. That is, why is it always the case that a “fair” review is one that funds our grants, and vice versa? It would be more productive, in my opinion, to simply state what’s broken about the current system along with ideas for how to fix it. If people insist these things don’t happen, then we can bring out the anecdotes. So far, we’ve got at least the following problems and suggested fixes:

    1) Inconsistent reviews (“it’s too high-risk” and simultaneously “not innovative”). There should be some mechanism to force the committee as a whole to decide which view is more reasonable or declare a “mistrial”.

    2) Criticisms with no opportunity to respond. One motive for eliminating A2, as I understand it, was to force more careful consideration at the A1 stage, but that assumes reviewers aren’t just looking for any excuse to eliminate proposals. Maintaining the same reviewers might work, but only if they are not allowed to introduce new criticisms (this would also provide incentive for more complete reviews the first time).

    3) Sensitivity to outliers. Maybe trimmed-mean scoring with a larger pool of reviewers works better; isn’t that how the Olympics handle the problem? (See the sketch after this list.) Better management by the chair would also help.

    4) Undisclosed conflicts of interest. It’s easy enough to check collaborations nowadays, but harder to detect competitive conflicts, especially given a shrinking pool of willing expert reviewers, almost all of whom will be competitors for the same pool of research dollars. Standardize the practice of allowing PIs to blacklist up to n potential reviewers, as some journals do. (Also, be sure young investigators are educated about how study sections really work.) Establish blind review (but this is very hard to do given the nature of the work and the need to establish investigator competency).

    5) Increasingly arbitrary and capricious reviews due to a combination of factors. Could CSR institute some mechanism for monitoring and measuring this? I.e. if most of us agree that the variability in scores for the same proposal across (related) reviewers would be much greater than the payline, why not measure it with a test proposal from time to time? Is there a system of checks and balances that could reduce the potential for abuse by an SRO?

    6) Inexperienced reviewers. Some sort of mentoring system? Reduced load for novice reviewers so they have more time to survey related literature? CSR has little control over the pool of available reviewers, but I, for one, would feel better if the project manager, who presumably knows something about the science and scope of the program, could provide input. Maybe this could be limited to advice in the form of answers to direct questions about the science in a proposal. Obviously there was a reason for introducing a firewall between review and project management, but the comments above indicate that opportunities for abuse have just moved from project management to the SROs.
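    On point 3), here is a minimal sketch of what trimmed-mean (Olympic-style) scoring could look like. The five-reviewer panel and its scores are hypothetical, and this illustrates the idea rather than anything NIH actually does:

    ```python
    def trimmed_mean(scores, trim=1):
        """Drop the `trim` highest and lowest scores, then average the rest."""
        if len(scores) <= 2 * trim:
            raise ValueError("need more scores than values trimmed")
        kept = sorted(scores)[trim:len(scores) - trim]
        return sum(kept) / len(kept)

    # Hypothetical five-reviewer panel on NIH's 1 (best) to 9 (worst) scale,
    # with one hostile outlier:
    scores = [2, 2, 3, 2, 8]
    print(round(sum(scores) / len(scores), 2))   # plain mean: 3.4
    print(round(trimmed_mean(scores), 2))        # trimmed mean: 2.33
    ```

    With a plain mean, the single outlier drags the panel from a clearly fundable ~2 to a 3.4; trimming one score from each end restores it. The cost is that a lone dissenter with a legitimate objection is silenced too, which is why the chair-management point above still matters.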

    1. Eliminating supposed “competitive conflicts” flies directly in the face of those whinging about reviewer expertise and experience, does it not? The more specific the expertise for a given application, the more likely the reviewer is going to be a scientific rival.

  29. The idea that one reviewer cannot sink a grant is ludicrous. With hypercompetitive paylines and no weighting of the opinions of reviewers 1, 2, and 3, reviewer 3 can absolutely submarine a grant for the most ridiculous of reasons. I have seen it from both ends: in study section, where a third reviewer thinks he or she has to contribute something and tries taking the grant down a peg or two, and in a submission I recently got back where reviewers 1 and 2 gave us 1’s and 2’s across the board, and reviewer 3 clearly had not read the program announcement we were applying to, or half of the grant. The result: 30th percentile, better luck next time. I’ve seen the same behavior from young scientists and experienced reviewers on grant panels, and I know I’ll have to be damned careful not to take my current frustration out on the next set of applications I review. The system is highly subjective and flawed.

  30. Anecdotes aside, this post is not very helpful at all. The gist of it is “look at your pink sheets, then decide”. Well gee, thanks! What would I have done without this sage advice?

    One thing I’ve gotten in the habit of doing is putting the scores IN the rebuttal, so the reviewers can actually see (be reminded?) how they scored it last time. It helps with pointing out discrepancies between reviewers, and provides a solid frame of reference for the (new?) reviewers to see whether the new proposal is better or worse.

    Regarding the whole A2 thing, one of the key implications concerns getting a triage score initially. Realistically, what are the chances of your score going from >50% to <10% in a single round? I’ve seen grants jump an absolute maximum of 20 percentile points between rounds. So, if you got triaged in the first round, you will get maybe a 20% score on the A1. 20% is still unfundable… game over! Why even bother with the A1? To get funded (<10%) on the A1, you’d better get <30% on the initial submission. Worse score… don’t even bother resubmitting. I think a lot of people don’t grasp this, and continue with an A1 submission even if they got triaged the first time. This is just wasting time and effort – far better to go back to the drawing board and submit it as a new proposal, saving yourself a cycle.
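    The arithmetic in that last paragraph is easy to make explicit. The 20-point ceiling is this commenter’s anecdotal assumption, not an NIH figure, but granting it, the viability cutoff falls out directly:

    ```python
    PAYLINE = 10     # percentile needed for funding (assumed)
    MAX_JUMP = 20    # assumed best-case A0 -> A1 improvement, in points

    # An A0 can only reach the payline on the A1 if it already scored
    # within MAX_JUMP percentile points of it.
    worst_viable_a0 = PAYLINE + MAX_JUMP
    print(f"A0 scores worse than the {worst_viable_a0}th percentile "
          f"cannot reach a {PAYLINE}% payline on resubmission")
    ```

    As the replies below note, real A1s sometimes jump far more than 20 points, so the cutoff is a rule of thumb at best.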

    1. On a recent A1 I went from 46%ile to fundable, and I’ve heard plenty of similar anecdotes from colleagues. It absolutely can be done.

    2. Get a <30% on the A0 to get funded? My R21 went from the 27th percentile to the 32nd. All new reviewers. Why even bother with a carefully crafted response letter if the new reviewers don’t even care about the response?

  31. Hmmm. I was asked to sit on a grant review panel. It will/would be my first in this field. But reading this discussion makes me wonder about bailing out. It would be good experience, maybe, and possibly good for the cause, but I certainly don’t need this level of aggravation.

    An optimal system might bring newbies in as observers for one or two panels, teamed with mentors as mentioned earlier. But apparently that’s not how it works.

    So, I’d appreciate any suggestions from experienced reviewers on how a new reviewer should properly function in the situation described above.

    1. The discussion about all the horrible things that happen in review is horribly overblown and not representative. You should definitely give it a look-see and find out whether your experiences are similar to the lurid descriptions in this thread.

    2. What DrugMonkey said. What you are reading here is a self-selected parade of horribles concerning NIH study section. My experiences of study section service have been extremely good, and have given me faith that the NIH peer review system actually does an outstanding job at a very difficult task.

      And not only that, but it will be extremely instructive for you as an applicant. You will realize that all the paranoid raving about nefarious SROs intentionally “torpedoing” applications with harsh reviewers, etc, is absurd. And you will learn about the *real* dynamics of review that do influence your scores, so that you can employ effective grantsmanship.

  32. 1. Get rid of -A1. NO RESUBMISSIONS.
    2. Get rid of triage. All proposals and critiques should be discussed.
    This will eliminate “moving targets” and patently nonsensical critiques. Problem solved.

    1. “Problem solved.”

      This comment is addressed to all of the commenters that promulgate “solutions” to the “problems” of NIH peer review:

      If your proposed solutions are enacted, does that mean that when applications fail to receive fundable scores (which the vast majority will do, given the current funding climate), the applicants will still consider the system “fair”? Most importantly, does that mean that *you* will consider the system “fair” when your grant fails to achieve a fundable score?

      1. CPP:
        I can distinguish between a rational but negative critique and a nonsensical critique based on “no, because 2+2=7.” I can accept a negative but meritorious critique. Moreover, I can genuinely appreciate such a critique (in the worst case, it makes me stop wasting time on something dumb). I can even accept a rejection based on the argument “this is reasonably good, but we have different priorities.” Or “this is too risky.” Fine. Amen. Thank you.

        Unfortunately, NIH critiques are often enough based on the above-mentioned “2+2=7”, which is just a manifestation of “I am not feeling comfortable with this piece of science, so now I have to manufacture some justification to make the SRA happy, even though I don’t really understand the science involved”. And this is where NIH has no proper checks and balances in place.

        I am privileged, because I work on method development within a rather exotic area. This has allowed me to accumulate a substantial body of critiques from both NIH and NSF, with the “competence” factor emphasized. There is practically a guarantee (statistically!) that, out of three, two of the NIH reviewers will have no clue about the science involved, and the third one will be more or less knowledgeable. Of the clueless two, one will write a lukewarm, generally positive critique, based solely on my own appraisal of the methodology. The other will write wild nonsense, just to justify his “no.” Then the SRA will assemble a Summary Statement, and any attempt to dispute the nonsense will be considered a vile, sacrilegious attack on the peer review system, which is “good and ethical” (by definition).

        NSF may not be perfect, but discussing ALL applications by the whole panel there, and the stronger position of the POs, allow for filtering off the most egregious nonsense (perhaps also by pre-selection of the reviewers? I am not sure). And this is good enough for me, even when ultimately my application is not funded. CPP, I really can accept the 10-15% success rates as a fact of life. I just don’t want to receive Summary Statements that read like “Alice in Wonderland.”

        Please also consider that these 10-15% success rates mean that one idiot can make the difference between funding and not funding, but that is a topic for another discussion.

          1. GG, if, within the imperfection of peer review, NSF is responding to your views on quality reviews and NIH is not, just go to NSF and get your science funded, tested, and utilized. It is like sending your paper to a journal that turns it down once and then again: go to the journal where your science and your way of presenting it can be favorably viewed.

            Your reviewers may be saying 2+2=7. Or perhaps what you are presenting appears to be 2+2=7, and the reviewers don’t have the time to check it out.

          1. I DID get NSF funding. With some moral discomfort.

            NSF funds everything from astronomy to physics to climate science, doing it with a budget that is $6.8 billion (2011 appropriation), vs. NIH’s $29.5 billion. Considering this funding disparity, it is simply not right when NSF has to take up the slack and fund method development relevant to drug discovery.

  33. There is practically a guarantee (statistically!) that out of three, two of the NIH reviewers will have no clue about the science involved, and the third one will be more or less knowledgeable.

    If it is happening to you repeatedly that more than one reviewer is failing to understand the science underlying your proposal, then you are apparently doing a poor job of *explaining* the science involved properly for your audience. Unlike NSF, the membership of NIH peer review panels is publicly available, and you can see who the standing members and (for prior rounds) ad hoc members of a study section are. You should be writing your application with a specific audience in mind.

    1. CPP, I will respond with an example. To “write my application with THIS specific audience in mind”, I would have to get myself a lobotomy. Or smoke something really good:

      1. “SIGNIFICENCE/Weakness:
      It seems likely that a malevolent group engineering a disease vector would adopt as a first stratagem enabling it to resist the current antibiotics and this would largely bypass the stratagem being developed in this proposal.”

      2. In just the following paragraph, the SAME(!) reviewer wrote:
      “APPROACH, Strengths
      This approach would be antibiotic independent.”

  34. As a junior investigator, I am horrified at what I have read here. This is the way science succeeds or fails? Really? After all these years of training and anguish, it comes down to this? What a shame…

    Some posts here suggest that you can modify the process by asking for hostile reviewers to be eliminated… well, how exactly does one know who did what on a study section? Am I missing something?

  35. I was wondering if anyone here has advice about the appeal process. I resubmitted an R01 to a notoriously malignant study section, only to have it score 30 points lower the second time (despite several publications and a significant amount of data in the interim).

    1. I have no advice to offer, just a reminder to NIH that the last update of the appeal process and its regulation took place in 1997. NIH should seriously consider reviewing it and putting that regulation into perspective.

      The appeal process is a critical component of peer review’s quality control (to minimize, if not eliminate, errors and arbitrariness, and to maximize rigor). It should not be left lagging behind and obsolete, particularly after all the “Reviewing Peer Review” that took place during Dr. Scarpa’s tenure.

  36. I think reviewers should place more emphasis on ideas than on experience/risk when scoring a grant. I suggest that grants be reviewed blindly, without knowing who the applicant is or which institute he comes from, and that no revision be allowed. Do you think revision makes a significant difference? I bet most revisions, like my own, just patch the application to specifically address the critiques (if they are correct). Or can you imagine fair-to-bad science turning into a bright, million-dollar grant after a few months?
