Assuring the Integrity of Peer Review

Eight months ago, CSR Director Dr. Richard Nakamura and I posted a blog on “A Reminder of Your Roles as Applicants and Reviewers in Maintaining the Confidentiality of Peer Review.” We asked you to imagine a scenario: you are a reviewer for an upcoming panel meeting, and shortly before the meeting an investigator associated with an application communicates with you, asking for a favorable review in exchange for an academic favor. We asked what you would do – accept the offer, ignore it, or report it?

We used the blog as an opportunity to remind all of us how important it is that we all do our utmost to assure the integrity of peer review. Failure to do so, we wrote, will “result in needless expenditure of government funds and resources, and erode the public trust in science.” Furthermore, we noted that there are potentially serious consequences for reviewers and for investigators or others associated with applications who engage in behavior that violates the integrity of NIH peer review.

Unfortunately, our blog foreshadowed just such an event. NIH has recently determined that there has been a breach in the integrity of the panel review process of a batch of applications.

NIH takes the integrity of peer review seriously, and we appreciate that the vast majority of individuals also take the integrity of peer review seriously. Accordingly, after much thought and deliberation, we decided we had no choice but to cancel the panel’s review. The consequences are serious: dozens of applications will need to undergo a re-review.

When the integrity of peer review has been breached, it affects everyone. We regret that the dozens of affected applicants who did nothing wrong will face substantial delays in getting their applications reviewed and processed. We appreciate that the panel reviewers spent a great deal of time and effort reviewing dozens of applications, traveling, and participating in meetings. NIH must assure a fair process for everyone and will not stand by when the integrity of our peer review process is compromised.

We are grateful to the tens of thousands of reviewers and applicants who do play by the rules and who take as seriously as we do the critical importance of the integrity of our processes. This case is a reminder for all of us that we must be ever vigilant.

77 Comments

  1. Surprise. Surprise. Let’s get our heads out of the sand! I have been an NIH grant peer reviewer for 25 years on 3 different study sections. I have had SROs come to me and tell me to score a grant well because “he is one of us”. I have heard reviewers say openly during the review session that so-and-so was a graduate student in his/her lab and then give the grant an outstanding score, without the SRO intervening. I have brought this to the attention of the SRO but no action was taken. I have had people I haven’t talked to in 10 years email me to tell me to score their application well for “old times’ sake”. I have ignored their emails but haven’t reported it because I know the NIH won’t lift a finger, and these same people will be on review panels that score my grant. It’s totally corrupt, and NIH officials ought to wake up to this reality.

    1. Similar experiences as a study section member. NIH officials know that signing a COI and confidentiality agreement doesn’t eliminate the many undisclosed personal conflicts and biases that reviewers hold. They just don’t want to do the hard work, make the uncomfortable choices, and have the difficult conversations to ensure that these conflicts and biases don’t influence the review of an application.

      1. I have been reading this group of comments on peer review over the past few days/weeks. Frankly, they are very distressing. (But not exactly new news. I am still actively involved in grant reviews.) This is a reply not just to Chris Merchant, but to all the people who have been making comments.
        I have been involved in peer review for many decades. I can recall when site visits were a staple of peer review! And when study sections always (yes, always) met in person and looked one another in the eye during discussions. And yes, when almost everyone carefully read all the applications. (Given that they were looking one another in the eye, they had no choice!) Even those not assigned to review the grants. And when people did not use laptops, or later when they did, they read the grants and were all (well, almost all) familiar with them.
        The deterioration of standards over time has been both appalling and heartbreaking. Yes, it is now clear that the peer review system is severely broken. Only the most extreme examples have been mentioned in this discussion. There are many more that are less severe. I can’t believe how many reviewers make comments that reveal that they do not know the subject matter well or have not read the grant application carefully. And this is supposed to be peer review ???!!!!
        If we are going to value peer review, we have to give it value. We have to invest in it. No more online review sessions! Reviewers need to look one another in the eye. And to be very familiar with the contents of the grants being reviewed. SROs need to address seriously the kinds of issues raised in the various comments.
        I do reviews for both the Innovator and the Pioneer Awards. Reviewers for these have usually read ALL the grants being considered….and carefully. Their scores reflect this. So do their questions. We need this level of care for ALL NIH grants

        1. Dr. Andreasen, as an eminent physician scientist and a Presidential National Medal of Science recipient, thank you for lending your voice to this matter. Perhaps if honest, seasoned, unselfish, and highly respected individuals like yourself speak out, those charged with stewarding the NIH will listen and right these wrongs.

        2. The reviewing system was not in this shape for the first five years of my 15 years of service, which included program project grants. It was remarkably fair and equitable even for the first half of the time that I served on the study section. The SROs were familiar with each grant and kept the rules intact regarding COI or inappropriate comments such as “This guy has never done anything worthwhile in his life.”

          Based on my experience as a reviewer, the problem entered, I believe, as fewer reviewers knew all of the new data in many of the rapidly growing fields, so that many more errors were introduced into the review. If, for example, two reviewers argue with only Mini-Review-level knowledge of the field, then the outcome is arbitrary and useless. Two classes of reviewers present on a study section can review the same proposal: a) the true experts, who can easily summarize and evaluate the application, and b) the professional experts, who should be able to understand the applicant’s work if it is well written. This produces a mess for scientifically exciting and justified applications from applicants who could teach the reviewer: the panel cannot discern who is expert in the field, and the mass of grants from non-superstars appears to be reviewed with a deficiency in expertise.

          The solution to this problem is a larger panel with more expert reviewers, as a consequence of mandatory rules that applicants MUST review. The usual excuse, which has degraded the choice of reviewers for papers as well, is “I don’t have time”. Well, who does? That is just the deal!

          But I hear many groaning at “a larger study section”. An option is to use the new NSF system (no, not changing the rules every week) of submitting at any point in the year. This would allow a smaller study section of applicants studying related work to be judged fairly, and it would give administrators more fluidity to generate fair study sections at least once a year. That will, of course, reduce the frequency of possible submissions, but the study sections would be of higher quality and more likely to accept your grant if it is judged to rank highly.

    2. Your comments suggest that you know nothing about NIH peer review, as none of these behaviors would be tolerated in a review meeting. I’ve been an SRO for 12 years and I can attest to that.

      Despite that affiliation, I am posting as a private citizen exercising my right of free speech. I am not speaking officially on the part of NIH.

  2. We appreciate the steps you are taking for the integrity of peer review. However, as an applicant and as a reviewer, my impression is not always a healthy one. For one or many reasons (personal, racial, or related to location), applicants suffer the most. Several applications receive serious punishment from the reviewers for minor weaknesses. On the contrary, for multiple reasons, some investigators always benefit from the reviewers even with poor productivity. Thus, it is my request to Drs. Nakamura and Lauer to take the necessary steps so that each applicant can have the opportunity to communicate with additional members or a committee, not attached directly to the peer review system, for justice or assurance. It is my understanding that we are losing good science projects because of the review system. We cannot blame each other, but we can think differently, and we should think differently to protect our scientific community.

  3. Will you be contacting the affected PIs about the delay in review? Many labs are dependent upon funding for survival. This will also alert them to possibly apply elsewhere in the interim.

  4. As a reviewer for many years, I have the following points to be considered by NIH:

    1) There is a recent trend in the study sections for some reviewers to purposely give scores that triage applications, either so that they do not have to discuss them or because they have some non-scientific issue with the applicants (it is now well known that a single reviewer can kill a grant application). Importantly, as their names are not disclosed to the panel members, they take advantage of this. Previously, reviewers were cautious about doing so because their names were disclosed to the panel members.

    2) There is little point in submitting A1 applications if the same reviewers are not reviewing them, as new reviewers usually have totally different suggestions. Therefore, at least some of the reviewers should be the same, and the new reviewers should be instructed to give credit for the responses to the A0 critiques.

    3) There are some members in the study sections who raise their hands to give out-of-range scores. I feel that there should be a limit on out-of-range scores so that this option is not misused. How can a member who has not read the complete application give a very high or low score?

    4) Another interesting trend is that some members of the study section give out-of-range scores without stating any scientific reason. These reviewers just state that they are giving out-of-range scores in order to “give a range”. How can a score be given on the basis of such a vague point?

    5) There should be no bias based on any community. Nowadays a definite bias is being noticed against a certain race. There are both ethical and unethical applicants and reviewers from all races, so it is wrong to penalize a group.

    In summary, I feel that correcting the above problems will give confidence to both applicants and reviewers.

    1. Your point about A1 applications is spot on. It is simply not fair for A1 applications to be reviewed by entirely new reviewers.

      1. I couldn’t agree more. This has been my experience in A1 submissions over and over. Resubmission should be viewed as resubmission and the reviewers should focus on making sure the comments were addressed properly. Moreover, ALL reviewers should be held accountable for assigning bad scores with no noted weaknesses. If there are no weaknesses, they should have no right to lower the scores just to push the application out of the discussable range. Finally, when the comments clearly show that the reviewer didn’t bother to read the proposal, these reviewers should be red-flagged and disqualified from serving on any panels in the future.

        1. Back in the day, we had what I will call the Link-Prince lemma. For an A1 application, a totally new critique could not be raised unless it was a very serious one (enough to substantially change the score by itself), and for an A2 (it was that long ago) it would have to be a deal breaker, enough to send it to NS, by itself. This was established by the chair, but I think it was a prudent policy.

  5. Do the vast majority of individuals take the integrity of peer review seriously on NIH grant applications? From the first comment from Volga, it appears the problem might be much more widespread than this blog post implies.

    1. I’ve served on a number of NIH panels and cannot recall a single instance of impropriety like this. I’m shocked at this person’s experience, so I am wondering if certain fields are more of a club – an unethical club.

      1. There was so much “I scratch your back” like corruption on one SS I was on that the news finally got out and the administrator got “reassigned” to a desk job. Being a government employee, he of course could not be fired.

      2. I agree with you. I have been serving on study section for nearly 40 years and I serve a lot. I have always found the study sections to be fair and the reviewers to bend over backwards to help the applicants. Funding is tight and everyone recognizes that only a few grants can receive money but I feel that we have always strived to rank the best grants at the top. I have found the SROs to be professional, helpful and to intervene when needed to ensure a fair review. These other comments have been a very, very unexpected concern to me. They must be serving on study sections outside my area.

    2. The “horror” stories being reported here are completely foreign to any SS of which I have been a member.

      1. Consider yourself one of the lucky (and uninitiated) few. Sooner or later the belly of the beast will be revealed.

  6. Is it time to make applications anonymous? If reviewers are blinded as to the identity of the applicant, then it will be more difficult to rig the score to favor a particular applicant.

    1. It is hard to make applications completely anonymous. One criterion of the NIH review is Investigator. How can you evaluate someone when you do not know who he/she is?:)

      1. Regarding anonymity:
        The investigator score could be assigned by a different set of individuals who do not read the grant per se. The score can then be provided to the reviewers of the science.

      2. If the qualities of the applicant must be taken into account, the time to do it is after a blind review of the proposal on its scientific merits. But it’s hard to think of good reasons why a decision based on science would be overturned by a consideration of the applicant. There can be little doubt that the introduction of blind review would radically change how research funds are distributed, with less money going to established researchers and more to outsiders. The likelihood of such a change shows that change is needed, but also shows why it is feared.

    2. This is actually not a bad idea in principle. However, implementation would be difficult if not impossible, given that one of the review criteria is “Investigator” and there is an accompanying NIH Biosketch to help gauge the applicant’s productivity. To offer a really radical solution that might have the same intended effect, why not eliminate the anonymity of the reviewers assigned to particular grants?

      It may be somewhat naive, and may not entirely eliminate the problems inherent in the review process, but remove the anonymity and perhaps reviewers would be less inclined to level scientifically unjustified or downright ridiculous criticisms.

      Case in point: I have been writing grants and reviewing grants for nearly 20 years, and, as many of my colleagues know from personal experience, it is common to get criticisms that lead you to question whether the reviewer even read your grant!

      I could go on and on, but you get the point.

      1. I completely agree with you. The comments given by reviewers should be made publicly available. In the current system you can say anything without any consequence. They rate umpires in baseball games, and this is far more serious stuff.

    3. You are absolutely correct. There is no rationale for reviewers to know the applicants. The questions about experience and facilities could be addressed after scoring the science blindly. I have served many years on NIH study sections and have seen how differently applications are reviewed depending on who the PI is and where they are. The same holds true for publications, BTW.
      Everyone talks about transparency but grant and manuscript reviews will never be unbiased unless reviewers only score the science and do so blindly.

  7. I think that it is long overdue for NIH to make applications anonymous. Having been an applicant for many years, I have clearly noticed many irresponsible behaviors from the reviewers: giving no comments under Weaknesses, yet assigning bad scores; writing insignificant or irrelevant comments and giving bad scores to kill the application; and sometimes simply giving bad scores without a reason…. NIH places too much weight on a single reviewer’s comments. I am sure many applicants felt the same dismay when their hard efforts got killed by just one person’s unjust comments.

    I also strongly suggest NIH give applicants a chance, for example a 5-minute conference call, to clarify major issues raised during the review meeting. This would prevent biased intent or misunderstandings on the part of the reviewers. It is really time for NIH to strengthen the review process.

  8. I would like to know the nature of the breach in the integrity of NIH panel reviews. A Google search comes back to this page, so it seems as if Open Mike has made this up.

  9. Another suggestion to improve the review process is to let the applicant rebut the critique BEFORE the study section meets. The rebuttal can be made available to all members of the panel so that they can take that into account while scoring the application.

    Criticisms are sometimes superficial and reflect the reviewers’ lack of familiarity with the subject matter, or just the fact that they rushed through the grant because they were otherwise busy. Having the applicant wait for the next cycle when the reviewer missed a key point is not fair.

    Parmjeet Randhawa, University of Pittsburgh

    1. I think this is a very good idea. Sometimes the reviews contain basic mistakes, where the reviewer failed to notice something. It would be good to straighten out these issues, so the applicant is not penalized unfairly.

  10. In my humble opinion as an applicant and a reviewer for the past 15 years, the peer review system only works under a generous payline. In a highly competitive environment like today’s, this system simply fails. Take athletic competition as an example: do they use a peer review system? Absolutely not. In today’s environment, one failed grant application can doom one’s career, or even one’s job and lifestyle. It is time to completely reform the current review/funding system, either by increasing the payline to the 20th percentile or more, or by adopting a referee system in which the referees are not the peers of the grant applicants and thus have no obvious conflict of interest. Additionally, sound statistics have to be used to ensure that biased scores are removed from the percentile calculation.

  11. Tronovic / Merchant / Ford … yes, yes, and YES! All of that is also my experience. I reiterate a suggestion I have made before: an applicant should have the opportunity to respond to ALL criticisms, especially for “not discussed” applications, because in almost every case the applicant is more expert than the reviewers, and because comments made by reviewers who do not have to present their criticisms to other study section members tend to get sloppy, petty, and contradictory.

    Rasmussen – I don’t see that idea as practical, and past performance is in many ways a good indicator of future performance in science.

  12. Having reviewers be blind to the identity of applicants is not just a good idea, but the foundation of any fair review process, for the same reasons that blinding is essential for fair data analysis. In the absence of such anonymity fair review is impossible, even by reviewers with the best intentions. Why is the importance of avoiding bias, unintentional or otherwise, which is universally acknowledged in science, ignored in the peer review process?

  13. I would like to see reviewers who give an application an outstanding score (1) after a discussion that reveals numerous weaknesses held accountable for their actions by SROs or other CSR personnel. It is sad to see how profound some biases are.

    1. One approach may be to establish an ethics panel at the CSR to which reviewers may report perceived irregularities at their SS. Under existing rules, reviewers have to report to the SRO, who may have incentives to just sweep the issue under the rug and to make sure that a reviewer who rocks the boat is not invited again. The reviewer who reports irregularity does not receive feedback on his/her report, so one never knows if the report has made any impact, which disincentivizes reporting.

  14. I don’t want to know the consequences for the poor slobs whose grants got screwed – we all see that clearly: delays in funds, more pressure by their dean/chair, techs and students that have to be let go, increased teaching load since you can’t get funding….

    What I want to know, and what this article *should* have covered, is what the consequences are for reviewers who pull shady stunts like this. Come on NIH – transparency! What will happen to this reviewer?? Let me guess – whoever caused this “breach in integrity” will get quietly moved to another study section or even promoted to section head. Bet they get to keep their NIH funds, too.

    It’s funny that the NIH is shocked (*shocked!*) at this, when the rest of us all know sketchy reviewer behavior happens all the time.

    So – consequences?? Perhaps if you disclose these openly it will start to dissuade the crummy behavior that forced you to write this nonsurprising article to begin with.

    1. While we will not discuss the specifics of the case, an integrity breach in peer review is handled by referring the matter to the NIH Office of Management Assessment and possibly to the Office of Inspector General, U.S. Department of Health and Human Services. It could result in criminal penalties, fines, imprisonment, and/or other action(s). We recently issued Guide notice NOT-OD-18-115, Maintaining Integrity in NIH Peer Review: Responsibilities and Consequences, which may answer some of your questions.

        1. I am betting my first child that the offenders get away with a slap on the wrist….unless Senator Grassley gets interested in the matter. The NIH and CSR are essentially toothless. The ORI is actually a mess, with tremendous internal strife and very few resources and little manpower to investigate and implement meaningful consequences.

        You can commit fraud on multiple NIH supported publications, the whole world knows about it, and still get away with minimal consequences – a 3-5 year suspension on NIH funding. The Wall Street crooks got a tougher sentence than that!

      2. The US Criminal Justice System is also based on peer review. It, like NIH peer review, is tilted toward the rich, powerful, and connected. But unlike the NIH peer review system, you CANNOT serve on a jury if you are in ANY WAY even remotely connected to the plaintiff or the defendant. Imagine how widespread the lack of integrity in our justice system would be if the jurors socialized and networked with those they are tasked with judging? Think about the tremendous bias such a system would be faced with if the jurors knew that the defendant or plaintiff could some day sit in the jury box passing judgment on them?

        Just ask yourself these questions, and you will understand why the NIH peer review system is so flawed.

      3. Thank you, Mr. Lauer. So, which consequence will it be? Criminal penalties? Fines? Imprisonment? The always-hilarious “other actions” (aka hide your head until the problem goes away)? I agree with the rest of my colleagues posting here – nothing will happen. It will all quietly go away when you “refer the matter” to someone else in the Office of Management Assessment to not do anything with. It will sit in a stack of paperwork on middle management’s desk until the coast is clear. If you look at that Office’s website, by the way, the first purpose they list is to “safeguard agency assets” (aka, look out for numero uno NIH!). Listed second is “preserve public trust in the NIH”. Erm… if NIH scientists don’t trust NIH processes, why should the public?

        1. Spot on. Nothing will happen. Circle the wagons and CYA. If the NIH was serious about integrity it would do a complete overhaul of its peer review system. The majority who have spoken out on this forum have seen the truth. This truth is known to the NIH and CSR. Now the ball is in their court, but don’t hold your breath!

      4. I’m sorry to say it, but this response from Mike Lauer is pathetic. I don’t hold him responsible for any part of this enormous mess that has been well outlined in the various comments. But he puts out a vague story about this “revelation” of misconduct and then takes cover behind a bunch of government committees or offices that will supposedly deal with this and put everything right. When is NIH going to accept that they have a system that needs to be overhauled and substantially changed? No more band-aids or platitudes, please!

  15. I appreciate efforts to enforce integrity in the peer review process. I teach Responsible Conduct of Research in my Research Methods and Seminar course. It is clear we have failed in our mission to maximize the production of relevant knowledge for the betterment of society (social welfare) because the process is tainted. The incentives for reviewers to weigh the social welfare goals above their own are non-existent. Taking the time to do it right has opportunity costs, and the incentive to promote projects that support their own research agenda is strong. I applied for a grant that set as a restriction that only projects using a particular measurement tool would be funded. It is a survey-based self-reporting tool, so our project using actual physician diagnoses from charts was not funded. NIH needs to be careful about mixing learning objectives with policies/approaches. It is another form of bias that restricts efficient growth.

  16. To short-circuit the problem, I would suggest that any proposal scoring better than some semi-arbitrary cutoff (say, 20th percentile) be put into a bin and that funding should be on the basis of a lottery — get picked, you get funded. Perhaps then the onus of not getting funded would be altered, some of the useless “must fund” grants would not get funded and the money would be more evenly and appropriately spread around.
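
    A minimal sketch of the lottery idea above, assuming percentile scores have already been computed; the 20th-percentile cutoff, the number of funded slots, and the field names are illustrative assumptions, not NIH policy:

      import random

      def lottery_fund(proposals, cutoff_percentile=20, slots=10, seed=None):
          """Fund proposals by lottery among those scoring at or better than a cutoff.

          `proposals` is a list of dicts with 'id' and 'percentile' keys,
          where a lower percentile means a better score.
          """
          rng = random.Random(seed)
          eligible = [p for p in proposals if p["percentile"] <= cutoff_percentile]
          rng.shuffle(eligible)  # every eligible proposal gets an equal chance
          return eligible[:slots]

      # illustrative usage with made-up applications
      apps = [{"id": f"R01-{i:03d}", "percentile": i} for i in range(1, 41)]
      print([p["id"] for p in lottery_fund(apps, slots=5, seed=42)])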

  17. It is virtually impossible to demonstrate that your proposal has not been treated fairly. I have heard many times “I don’t understand the discrepancy between the review comments (good) and the (bad) score” from the program officer. Preliminary scores are purged from the record and only post discussion scores are recorded. If you think that a certain reviewer has been dinging you over many years, there is no way to prove that; their reviews and scores are not archived from past years. There are some reviewers who have been on the same panel for 20 years. Panel chairs confer with SROs to assign reviewers…if the panel chair has a problem with you, you are toast. There is no recourse…because of course, the discussions and all materials are confidential and other reviewers who might have been there can’t say anything, although from their body language you can tell that they think you got screwed.

  18. NIH finally did something right. The bad grant reviewers should be punished and excluded from the system. The names of grant reviewers and applicants who violate NIH regulations should be made public. Grant proposals should never be judged based on networks, relationships, or friendships. SCIENCE should be the only criterion. It is sad that many department chairs are emphasizing and encouraging networking. Some reviewers can just write something down to kill any proposal; the very same reviewers can give a pass to any proposal they want. This should stop.

    1. Well put. “Networking” is a form of legalized corruption, and should be outlawed. At the very least, NIH reviewers who “network” should be forced to reveal who they have “networked” with, just like lobbyists have to now declare who they lobby on Capitol Hill. It is sad that brilliant minds with brilliant ideas, who may not have “networking skills”, or resources to network, are being denied funding because they do not engage in this form of corruption.

    2. “Some reviewers can just write down some thing to kill any proposals”…it is very simple. I’ve had at least 2 killed by getting a 5 for significance from the primary reviewer…not because I did not explain it well or because it was truly trivial…but the reviewer just did not think it was a priority. Usually that 5 from one reviewer is enough to get your proposal triaged. Even if it does get to be discussed…because (in my experience on panels) the reviewers are told by the SRO at the end of a discussion “please revise your scores to reflect the discussion” the other panel members, who did not read the proposal but listened to the discussion, then do the trained seal thing and put down the primary reviewer’s final score. The other two reviewers usually go along with the primary reviewer…as a reviewer you pick and choose your battles carefully. “Oh, I guess I can come up on my score.”

      I wonder if everyone on a panel should read all proposals and provide a preliminary score for each. I do this for abstracts for meetings…I am not an expert on all the research questions but I can rank them from best to worst…good science stands out as does mediocre science. Those reviewers with the median, Q75, and Q25 scores are asked to provide a written critique to serve as the basis for a discussion (see the sketch after this comment). All panel members then revise their scores to reflect their response to these critiques if the application is discussed. But…the preliminary scores are not erased. That way a program officer can better advise on how to fix an application.

      I know it is a lot of work to serve on a panel and no one in their right mind would want to read 70 applications in their entirety. Perhaps assign a third of them to be read and ranked by each reviewer. Or just rank them based on the abstract/summary or specific aims pages.
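
      A rough sketch of the quartile-based idea in the comment above, assuming every panelist has submitted a preliminary score on the NIH 1-9 scale (lower is better); the reviewer names and scores are made up:

        import statistics

        def pick_critique_writers(prelim_scores):
            """Return the reviewers whose preliminary scores fall closest to the
            25th, 50th, and 75th percentiles; they write the full critiques."""
            reviewers = list(prelim_scores)
            q25, median, q75 = statistics.quantiles(prelim_scores.values(), n=4)
            writers = {}
            for label, target in (("Q25", q25), ("median", median), ("Q75", q75)):
                # pick the reviewer whose score is nearest each target (ties broken arbitrarily)
                writers[label] = min(reviewers, key=lambda r: abs(prelim_scores[r] - target))
            return writers

        prelim = {"Rev A": 2, "Rev B": 3, "Rev C": 5, "Rev D": 4, "Rev E": 7}
        print(pick_critique_writers(prelim))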

  19. There are repeated suggestions to make proposals anonymous or, conversely, to end the anonymity of reviewers. Well-intended though they may be, they are just not going to work.
    1. The anonymity of reviewers is a long-established principle which is there for a reason. Most people would never openly criticize another investigator if there is a chance that the roles might be reversed in the future. Or they would flatly refuse to review. In either scenario, this would lead to more, not less, distortion in the review outcomes.
    2. Even if there is no “Investigator” review item (as is also sometimes suggested), it is still not realistic to assure proposal anonymity. Every reviewer who knows his field can readily figure out where the proposal (or the manuscript) most probably comes from. We are just fooling ourselves pretending that we are doing blind reviews in some journals. It is especially so given that “Preliminary Data” are not only present, but are a crucial part of the grant proposal.
    3. Besides, it doesn’t seem wise to totally exclude “Investigator” or “Environment” items from consideration. It is not very difficult to propose “pie in the sky” and to assemble a perfect research proposal if there is no need to justify that one has enough expertise/resources/collaborations to actually do the research. Even now, I can see this happening often enough.

    I am not proposing any remedies here; I am just pointing out that finding working solutions may not be as straightforward as it seems to so many.

    1. 1. Can it really be true that the medical research world is so riddled with incompetence that only a small fraction of researchers are capable of doing the work they propose? That is the implication here, for if a researcher can do the work, and the work is judged sound and valuable on scientific grounds, what further reason is there to consider the identity of the researcher? If incompetence is this common, then the legitimacy of the whole medical research enterprise falls into question.

      2. It simply is not possible for any reviewer to know all of the thousands of people whose applications they may be called on to consider, much less know them well enough to render fair judgements of them. My own field is tiny, maybe 100 people worldwide. I like some of their work and dislike some, but there is not one of them whose application I would reject based on anything I know about them or their work. What a reviewer can and does know is the eminent people in their field, which is a very different thing. In practice what taking the applicant into account during review amounts to is that people who are established in their field will be favored over newcomers, for how can you possibly render a fair judgment of the abilities of someone who you do not know?

      3. Granted that it is sometimes possible to deduce the identity of applicants from their applications, it does not follow that this is always possible, or that we should not attempt to make the review process as blind as possible. What harm would come from simply stripping all applications of obviously identifying information? This presumably would make the review process at least partly blind, and if it didn’t, what would change for the worse? Again, why is something as integral to proper study design as blind analysis not a part of the review process? Or, to put it another way, would any NIH reviewers accept the NIH review process were it presented to them in a proposal?

      All hierarchies pretend to be meritocracies, but true meritocracies are so rare that it is difficult to cite historical examples of them. Maybe artists and scientists in the court of Lorenzo the Magnificent and the first few years of the Légion d’honneur were true meritocracies, but the vast majority of hierarchies, including scientific hierarchies, merely use the trappings and language of merit to advance the real goal of self-perpetuation. The current NIH system works perfectly well from the point of view of people inside it. Money is allocated and divvied up. Established researchers and programs have stable funding, and there are just enough scientific advances to soothe away any concerns that advancing science is not the real goal. But it isn’t. This thread demonstrates that.

      1. Unfortunately, everything is not that simple. It is rather commonplace for young researchers to propose overly ambitious studies that realistically require a much larger budget and/or team of investigators. Biomedical research these days is so diverse and complicated that oftentimes only the largest labs possess the resources and expertise to conduct research independently. Yet outside pressure and inexperience prompt young investigators without access to appropriate infrastructure to propose all kinds of “omics” approaches, sometimes in a single application.

  20. There is only one victim of a flawed peer-review grant application system: the applicant. However, improving (enhancing) the peer-review system must include actions taken not only by applicants, but also by Program Officials, SROs, and CSR.

    Applicants need to take the time to carefully formulate their concerns about reviewers’ comments when they judge them as scientifically incorrect and/or unjustifiably dismissive, and communicate them to the Program Official. Although this is an unpleasant extra burden, it is the first step in the identification of possible bias or unfairness in the review process.

    Program Officials should be required to take the applicants’ concerns to higher levels as soon as there appears to be the slight possibility of an unfair or unqualified review. The current culture of discouraging applicants from an appeal (I was told that it is “not good for anyone”) and diverting applicants to a “grievance” (which, in my experience was never followed up on) appears to mainly protect the reviewer, not the applicant.

    The current practice of giving more weight to Reviewers 1 and 2 than to Reviewer 3 should be eliminated, i.e., all reviews should count equally. The SROs should identify “to-be-triaged” applications whose priority scores vary substantially (e.g., between 1 or 2 and 5); a simple check along these lines is sketched after this comment. Such applications are likely worthy of a discussion and, if such discrepancies persist among reviewers, the SRO should consider reporting them to CSR.

    Reviewers’ comments should reflect on the reviewers’ reputation. Being a Reviewer should be an honor, i.e., the outcome of careful consideration based on experience and recommendation, and not simply based on availability. If a review of an application is indeed found to be flawed or biased, the reviewer should be identified and immediately removed from the Review Panel. Applicants can and should help in this process!
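
    The score-spread check suggested above could look something like the following; the 3-point spread on the 1-9 scale used as the trigger is an illustrative assumption, not an NIH rule:

      def flag_discordant(applications, spread_threshold=3):
          """Return IDs of triaged applications whose preliminary scores disagree
          enough (max - min >= threshold on the 1-9 scale) to warrant discussion."""
          flagged = []
          for app_id, info in applications.items():
              scores = info["scores"]
              if info["triaged"] and max(scores) - min(scores) >= spread_threshold:
                  flagged.append(app_id)
          return flagged

      apps = {
          "A": {"scores": [2, 2, 5], "triaged": True},   # reviewers disagree: flag for discussion
          "B": {"scores": [5, 5, 6], "triaged": True},   # consistent scores: leave triaged
          "C": {"scores": [1, 2, 2], "triaged": False},  # already discussed
      }
      print(flag_discordant(apps))  # ['A']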

    1. Hi Claudia,
      The conscientious SRO who does that must be the most miserable person in the whole wide world!

      I blame the system more than any individual or group. Research as the pure pursuit of unadulterated truth used to be the preserve of eccentric extremists who were either inspired sages, destitute soldiers with a mission, bored wealthy dilettantes or socially disconnected schizophrenics crazily obsessed with a fuzzy, infinite dream.
      When ordinary humans with pride, greed, lust, envy, gluttony, wrath and sloth use the halo of truth seekers to find a capon lined sanctuary supported by the state, they promise the nirvana of perfect health and eternal youth to reserve an ever-expanding claim on federal support.
      Devoid of the commitment to the self-flagellation involving continuous effort to achieve experimental perfection only to find that the passion was mired in misplaced imagination, “the leaders” seek a large bevy of youth to bear the labor and frustration while they bask in the glory of the rare gems of discovery. Since this system hijacks the youth and cloisters naïve idealistic students into the confines of a laboratory, they become incapable of worldly pursuits and are forever locked in the bondage of an ephemeral tax charity-dependent existence in the same prison of dreams. Some lucky or well boosted blokes join the ranks of the funded kings of this enterprise, seeking to trap and misguide more innocent recruits in the name of training without defining the ends.
      The universities have beaten the churches, mosques and temples in this modern crusade and attempt to build a similar empire around half truths that become the new gods. The result of this massive expansion is a tremendous amount of corruption at every institution from the mega universities with their hidden billions and greedy drive for federal funds to grow the inner circle of administrators, to publishing houses that create false matrices that have no meaningful outcomes, to funding agencies that distribute the life-giving nourishment to the system.
      While the kings lock horns, the hyenas and other pack hunters plot their battles around new formations to raise dust around themselves and loudly announce the attainment of finish lines by the members of their alliance and create a formidable wall of half-baked conclusions to feed the study sections. Even blind reviews cannot correct the support of this noise. Great discoveries are now accidental flares that burn down well-rooted trees of knowledge. The outcome for society is that fake findings sequester real resources of wealth and spirit, preventing wonderful new technologies from igniting the explosive brave new world.
      I therefore believe that misconduct of this sort goes way beyond affecting the unfortunate, unfunded individuals. The only solution is to force universities to be self-supporting with organic growth driven by kindling the spirit of discovery and its application towards social advancement. Without universities learning to capitalize on research and with their legendary oratory providing broad access to the ears of both people and their representatives, their coffers get filled with tax dollars letting them coast forward, with only a tiny trickle driving meaningful advances. Meanwhile even these paltry advances are not well capitalized, and few truly reach and enrich society as volumes of literature, even in high impact journals, simply perish into dust.

  21. Anonymity of the applicants will not work because it is necessary for the PIs to refer to their own work in order to strengthen the scientific premise and feasibility of the proposal. Also, the chances of a successful proposal increase with the experience of the PI, his or her collaborators, and the research environment. Therefore, it would be impossible to evaluate a proposal (like many manuscripts submitted to journals, which cite the authors’ previous publications) if this critical information were hidden from the reviewers.

    1. Is it possible to let the applicants decide, according to the benefits they perceive and the feasibility of doing so, whether to go anonymous, while NIH makes efforts to ensure that anonymous proposals are treated fairly?

    2. Absolutely not true. If we were to review a new project with no preliminary studies and only published literature to support the premise, we could determine whether a project is worth doing with a blind review, triaging the poorly framed grants and selecting only the best and most meaningful advances for full review. It would be liberating not to be dazzled by unpublished anecdotal preliminary data while reviewing the goals and approaches. The remaining review would have already gone through a blind prescreen without investigator-related bias. However, as mentioned earlier, it would still not overcome the gossip factor, with peer reviewers being naturally biased by their own research interests and the noisiest current bias in the field. Adding a community reviewer may help, but that will only be useful for clinical problems with a lot of community engagement. Basic breakthroughs like the Brainbow mice, CRISPR-Cas, optogenetics, etc. need peer reviewers to generate excitement.

  22. Science is only a portion of the remit of the NIH. The senior leadership know how to solve the problems and choose to side with the status quo, but they have added some boxes to check so that there is cover for the critics. One solution is a congressional act to force the NIH to focus on what is best for scientific advancement (rather than the biology workforce, for example…); the NIH is actually quite responsive to Congress (which holds the purse strings that the NIH cares most about). The other solution is to get rid of the leadership and replace them with C. Everett Koop-like leadership that will do what’s right regardless of the toes that get stepped on in the establishment. All else is wasting your breath… sadly.

  23. Comments from a non-expert:
    1. Something like this blog is a very good start. Make a site like this on NIH website.
    2. Keep reviewer names anonymous during the review; if that is not easy, at least publish them after the review is done.
    3. Give applicants the option of anonymity when possible. In some cases, separating science and PI/environment reviews might not be that hard.
    4. For unethical reviewer conduct, be friendly to them, but remind them promptly even about what are considered minor issues. Reviewers are educated people, and the problems might be easy to solve in most cases. By the way, in most cases the reviewers do not benefit much themselves, so treating the issue broadly but in a friendly way might work well.
    5. Increase NIH staff numbers to shorten the time spent handling proposals and give reviewers more time to read them. I assume that having a single reviewer read more proposals is beneficial to fairness; otherwise, assign fewer proposals to each reviewer.
    6. If reviewers are given more time to review, do not let them wait until the last minute to submit all reviews. Instead, for example: in the first 3 weeks, 3 proposals; in the second 3 weeks, 3 proposals; and in the third 3 weeks, 3 proposals. Then the panel review.

    I think that reviewers/applicants are an educated group of people. As long as enough attention is paid, the problems can be solved fairly quickly. Maybe too naive?

    1. Yes, it is a bit naive, but it sounds very good to me.

      Although reviewers are all educated people, they are also struggling to get funded given this terrible funding situation. So, unofficially, they want to gain something else through friendship and other connections. Senior people told me that when they were new investigators, the funding rate was about 50%, so everybody was generous. But now, with a 7% payline? Forget it. They have double-digit numbers of lab members to support.

  24. As a new investigator, I am scared to death when I read the comment from a veteran NIH grant reviewer (the first one after the NIH blog). What this reviewer honestly reveals might be something everybody knows but the NIH is not willing to admit. I have no lineage; I have never been in a big lab. I completed my master’s and PhD degrees in a foreign country. I have been scared to death in the past years when I heard similar things from colleagues. Many of my colleagues said that I failed in NIH grant applications not because of the science (they agree that I have done great science). I have no connection to any study section. I have always focused on science. I have been scared to death every time I read the reviewers’ comments on my proposals. I feel that they can kill any grant proposal. They can find something to criticize even in a Nobel-prize–winning article. I am scared to death when I hear that someone is in the circle and that this is the reason he/she has a lot of grants. It is very, very sad to fail even though you are doing great science.

  25. My experience with the integrity of panels has been varied. I’ve served on (and chaired) two IRGs where I saw the highest possible integrity, and in one case where an ad hoc reviewer strayed into an ad hominem attack on an applicant, the members of the committee quickly informed him that it was inappropriate behavior. In contrast, I served several times as an ad hoc member of a review committee in which bad behavior was out in the open, and the SRO took no action: I was contacted by the other reviewers of applications to lobby me to give a score that was similar to theirs, and I witnessed a reviewer and the chair of the committee trading snide comments about the applicant during the discussion of the application. In addition, I witnessed reviewers with similar interests huddling at the beginning of the meeting, agreeing in advance on which applications in their specific field would get the best scores. In my view, many of these things are (and should be) monitored and controlled by the SRO. While I do believe that investigators in certain fields act with more integrity than those in other fields (from my experience on multiple committees), the SRO has absolute responsibility to ensure the integrity of the process, especially when the chair of the committee does not. There should be more information provided to reviewers on how to report their concerns (now, the instruction is to report concerns to the SRO, but if the SRO is ineffective or part of the problem, there needs to be a clear alternative identified).

  26. To all the people who have never experienced such bad and impermissible events, here are some of my experiences.

    1. I witnessed a PI treat the friendly reviewers to a nice dinner right after getting a fundable score. Although they are not connected through any co-authored papers, they are in the same field and have been helping each other for decades.

    2. A reviewer of an A0 application suggested adding X and removing Y for his/her own reasons.
    In the A1, a new reviewer asked to change that back to the original form.
    In a new A0 application, the first reviewer came back and got upset because there was still Y instead of X, as he had suggested. Seriously!!??

    3. A reviewer just gave the worst scores to my application without a single reason. I asked the PO, who had no idea why he did it, and it turned out nobody spoke up in the SS. I complained about it, and the SRO contacted the reviewer, who said he was too busy to give me any comments. Why does NIH select such irresponsible people as reviewers????

    4. My previous mentor always got a fundable score when the SS was filled with his friends. When it was not, his application got triaged. Such a biased community. It is not about science; it is about friendship and politics.

    5. At a conference, I happened to ask a postdoc from a big lab a tough question. I was told later that the PI got upset, and it turned out he later gave me the worst score in an SS. So now I never ask a big lab any tough questions. Instead, I always praise their work to get money from NIH.

    What a stupid science world.

  27. My biased perspective comes from being a mid-career investigator who has submitted about two hundred NIH proposals and sat on about a hundred NIH panels. Frankly, some comments reflect limited personal experience. Yes, NIH peer review is far from pristine, but it is also not hopelessly corrupt. If anything, my impression is that panels are less biased now than they were in the ’80s or ’90s, in the glory days of automatic grant renewals. In turn, better oversight, a different process, blinding, jail time, etc., might suppress abject cheating but cannot overcome bias or misunderstanding. A reasonable conclusion is that tight funding lines are the real problem. Frankly, addressing success rates does not require an increase in the NIH budget. As of today, we can choose to support twice as many investigators with modular R grants. Other changes might help level the playing field, such as making it progressively harder to get a third, fourth, or fifth grant. My concern is that focusing on the review process may distract us with efforts to rearrange the deck chairs and delay addressing the water rushing into the hull.

    1. Totally agree, but the level of noise around the fine expertise required on poorly written (confusing) grants, which get over-supported for some and thrashed for others due to friendly/neutral reviews, leads to massive overfunding of certain labs. If the actual output of these labs, in terms of the number of new ideas developed, new companies started, textbook information generated, and other measures of honest progress, is evaluated in NIH RePORTER, it does not justify >$1 million of direct-cost funding per year. So it is important to target the areas of bias, which include outlandish preliminary data, randomly linking functions to genes, etc. Having too many review criteria gives reviewers an easy outlet of focusing on the investigator.

  28. The only way to avoid investigator bias (being star-struck, helping friends, hurting competition, etc.) is to review grants blindly by evaluating the premise (no unpublished preliminary data), significance, feasibility, research advance over current knowledge (innovation), regulations (animal, human, safety), and approach. The top 20-40% (depending on the funding cutoff) of useful projects may then be selected and reviewed for investigator, institution, etc. Neither applicant nor reviewer should know each other’s identity until the science reaches a critical threshold. This will also save reviewer time otherwise spent critically evaluating the investigator and institution.

  29. I have sat on an NIH study section for 2 years. I have submitted multiple R01s and have been the recipient of a few. My experience on the study section is that reviewers have been diligent and thorough in their reviews. The process seems eminently fair to me. The best ideas do manage to get funded by the process. The SRO of my section has repeatedly admonished us regarding confidentiality. In my two years on the section I have observed no shenanigans. Everyone has been highly professional and highly respectful of the responsibility and the process.

    1. Will that explain how many senior PIs have 3-5 R01s without much productivity for decades (2-3 papers a year with 8-10 postdocs, 2-3 grad students, 5-6 technicians, and 6 R01s… anybody else find this ridiculous)? I personally know a couple who don’t even know what individual members in the lab are working on and have no handle on the integrity of the data that get published in their name (in journals where they ‘serve’ on the editorial boards). And yet they game the study sections (from the horse’s mouth after a couple of glasses of whiskey) for themselves and get 3-6 R01s. I wonder what they put on their progress reports and how NIH is OK with that. The rot runs deep. Having said that, there are some recent changes that might be a temporary respite until these thugs figure out another way to game the system. The key is the resilience or complacency of SROs and POs in this process. The individual I am talking about used to host the SRO at his house around the times he would submit a proposal (always funded by the same study section). Go figure…

    2. Sometimes I feel that folks who don’t see any problem here are actually part of the problem. Given their qualifications and experience, it is unlikely that they are naive or stupid.

  30. I was a K08 awardee and have made multiple attempts to secure an R01; all those attempts have been unsuccessful. Though I am a clinician, I approached research with my heart, i.e., with passion, rigor, and a lot of self-criticism. It is obvious what is wrong with the review process and awarding system. It was broken 10-15 years ago. Now it’s in decay. I was repeatedly told that if you don’t know anybody on the study section, you are unlikely to get funded. I didn’t believe it then, but I believe it now. After multiple attempts and seeing similar examples, I have come to realize that the system in its current state can only breed mediocrity. And that is obvious from the science I see coming from the labs of reviewers. My concern with this is not only as an investigator but also as a citizen. I see how our tax dollars are being wasted to breed mediocrity and boost academic careers and egos. The reviewers’ approach has been to lower the bar of science rather than raising it. I think NIH needs to adopt the following rules:
    Mask the candidate’s identity from the reviewers and let them evaluate only the science.
    NIH can itself decide upon the environment, PI and the merit of collaborators to determine the feasibility of project.
    Declassify the reviewers (make it transparent who is reviewing).
    Have different pools of the funds for MD and PhD investigators.

    1. So true, Sandeep. The NIH has failed in its scientific and medical missions. Physician scientists like yourself who really make an impact on people’s lives have been thrown to the wolves. It has succeeded in building and perpetuating the careers of the powerful and well connected, many of whom are not in this to help patients (which is the ultimate goal of the NIH). In fact, I am betting most MDs who get NIH funding have little interest in seeing and treating patients. Many well-meaning physicians like yourself are unfortunately dropping out.

  31. I never got my R01 funded even after authoring 100 papers and 11 years of doing science.
    NIH is a RIGGED system; EVERYONE associated with the grant review process knows that… but who will bell this cat…
    One thing is for sure: if you believe that you are doing GOOD science and the NIH is not funding your grant, it is THEIR LOSS and not YOURS.

    But unfortunately, the TIME is running out for many of us. Such a WASTE of TIME….

  32. Maybe it is time to scrap the permanent-member system; it would then be easy to rotate out reviewers who are not diligent or not honest. A few people accumulating power leads to more bias??

  33. Billions more dollars can be made available to fund research, and R&D from research discoveries, if the NIH caps the extravagant 60%+ university overhead at 20%. In 2017, NIH spent 83% of its $27.7 billion budget on extramural research. Capping the overhead at 20% would have freed up an additional $6 billion. If one averages an application at $600k, this means that an additional 10,000 applications could have been funded. Universities are supposed to be centers of innovation. It is about time they show they can innovate in management as well. Some universities have up to 1 administrator per 3 employees. This is mismanagement at its highest level. They need to find other ways to pay for their outrageously expensive management than eating up precious research and innovation funding.
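
    For reference, the back-of-the-envelope arithmetic in the comment above works out roughly as claimed; the 83% extramural share, the 60% and 20% indirect-cost rates, and the $600k average award are the commenter's own assumptions, not audited figures:

      # All inputs below are the commenter's assumptions, rounded.
      budget_total = 27.7e9               # FY2017 NIH budget (approx.)
      extramural = 0.83 * budget_total    # share spent on extramural research

      # Indirect costs are charged as a percentage of direct costs, so at a 60%
      # rate roughly 0.6/1.6 of each extramural dollar is overhead.
      direct = extramural / 1.60          # direct-cost portion at a 60% rate
      savings = direct * (0.60 - 0.20)    # overhead freed by capping the rate at 20%

      avg_award = 600e3
      print(f"freed up:     ${savings / 1e9:.1f}B")        # ~$5.7B, close to the $6B claimed
      print(f"extra awards: {savings / avg_award:,.0f}")   # ~9,600, close to the 10,000 claimed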
