This blog has been co-authored with Noni Byrnes, Director, Center for Scientific Review, NIH. Originally posted on the Review Matters blog.
As discussed in the blog post “Update on Simplifying Review Criteria: A Request for Information (RFI),” NIH issued an RFI, open from December 8, 2022, through March 10, 2023, seeking feedback on its proposed plan to revise and simplify the framework for the first level of peer review of research project grant (RPG) applications.
NIH received more than 800 responses to the RFI: 780 from individuals, 30 from scientific societies, and 30 from academic institutions. The vast majority were supportive of the proposed changes, although a minority were in favor of Factor 3 (Investigator, Environment) being scored, and a smaller minority advocated for a blinded or partially blinded review process. Most of the respondents highlighted the need for strong training resources for reviewers, study section chairs, and scientific review officers.
One question that often arises is how investigator and institution will be weighted in arriving at the Overall Impact score if they themselves are not individually scored. Since 2009, when scoring was added to the review criteria, reviewers have been free to weight these as they see fit in the Overall Impact score. This score has never been an average of criterion scores, and that will be no different under the simplified framework.
Although fully blinded review may be conceptually favorable, NIH is required by statute to assess investigator and environment. Thus, at best, only a multi-staged, partial blinding process would be possible. However, as the Nakamura et al. publication showed (eLife 10:e71368, 2021), anonymization of research proposals is difficult to achieve for reviewers familiar with a given field, with about 20% of reviewers correctly identifying the principal investigator despite extensive redaction. In addition, while NIH is conducting a partially blinded, three-stage review process for its Transformative Research Awards, a program that receives fewer than 200 applications per year, attempting to scale that process up to the more than 80,000 applications NIH receives is not feasible. Piloting the changes would require designing a multi-year study, since NIH cannot “carve out” a subset of applications submitted to the agency for potential funding and review them using a different set of criteria.
A trans-NIH committee has been established to implement the changes for simplifying review criteria. This committee is developing a timeline as well as designing the rollout and associated trainings. The evaluation of these changes, whose effects would become evident only over several years, will include surveys and data analysis. With the simplified review criteria framework, we hope to see a broader range of institution types across the scoring ranges and an increase in the diversity of the pool of R01 applicants, as well as broader representation across career stages and PI funding levels (meaning applicants with no grants or only one other grant).
If we do see improvements, however, it will be important to place them in the context of all the actions that NIH’s Center for Scientific Review (CSR) is taking to improve peer review, which also include diversifying our review committees, deploying trainings on bias awareness and mitigation and on review integrity, and establishing a direct channel for the extramural community to report instances of bias in peer review. These actions are, of course, in conjunction with NIH’s overall efforts to break down structural barriers and advance equity in all aspects of NIH’s activities, particularly through its UNITE initiative, which recently reached its second anniversary.
We thank all who took the time to work with us in this effort to simplify the review criteria framework for RPGs and provide feedback through the RFI and in other ways. We also thank those involved in the other aspects of improving peer review at NIH, which is an ongoing process as more data are generated and analyzed, new questions are asked, and fresh insights are established and shared. The engagement of our community partners is critical to the success of this continued endeavor. We believe these current changes will go a long way in helping us to better identify the science with the greatest potential impact.
When will the actual transition to the simplified criteria start?
No earlier than May 2025 councils.
Dear Drs. Byrnes and Lauer:
I have raised my concerns about the broken review process before by writing directly to the corresponding NIH directors, which of course never led to anything. My critique and concerns are based on my experience not only as an applicant, but also as a reviewer who has served honorably on several NIH and other Study Sections. Unfortunately, my collective experience leads me to believe that the review system is indeed broken. There is simply too much bias and favoritism. The dream of having an unbiased and honest review is gone.

First, reviewers forget that they are there to assess whether the experiments proposed in a particular application will advance our knowledge of the field; instead, they resort to hasty nitpicking and mundane critique that, in the end, has nothing to do with the overall significance of the proposal or the proven experience of the applicant. I have written many NIH applications in the past 35 years but have never complained about any of my grants before, even when they did not do well, for I always believed they were reviewed by experts in the field who were honest and without any perceived bias. The last 5 years, however, have been horrible to say the least, and the reviews are getting worse each round in their biased tone and superficial critique.

Furthermore, my experience from serving on various Study Sections leads me to believe that, although an application is assigned to three reviewers (some of whom unfortunately do not even have expertise in the proposed research area), the tertiary reviewer almost never reads the application and instead parrots the harshest critiques of the primary and secondary reviewers to give the impression that he or she has read it and agrees with them. What I find most frustrating is that reviewers in general are under the impression that they must find even a simple mistake in an application and then magnify it to feel that they have done a great critique! Therefore, the review system, which is presently broken, must be mended if we are to reward truly outstanding and novel ideas. Here are some suggestions:
(1) Two reviewers, NOT three, with proven expertise in the area of the proposal should be assigned to each application.
(2) “Resource Sharing Plans” should be eliminated. They do not work: people take advantage of you but do not reciprocate. For example, a reviewer who was part of the Study Section in which our application was recently reviewed had the opportunity to read our application, got some ideas, and then had the temerity to ask us to send him precious reagents under the NIH “share resources” mandate. As an NIH-funded laboratory, we felt obliged to honor the request and sent him the reagents. In the next round, he applied and was funded, while ours was not! Sadly, when we ask other NIH-funded investigators for reagents, we don’t even hear back from them, much less receive any reagents. Although the resource sharing mechanism should have been a great way to help each other address very important biological problems, most investigators are disingenuous and do not collaborate.