Does It Matter Where Your Grant Application Is Reviewed?

There are a lot of urban myths out there about NIH grant review. Here is a common one—your chance of getting funded is lower if your application goes to the Center for Scientific Review (CSR) for review rather than to another NIH institute or center (IC). Well, I have the data and took a look.

The first thing to point out is that the split between CSR reviews and IC reviews is roughly 80/20: in fiscal year 2010, ICs managed the reviews of about 17% of all applications. The major difference between CSR and IC reviews is the type of application each reviews. While the separation is not absolute, CSR manages the review of most R01, fellowship, and small business applications, while ICs manage the review of most program project, training grant, and career development award applications. ICs do review some R01 applications, typically those with IC-specific features, as well as those submitted in response to specific requests for applications (RFAs). Tip: check the funding opportunity announcement to find out where your application will be reviewed; it is usually stated there. You can also learn where funded grants were reviewed by looking them up in NIH RePORTER, which lists the study section that reviewed each award.
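
For anyone who wants to do that lookup programmatically, the sketch below queries the NIH RePORTER web API for a funded award and prints its study section. The endpoint and general request shape follow RePORTER's public v2 API, but the specific field names ("project_nums", "FullStudySection") and the award number are assumptions used for illustration; verify them against the current API documentation before relying on them.

```python
import requests

# Minimal sketch: look up the review study section for a funded award via the
# NIH RePORTER API (v2). Field names below are assumptions; check the API docs.
payload = {
    "criteria": {"project_nums": ["5R01GM123456-03"]},  # hypothetical award number
    "include_fields": ["ProjectNum", "ProjectTitle", "FullStudySection"],
    "offset": 0,
    "limit": 10,
}

resp = requests.post("https://api.reporter.nih.gov/v2/projects/search", json=payload)
resp.raise_for_status()
for project in resp.json().get("results", []):
    print(project.get("project_num"), project.get("full_study_section"))
```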

So, back to the question, “Does your application have a different chance of success if it is reviewed in CSR or in an IC?” Given the very different mix of applications reviewed, it is perhaps not surprising that the answer is yes, on average. In fiscal year 2010, 17% of the applications reviewed in CSR were awarded, compared with 25% of those reviewed in the ICs.

All Applications

Locus of Review | Applications | Awards | % Applications Awarded
CSR             | 48,642       | 8,111  | 17%
ICs             | 9,737        | 2,411  | 25%

But that is really comparing apples to oranges, and it doesn’t tell the whole story. A closer look shows essentially no difference in your likelihood of getting funded when you compare the same types of applications. Take R01 applications: in fiscal year 2010, 18% of R01 applications reviewed in the ICs were awarded, as were 19% of those reviewed in CSR.

R01 Applications

Locus of Review | Applications | Awards | % Applications Awarded
CSR             | 27,608       | 5,197  | 19%
ICs             | 2,000        | 368    | 18%

Another example: applications submitted in response to RFAs in that year were funded at almost identical rates whether reviewed in CSR (24%) or in the ICs (22%).

RFA Applications

Locus of Review | Applications | Awards | % Applications Awarded
CSR             | 211          | 51     | 24%
ICs             | 4,327        | 948    | 22%
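
The award rates in the three tables above follow directly from the counts (awards divided by applications). As a quick check, here is a minimal Python sketch that recomputes the percentages from the figures quoted:

```python
# Recompute the award rates quoted above: rate = awards / applications.
tables = {
    "All applications": {"CSR": (48_642, 8_111), "ICs": (9_737, 2_411)},
    "R01 applications": {"CSR": (27_608, 5_197), "ICs": (2_000, 368)},
    "RFA applications": {"CSR": (211, 51), "ICs": (4_327, 948)},
}

for name, rows in tables.items():
    print(name)
    for locus, (applications, awards) in rows.items():
        rate = 100 * awards / applications
        print(f"  {locus}: {awards:,} / {applications:,} = {rate:.1f}%")
```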

Since all our reviews comply with the same set of applicable laws, regulations, and policies, I’m glad to see the outcomes are similar, once we compare apples to apples.

18 Comments

    1. In part, the answer to that will be “it depends”. If the chartered CSR study section in which an application would have been reviewed has a score-to-percentile curve with far more “grade inflation” than the CSR as a whole (the curve against which SEP priority scores are converted to percentiles), and IF the SEP was similar in its priority scoring to what the chartered CSR study section would have done (a big if!!!!!), then it would be better to be in the SEP. Ultimately, though, things regress to a mean. The dispiriting part, given the primacy of the percentile-based “pay line” in funding decisions, is how much variance there would be between ratings by panel A versus panel B (say, a SEP versus a regular chartered panel, or two regular panels with similar briefs) in scoring the same application.
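
As an aside for readers unfamiliar with the conversion the commenter describes: a percentile is essentially the share of applications in a reference base that scored as well as or better than yours, so the same impact score can land at very different percentiles depending on how leniently the base panel scores. Below is a minimal sketch using made-up reference scores; NIH's actual base definition and rounding rules differ in detail.

```python
def percentile(score: int, base_scores: list[int]) -> float:
    """Share of the reference base with an impact score as good as or better
    (lower is better on the NIH 10-90 scale). Illustrative only."""
    as_good_or_better = sum(1 for s in base_scores if s <= score)
    return 100 * as_good_or_better / len(base_scores)

# The same impact score maps to different percentiles against different bases:
lenient_base = [20, 22, 25, 27, 30, 33, 35, 40, 45, 50]  # more "grade inflation"
broader_base = [30, 33, 35, 38, 40, 43, 45, 50, 55, 60]
print(percentile(35, lenient_base))  # 70.0 -> worse percentile
print(percentile(35, broader_base))  # 30.0 -> better percentile
```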

  1. I don’t doubt the rates are the same, but having served on both SEPs and chartered study sections I always got the feeling that the reviews are a bit more random on SEPs, I think due to a lack of the culture of the SS. This of course goes both ways, with poorer grants doing better and better grants doing worse, whatever that means. The problem is that the noise is a bit bigger in SEPs and IC study sections that don’t meet regularly.

  2. It makes a major difference where the grant is reviewed and which study section reviews it. Please make study sections for SBIR grants, define what a translational grant is, and design different criteria than for basic science grants. Basic scientists do not know how to appreciate translational grants, and you cannot use the same criteria as for hypothesis-driven grants when assessing translational grants, definitely not SBIR grants.

    You have to eliminate review criteria flaws. Also increase funding for translational grants; otherwise we can keep on doing basic research which has no use unless translated to human use.

    NIH HAS TO GET this point BEFORE BLINDLY CONTINUING TO FUND HYPOTHESIS-DRIVEN GRANTS.

    We are publishing papers like crazy without any change in the clinic.

  3. Perhaps I don’t understand the way grants are scored, but aren’t they given percentiles within their committee? If so, the percent funded can’t be much different. It might be more informative to compare how many R03 or R21 applications are funded, as well as the average impact scores of R01 applications.

  4. These statistics do not tell you anything. They can be manipulated however one wants them to be. With all due respect, it matters who reviews your grant let alone which study section it is reviewed by!! The current grant review process is highly subjective and it is nothing less than a lottery draw. I believe that you are “out-of-touch” with the current state of affairs. It is sad that you are trying to force something down our throat rather than face the daring truth.

  5. Having served on study sections and participated in many institute workshops, as well as submitting grants, I can attest that the NIH invests great effort to ensure a fair review process at the study section level and to advance science and medicine at the Institute level. Moreover, there are many forums for investigators to provide feedback, and that feedback is always considered when procedures are updated.

  6. With all due respect, CSR has to update the review process. In my opinion, there are two major hurdles for young investigators: 1) the single-resubmission policy, and 2) the changing rosters of study sections. I am sure there are thousands of young investigators trying their best, with limited resources and personnel, to improve their grants, but who are unable to reach a fundable score by the A1. After all that, changing the specific aims for a new submission is nearly impossible. In summary, adding another resubmission (A2) and sending the A1 grants back to, if possible, the same set of primary, secondary, and tertiary reviewers would help. Just my suggestion.

    1. In summary, adding another resubmission (A2) and sending the A1 grants back to, if possible, the same set of primary, secondary, and tertiary reviewers would help.

      Except this is exactly what was the case not so very long ago, and young investigators were hardly faring any better. It was A2-and-out rather than A1-and-out: same process, it just took longer. The more aggressive screening of new grants that are in reality just another revision is perhaps a point, but really the largest influence now versus 5 or 10 years ago is the budget stagnation.

      People in science have been bemoaning the plight of young investigators since the 50s, and the NIH has generated a series of dramatic initiatives (R37! Checkbox! ESI!) that have had little influence, at least to the extent of being viewed as a permanent fix, since the problem keeps arising every decade or so. This is because the system fights back. In the R37 era, sections started essentially *requiring* the young to have one of these 5-year, low-budget dogs before being “allowed” to get an R01. When NIH started picking up NI grants in 2007 or so to balance their numbers, study sections fought back with ever lower scores for NI/ESI apps. Given the complaining about special ESI treatment and the skipping over of their generation, I bet the current crop of Associate Professors has a special bias against ESI-qualifying apps and those from investigators who got their start under ESI rules.

      1. I completely agree with S. Ahmed regarding A2 resubmission. My last grant was funded as an A2 even though it was triaged as an A0. For a new investigator, it is very difficult to figure out from the start what a particular study section wants to see in a grant application. As a result, you get 10-15 suggestions on your triaged A0, which has 2s and 1s for significance and innovation and 6s and 7s for approach. Then you have to make huge changes to the grant just to get another set of minor suggestions on the A1 and a score that is close to fundable, but not quite. Were an A2 submission allowed, such a grant would no doubt be funded. However, under the current system it will not be. The current policies are simply designed to reduce the number of young investigators in the country. I understand why, but why don’t you openly say so!

        1. How are all these ESI policies to fund inferior proposals that cannot compete “designed to reduce the number” of young investigators?

  7. I find the discussion interesting and want to make a couple of additional points about review. Having been on an IC panel earlier under the old scheme, there certainly was a lot of work in developing thoughtful comments and critique. Under the new scheme, it concerns me that it is too easy for reviewers (and clearly not all are of the same caliber, frankly) to dash off one-liners on the strengths and weaknesses that sometimes wildly miss the mark. In addition, a number of such opinions appear ‘uninformative’. Reviewing many summaries from both successful and unsuccessful applications in our Center, several things are concerning; in the unsuccessful ones, perhaps the most troubling is several instances of wild outliers: two reviewers are highly (or, on an A1, moderately) supportive and enthusiastic, and a third thinks the application relatively worthless. Strongly held prejudices also come into play: certain strains of work that in many others’ opinions seem to be moving science forward (or in new directions) are considered fundamental dead ends or inappropriate methodologies by, usually, one reviewer. Should a first submission not be discussed, the decision to resubmit becomes difficult. I am not aware whether a formal review of the new review scheme, which I thought was planned, has actually occurred or is occurring. In unburdening reviewers, I feel the new NIH review approach has done a disservice to applicants.

    1. I wonder why nobody ever seems to consider that the reviewers who are in favor of the application are the ones with “strongly held prejudices”? And does anyone ever fess up to reviewers having made clear errors in giving a fundable score? In my years of reviewing I can’t recall an application that was perfect. So if the reviewers were in favor, many items that might have drawn fire were overlooked. Somehow I never see any applicants complaining about errors on their funded apps…..

  8. I believe that the current NIH scientific review system is outdated and flawed. What NIH needs is an objective criterion for making funding decisions, at least in the case of a first R01 (additional R01s may be funded on the basis of the merit of written proposals, as is currently the case). Such a metric could be based solely on the scientific productivity (p) of an applicant over the past five years. The p value could be determined from both the quantity (number of papers) and the quality (impact factor of the journal in which a given paper is published) of papers. A cut-off could be set for the p value such that investigators (at faculty or equivalent-level positions) whose scientific productivity falls within the top 33rd percentile are funded at any one time. While such a mechanism would be fair and just, it is unlikely to be implemented by those who control the NIH, as it would level the playing field rather than hand big shots an inherent advantage over a scientist of average productivity. Just some thoughts.
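
The comment above does not pin down a formula, but one possible reading is p as a sum of papers weighted by journal impact factor over the past five years, with funding limited to the top third of applicants. Below is a minimal sketch of that assumed reading; the formula, cutoff, and numbers are illustrative assumptions, not anything defined by the commenter or by NIH.

```python
import statistics

def productivity(impact_factors: list[float]) -> float:
    """Assumed p: papers weighted by journal impact factor (here, a plain sum)."""
    return sum(impact_factors)

# Hypothetical applicants and the impact factors of their last five years of papers.
applicants = {
    "A": [3.2, 4.1, 2.8],
    "B": [10.5, 8.7, 9.9, 12.1],
    "C": [1.9, 2.2],
}

p_values = {name: productivity(ifs) for name, ifs in applicants.items()}
cutoff = statistics.quantiles(p_values.values(), n=3)[-1]  # top-third threshold
funded = [name for name, p in p_values.items() if p >= cutoff]
print(p_values, "cutoff:", round(cutoff, 2), "funded:", funded)
```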

  9. The data for R01s and RFAs are good; however, the question remains which grant mechanisms are driving the overall difference by locus of review. It would be nice to follow up on this and show where there is an actual difference.

    1. Applicants may use the application cover letter to request assignment of their application to a particular NIH institute or center for funding consideration. They can also request a particular study section or integrated review group for review. Not all requests can be honored, but we carefully consider all requests.
