Application Success Rates Decline in 2013

As we go into the last few weeks of the year, my office is busy working up the final numbers on application data for this year. Although the complete set of application data, tables, and graphs will not be available until later in January, I thought I would provide an early snapshot of success rates for 2013 competing research project grant (RPG) applications and awards.

We received 49,581 competing RPG applications at NIH in fiscal year 2013, a slight decline from last year (51,313 applications in FY2012).

In the blog post I co-authored with NIH Director Francis Collins, we discussed the impact of the sequester and the potential downward trend in grant application success rates. Now we have the numbers to show this. In FY2013 we made 8,310 competing RPG awards, 722 fewer than in FY2012. This puts the overall RPG success rate at 16.8%, a decline from the 17.6% reported in FY2012. One might have expected a bigger drop in the success rate, since we made about 8% fewer competing awards this year, but the decline in the number of applications accounts for part of the difference.
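
Roughly speaking, the success rate is just the number of competing awards made divided by the number of competing applications, so the quoted rates line up with the counts above (the FY2012 award count of 9,032 is inferred here from the 722-award difference):

$$\text{FY2012: } \frac{9{,}032}{51{,}313} \approx 17.6\% \qquad\qquad \text{FY2013: } \frac{8{,}310}{49{,}581} \approx 16.8\%$$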

We’ll be working hard over the next few weeks to update the NIH Data Book and Success Rates page on RePORT with final FY2013 data, tables, and graphs. Stay tuned for my follow-up in January, which will take a more complete look at the FY2013 numbers compared with last year’s data and offer a fuller explanation of the new figures.

28 Comments

  1. The stated success rate (17.6%) for FY 2012 is misleading for several reasons:

    1) It includes mechanisms like Merit Awards [R37] and High-Priority, Short-Term Projects [R56], which are not peer-reviewed competing grant applications. These are essentially gifts: unsolicited applications cannot be submitted, much less awarded. Unsurprisingly, the “success rates” for these mechanisms are 100%.

    2) Competing renewals, other renewals (e.g., R37 and R56), and supplements are also included, and all of these have very high success rates—specifically, an aggregate success rate of 37.2% (weighted average).

    A more germane figure for the vast majority of applicants is the success rate for the most common RPG mechanisms, namely the R03, R21, and R01. The success rates for these mechanisms were 19.9%, 14.1%, and 14.9%, respectively, in FY 2012. The aggregate success rate for these three mechanisms was 6,166 awards out of 40,985 applications, or 15.0%.

    Since the FY 2013 figure (16.8%) doubtless includes the misleading mechanisms discussed above for FY 2012, it too will be inflated. My guess is that the overall success rate for 2013 for NEW (not competing renewals or supplements) R03, R21, and R01 awards is somewhat less than 15%, and probably closer to 12% or 13%. This would represent yet another all-time low.

  2. It would be very helpful if you could break down the data to show:

    1) Success rates for A0 vs. A1 submissions for NEW applications (i.e., not competing renewals).

    2) Success rates for A0 vs. A1 submissions for competing renewal applications.

    3) Success rates for applications that were “Not Discussed” on the A0 submission, but were revised and resubmitted anyway as an A1 submission.

    The first and second would give the community a realistic estimate of how long it might take to obtain funding, and would help investigators plan ahead when considering personnel and other budgeting issues in their labs and making projections. In addition, these data would give more pertinent estimates of the probability of success for a new vs. a competing renewal application. As you know, success rates for competing renewals are more than twice those for new applications using the same mechanism (ratios range from 2.3 for the R01, P01, and UM1 mechanisms to 3.4 for the U19 mechanism in FY 2012).

    The third would give the community some realistic, evidence-based guidance when deciding what to do with applications that were unscored (“Not Discussed”). I realize the NIH discourages resubmission of applications that were unscored, but that “party line” suggestion would benefit from quantitative estimates of the chances of success for previously unscored applications.

  3. A few months ago, I emailed Rock Talk to ask the same question as Mr. Doherty’s question #3. My query was routed to the Freedom of Information Act Office, and a few months later I received a table of data covering A0 R01s received between FY2010 and FY2012 (ARRA funds and solicited applications were excluded). Overall at NIH, 2.3% of new R01s that were “not scored” as A0s were funded as A1s (range at different ICs: 0.0% to 8.4%), and 8.7% of renewals that were unscored as A0s were funded as A1s (range: 0.0% to 25.7%). These data have at least two limitations. First, funding decisions made in 2013 were not included, so the actual success rates are likely a bit higher. Second, the table does not indicate how many of the unscored A0s were resubmitted.

  4. Thank you, Terence and Deborah, for your attempts to shine light on these numbers. It puzzles me why NIH is intent on making things appear better than they are. This makes no sense and does not serve the mission. In this kind of budgetary climate, Congress, other politicians, and appointed administrators will not respond to issues where people are not “hurting”, and all efforts to make it look like we are not hurting lessen their efforts to help.

    On second thought, it doesn’t really puzzle me. Appointed officials are evaluated by their appointers on the basis of how good their operation looks, not how good it is. These officials are beholden to people who do not want them to complain, demand change, or make needs evident.

    1. I agree. I think it’s better to face facts. They may not always be pleasant, but at least they are true, and strategies based upon things that are true will always do better than strategies based upon skewed perceptions of reality.

      Speaking of facts, note that the link supplied by the Rock Talk blog team (they need an entire team to write a blog that publishes only sporadically??) shows that the success rate for initially unscored competing renewal R01 applications that are revised and resubmitted is 3.8 times that of initially unscored de novo R01 submissions that are likewise revised and resubmitted (consistent with the 8.7% vs. 2.3% figures cited above).

      I have older data showing that competing renewals have closer to twice the likelihood of success of de novo submissions.

      Obviously there is a strong bias towards previously funded applications, probably at least in part because most funded R01s are productive. But it’s hard for me to believe that the source of the “best and brightest science” that the NIH says it funds is THAT heavily biased in the direction of people who already hold grants. Success tends to beget success, but it also tends to beget increasingly conservative, incremental science. That’s the story of NIH funding over the last 50 years and they know it, and that was much of the impetus for revamping the review process several years back. Doesn’t look like things are working well…

      Anyhow, the moral of the story with respect to those statistics is that if your R01 application is unscored and it is not a competing renewal, you can forget about a revision/resubmission. Don’t waste your time, because it’s hopelessly unlikely (to be more precise, you have only about a 2% chance). Better to re-tool and find another way (different mechanism, institute, study section, research plan, RFA, or PA) to submit.

  5. I’ve asked it before and I will ask it again … Sally et al., as a new investigator, honestly, is it even worth submitting ANY new proposals to NIH anymore with these abysmal and unconscionable “success” (or, rather, “failure”) rates? At what point are we going to acknowledge the elephant in the room: that NIH funding for novel and cutting-edge research has been in free fall for over five years and has now hit rock bottom? In my final year as a T32 postdoc @ Yale, and without a lab in sight that can hire me, it really feels like the entire system is beyond broken … and continuing to keep up this “business as usual” farce is both cruel and unusual to my generation of young scientists, who unfortunately missed the boat and are going to end up frying donuts at Krispy Kreme.

    1. I hear ya, JBB. But unfortunately, it gets worse:

      It is increasingly difficult for PhDs to get R01s: partly because of the historically low success rates; partly because most PhDs require years of postdoctoral work to get up to speed and develop a decent CV, and so not infrequently exceed the rather arbitrary time limits of the K awards; and now also because the availability of K awards that PhDs can apply for has been markedly reduced.

      Most of the biggest institutes at the NIH have thrown PhDs under the bus by eliminating (NCI, NIDDK, NINDS) or severely restricting (NIAID, NICHD, NHLBI) the K01 mechanism. The primary reason is that they wanted to divert funding to support physician-scientists via the K08, K23, and K99/R00 mechanisms. This is absurd: the main reason physicians tend to gravitate towards non-research careers is that research makes little economic or practical sense to somebody with the earning power of a practicing, licensed MD, and anyhow vanishingly few physicians went to medical school because they wanted to develop research careers. Physicians can make far more money and have much better job security pursuing clinical careers. Even those physicians who have a real interest in research and can devote significant time and effort to it eventually get pulled away, and I believe fewer than half of those who get K awards ever go on to get R01s. So throwing funds at MD-focused mechanisms cannot possibly work. Salary support from any K award can never compete with the salaries that physicians can command in the clinical marketplace, and the risks inherent in research are too great, relative to the compensation, for most physicians and their families to tolerate.

      But the real tragedy is that these policies force a large proportion of an entire generation of talented, innovative PhDs who could and would devote 100% effort to translational research into industry or out of research entirely.

      I can’t see how THAT sort of strategy will fuel NIH-funded innovation and translation in biomedical research. But I CAN see how that strategy seriously jeopardizes the ability of the US to lead the world in biomedical innovation, and if US innovation erodes significantly, that will have devastating economic and political consequences for sure.

        1. Hi Terence. I agree with many of your posts. I do wonder, however, if you mean “clinician scientists” rather than “physician scientists”. I am a speech-language pathologist PhD with a K23. Are you suggesting that the funds for clinician-centered work are increasing while basic science funding for PhDs without a clinical degree is decreasing? I don’t think it’s a PhD vs. MD dichotomy.

        1. Hi Ianessa,

          I mean physician scientists, and I’d have to disagree: I DO think it is an MD (and MD/PhD) vs. PhD shifting of funds. This is distinct from clinical research vs. basic science research. I don’t see any bias towards (or against) clinical research, but I do see a bias towards trying to increase numbers of MD scientists, and that must come at the expense of PhD scientists. That’s just the math, since dollars are not increasing and have instead decreased.

          The NIH has noted that they are disturbed by trends showing decreasing commitment by MDs to research, and hope to reverse these trends. Their motivation is largely that they view MDs as essential to translational research, and presume that only practicing physicians have the insight into clinical and disease contexts to appropriately shepherd translational research directions in ways that will optimally impact health and well-being.

          Personally, I think that is nonsense, and there’s no evidence I know of to support that contention. You don’t have to have a license to practice medicine to be able to see where opportunities to improve clinical treatments and diagnostic and preventative strategies are. And, you don’t have to be a practicing physician to develop innovative approaches, or to translate (i.e., commercialize) new approaches into clinical practice. In fact, much of this has been and is currently accomplished by those who are not practicing physicians.

          I am not suggesting that MDs are useless, but I do maintain they are not essential to successful translation. And I think the large cadre of PhDs this country produces is an enormously valuable and innovative natural resource that has been the source of much of the innovation to date. Diverting investment away from that resource can be expected to erode both the quality and quantity of biomedical innovation going forward.

          Needless to say, I don’t consider that a shrewd strategic decision.

    2. You are not alone, Sir. The system is gravitating heavily towards well-established senior scientists who have 10 postdocs, 8 students, and 4 techs working for them. These senior scientists have 4-5 grants but produce 10-15 papers for the whole grant cycle, which is ridiculous. They use the same set of papers in the annual reports for all of their grants, which is not fair at all. They keep writing review articles with hardly any new material in them. These same senior scientists expect juniors with one grant to produce 15-20 papers for the entire grant cycle. The system has been broken for a long time, but they say there is no better system (because this system works for them). It is a tragedy that the NIH administration fails to expect 4-5 times the number of papers when someone has 4-5 grants. This has been conveyed to them for many years, but nobody listens.

      1. 1. The NIH review should be at least a two-step process. The first step should focus only on the scientific merit of the proposals; at this step, the NIH reviewers should NOT be given information on the identity of the PIs. The second step of the review process should evaluate the merits of the PI and his/her institution; this step should be performed by a separate panel of reviewers.
        2. There should be only one R01 per PI. It is impossible for one PI to lead several projects. Currently, PIs with more than one R01 are assisted by several very capable people in the lab. I assume that these “assistants” are PhDs with many years of experience who deserve to have funded awards in their own names.
        3. The funding (including through the R01 mechanism) should be given for a maximum of three years. There could be a provision that exceptionally promising results from the first three years would allow for an additional two years of funding.

    3. At least you can be assured some job security in that profession. Regardless of the state of the economy, government funding, and the sociopolitical environment, the great American public will always want donuts.

  6. Dear Jaded: The way the world works for trainees now is that you are unlikely to be competitive for a research-intensive assistant professor position at a major university unless you have a transitional award. These awards lead to job offers. You need to sit down with your mentor, if you haven’t done so already, and push for the support to apply for a K99 or other K-series award. Some foundations, like the American Heart Association, offer them.

    I think that everyone dislikes the current situation, but we have to deal with the current reality. My postdoc just got one this year, and she has already had two interviews and one offer. I encouraged her to begin submitting a K99 at the end of her first postdoctoral year. It’s different for the mentors as well. Frankly, I have been a little surprised at the number of my fellow faculty who have never heard of a K99 award and/or do not realize that it turns into a modular R01. As mentors, we have to be more aware, more aggressive, and commit more resources to ensure the success of our trainees than ever before.

    I hope that you are able to make the transition. Unfortunately, it is likely to take longer than you anticipated. Still, it is worth it as there are no jobs better than ours. I used to work in the pharmaceutical industry, and remain glad that I left. The freedom and excitement of discovery is like nothing else.

  7. I’m sorry, maybe I completely missed the boat here, but how is grant success (or failure) rate any measure of performance? Isn’t it the result of three factors: number of applications, budget size, and award amounts? Is having a low number of applications (i.e., increasing the success rate) a good thing? I’d say no. Is having a big budget a good thing? Yes, of course, but I’m not sure how that means anything here. Lastly, there are the award amounts; since the award amount depends on the research proposed, and the research is basically chosen based on scientific merit, I’m not sure how this plays in either.
    So could someone please enlighten me on why we are so interested in these success rates?
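
    Roughly speaking, the relationship I have in mind (a deliberate oversimplification that ignores things like multi-year commitments and set-asides) is:

    $$\text{success rate} = \frac{\text{competing awards made}}{\text{applications}} \approx \frac{\text{budget for competing awards} \,/\, \text{average award size}}{\text{number of applications}}$$

    So a “better” success rate could reflect fewer applications or smaller awards just as easily as a bigger budget, which is why I question it as a performance measure.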

    1. PIMT, I think it is much simpler than you imply: Success rates are the probability of obtaining funding. If you can’t obtain funding, then all of the rest (applications, budget size, award amounts, scientific merit) are moot.

      Also, success rates are not a function of these factors (except scientific merit, but only to a limited extent). I’d say anybody who thoroughly understands the realities of the review process (as opposed to the NIH “party line”, that is) would agree with that.

      1. TMD-
        Your phrasing here is inaccurate. But it points to an analysis we would want the NIH to make routine. Success rates are based on applications and not on applicants. The fate of the PI is important for many, many reasons.

  8. What continues to amaze me in these very difficult funding times is the complete disconnect between CSR and the funding institutes. CSR has put together a nebulous set of target objectives, such as significance and innovation, yet the reviewers (which includes me) have NO direction from the institutes as to what they consider to be significant or innovative. Never in my decades of reviewing grants for NIH have I heard anyone from an institute tell me what is important to the institute. As such, the grant writer is in the dark, the reviewer is in the dark, and consequently innovative cutting edge science dies in the study section and only incremental science survives. With funding at the 8th percentile, this double-blind way of grant writing and grant review is broken.

    1. Hi Concerned Scientist,

      Interesting comment.

      NIH program officials might argue that the institutes publish research priorities, and also detail areas of interest in FOAs (other than Parent PAs) and workshop proceedings, etc. I suspect they would further argue that lack of guidance is by design: they intentionally sequester programmatic interests from the review process like church and state, in the belief that not doing so invites problems from conflicts of interest and contamination of the review process.

      There is one well-known former NIAID PO whom I’ve heard say publicly (while he was still working for NIAID), on more than one occasion, that “Reviewers typically haven’t read the RFA or PA they are reviewing grants for, and usually don’t know or care what Program Officers think.” So, some might argue that even if the NIH provided better guidance, many or most reviewers probably wouldn’t pay much attention to it. I would tend to agree with that.

      I suspect reviewers such as yourself, who would actually welcome better guidance from programmatic officials, are fairly unusual.

      The reasons innovative, cutting-edge science tends not to fare well in review are another issue, and an excellent topic for more open discourse that the NIH would be well-advised to foster and participate in. My own view is that this arises primarily because reviewers tend to approach their task in a manner similar to that of an investor evaluating possible investments: they try to judge the return on investment (ROI) and balance that against their judgement of the risk. The proposals with the best risk-adjusted ROI will tend to receive the best scores.
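
      In crude terms (this is just my own back-of-the-envelope mental model, not anything the NIH actually computes):

      $$\text{perceived value of a proposal} \approx P(\text{project delivers}) \times \text{expected scientific payoff}$$

      Innovation raises the potential payoff but, in most reviewers’ minds, lowers the perceived probability of delivery; an unproven PI lowers it further.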

      Incremental science tends to score well because it tends to come from established, productive labs (= good ROI), but it is also relatively low-risk, since the best predictor of success and productivity is past success and productivity (at least in the minds of most reviewers). I think these are major reasons why the success rates of competing renewal R01s are 2.3 times those of de novo R01s (maybe not the only reasons, just the major ones).

      I would bet the farm that the success rates of R01s from PIs who currently hold or have held prior R01s and have demonstrated productivity are at least twice those of relatively untested applicants who are young and have never had an R01 before, even if they have demonstrated good prior productivity. Why? Because junior investigators are something of an unknown quantity, and that equals risk. I have recently seen comments on two grant critiques saying, under the category of “Weaknesses”, that “The PI is a junior investigator.” Yeah, I know, they are not supposed to judge grants that way, but let’s face it: reviewers are human beings and may have their own criteria that might well impact their judgements.

      And I think experienced PIs have a better idea of how much innovation (read: risk) will be palatable to most reviewers, so they will tend to propose projects that might be less innovative (decreased ROI), but significantly more likely to succeed (markedly less risk), compared to less experienced, more junior investigators.

      In light of these considerations, I think it is not surprising that many of the most innovative proposals are not funded.

      But it gets even worse. There is a penalty attached to too much innovation, because it tends to incur unacceptable risk. And junior investigators who propose innovative, risky science will generally not do well simply because they have no track record of NIH-funded success. So the summed risks attributable to innovation plus junior-investigator status become prohibitive to many or most reviewers. Hence funding decisions (which are largely a rubber stamp of the review process) are biased away from innovation and towards less risky (incremental) science, in a sort of study section natural selection process. So the junior investigators who survive tend to be those who are productive but learn to propose less risky (= less innovative, or incremental) science.

        1. Once again, Terence, I am in complete agreement with you. I would even go further by saying that I do not want guidance from POs and SRAs. I’m starting another 4-year term on a study section this spring, and if I don’t become too jaded or nihilistic, I will throw my support behind those proposals that are most creative/innovative, and those that are opportunistic (i.e., exploiting new and exciting opportunities). I have little interest in evaluating the goodness-of-fit between a proposal and an FOA/PA that I had nothing to do with composing.

        I disagree with Concerned Scientist, who wrote: “the grant writer is in the dark, the reviewer is in the dark, and consequently innovative cutting edge science dies in the study section and only incremental science survives.” To the contrary, paying attention to FOAs/PAs favors incremental science because proposals get points merely for being “responsive”, whether or not they are innovative.

        Below this comment, Dr. K makes a good point as well.

  9. A number of applications are never reviewed because they are administratively withdrawn by NIH staff prior to review (e.g., deemed not responsive to an RFA). I know that in some cases close to one half of all the proposals submitted in response to an RFA fall into this category. Could someone from the Rock Talk Team comment on whether such proposals are counted in the presented statistics?

      1. Thank you! So it looks like the presented statistics underestimate the real number of turned-down applications. Moreover, this method of calculation encourages NIH officials to force applications out of contention prior to review so that the statistics look better. Frankly, I suspected as much when I learned that about half of the applications bound for my review panel had been administratively withdrawn, and when I learned of NIH officials writing letters to applicants encouraging them to withdraw their applications “voluntarily”. Does NIH share the numbers of applications that were administratively or voluntarily withdrawn before review?

  10. The Australian NHMRC has it right: one submission, no resubmissions (no fuzzy math on success rates), and you get to reply to the reviewers’ comments prior to the study section meeting. It may require moving to two applications per year, but that is fine. PLEASE adopt this system… It’s too late for me, as I’m 0-12 at NIH and have lost my job, but it will help the next generation. Another way to help is to have peers compete with peers: for example, faculty at years 0-5, 5-10, 10-20, and 20+ would compete within their respective classes.
    50cent

  11. The NIH granting system worked reasonably well when a greater percentage of grants were funded. Now the granting system is broken because of (take your pick) insufficient money available for grants or too many submitted applications. How to fix it: 1) redirect NIH funds towards R01 applications; 2) limit the number of grants (or funds) that any one laboratory can receive; and 3) increase the total NIH appropriation. Methods 1 and 2 may help in the short term, but the long-term fix is increased NIH funding. How to do this? It is pretty obvious that the current method of begging for funds from Congress, with arguments that the work we do will bring medical benefits to all of society, simply does not work. So it is time to use the tactics that have worked so effectively for other interest groups. We need to woo politicians to our cause of increased NIH funding through direct campaign contributions and informational campaigns for and against various politicians based on their viewpoints. There are large numbers of scientists, and if each of us above the level of postdoctoral fellow made substantial contributions, a large war chest could be built up to support politicians who would support increased NIH budgets and oppose those who wouldn’t. Furthermore, it can be argued that the work we would do with NIH funding would benefit everyone. Our societies, working jointly, should establish an independent pro-health research PAC. This PAC should be funded with as much money as possible. It is time that politics, in addition to well-reasoned arguments, be used to increase the NIH budget. And, incidentally, all the discussion in the world about how study sections should function will not solve the current problem; it will just shift funds from one highly deserving research project to another.

  12. There are additional measures of the impact of underfunding of biomedical research via NIH.
    The proportion of total extramural funding in several disciplines that goes to the top ten institutions vs. the bottom 35 is slowly increasing. This means that there will be further erosion of research at smaller institutions. The whole trend is a reflection of a 1%:99% tier structure in biomedical academia. This means fewer research jobs and medical training not informed by direct contact with research.
    A key statistic that NIH does not provide is the percentage of first R01 awardees who manage to renew their grants. This is important because it predicts success at tenure decision time. I don’t know this, but I hypothesize that many institutions are reluctant to hire any new assistant professors because the chances that they will be able to establish a research career are so small.
    NIH has trumpeted the relatively high success rates of first-time applicants. But this number means nothing if a high proportion crash and burn.

  13. Are there any plans to break down success rates by Congressional district? This topic flared up recently.
