Rock Talk

Helping connect you with the NIH perspective

Paylines, Percentiles and Success Rates

I have read or heard much about the dilemma of NIH applicants as they struggle to understand their chances of receiving NIH funding. As budgets flatten and tighten, this discussion has heated up. Declaring that NIH success rates have hovered around 20% for the past five years does little to calm the storm of concern when we hear about shrinking percentiles and paylines. So how is it possible to have a success rate of 20% but a payline at the 7th percentile? Let’s take a few moments to sort out what these terms mean, how the numbers are derived, and why they can differ.

Impact Score

It all starts with the impact score. This score is assigned by reviewers to indicate the scientific and technical merit of an application. Impact scores range from 1 to 9. A score of “1” indicates an exceptionally strong application and “9” indicates an application with substantial weaknesses. (I always wondered why at NIH low = good and high = bad, but that predates me!) In assigning an impact score, reviewers consider each of five scored criteria: significance, investigator, innovation, approach, and environment, along with other factors like protection of human subjects and vertebrate animal care and welfare. Read more about scoring.

Percentile Rank

The percentile rank is based on a ranking of the impact scores assigned by a peer review committee. The percentile rank is normally calculated by ordering the impact score of a particular application against the impact scores of all applications reviewed in the current and the preceding two review rounds. An application that was ranked in the 5th percentile is considered more meritorious than 95% of the applications reviewed by that committee. This kind of ranking permits comparison across committees that may have different scoring behaviors. It is important to note that not all research project grant applications (RPGs) are percentiled. For example, applications submitted in response to a request for applications (RFA) are usually not percentiled. In the absence of a percentile rank, the impact score is used as a direct indicator of the review committee’s assessment. Read more about percentiles.
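To make the ranking concrete, here is a minimal sketch in Python. This is not NIH’s official percentiling procedure, and the pooled scores are invented; it only illustrates the idea of ranking one application’s impact score against all scores from the current and two preceding review rounds:

```python
def percentile_rank(score, pooled_scores):
    # Share of pooled applications whose impact score is as good
    # (i.e. as low) or better than this one, expressed as a percent.
    better_or_equal = sum(1 for s in pooled_scores if s <= score)
    return 100.0 * better_or_equal / len(pooled_scores)

# Toy pool: impact scores from the current and two preceding rounds
pool = [10, 14, 20, 25, 31, 37, 42, 48, 54, 60]
print(percentile_rank(14, pool))  # → 20.0
```

Because the rank is computed within each committee’s own pool of scores, a “20th percentile” means the same thing whether a committee tends to score harshly or generously.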

Payline

Many NIH institutes calculate a percentile rank up to which nearly all R01 applications can be funded. For grant applications that do not receive percentile ranks, the payline may be expressed as an impact score. Institutes that choose to publish paylines in advance (see an example) calculate the payline based on expectations about the availability of funds, application loads, and the average cost of RPGs during the current fiscal year. Other institutes prefer to describe the process for selecting applications for funding (see an example) and then report on the number of applications funded within different percentile ranges at the end of the fiscal year (see an example). Because the NIH is currently operating on a continuing resolution and funding levels for the remainder of this fiscal year are uncertain, most of the NIH institutes have offered less detail this year than in the past.

But remember, even when an IC establishes a payline, applications outside the payline can be paid under justified circumstances if they are a high priority for the particular institute or center. When these select-pay/out-of-order/priority-pay/high-priority-relevance selections are made, other applications within the payline may not be paid because funds are no longer available to support them.

Success Rates

The success rate calculation is always carried out after the close of the fiscal year: the number of applications funded divided by the number of applications reviewed, expressed as a percent. To better reflect the funding of unique research applications, the number of applications is adjusted by removing revisions and correcting for projects where the resubmission (A1) is submitted in the same year as the original application (A0). Read more about success rates.
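A rough sketch of that arithmetic, with invented numbers (this simplifies the published NIH procedure to its core idea: same-fiscal-year A1 resubmissions are removed from the denominator so each project counts once):

```python
def success_rate(funded, reviewed, same_year_resubmissions):
    # Awards divided by applications reviewed, after collapsing
    # same-fiscal-year A0/A1 pairs into a single application.
    adjusted_reviewed = reviewed - same_year_resubmissions
    return 100.0 * funded / adjusted_reviewed

# Toy fiscal year: 1,000 applications reviewed, of which 100 are
# A1s resubmitted in the same year as their A0, and 180 awards made.
print(success_rate(funded=180, reviewed=1000, same_year_resubmissions=100))  # → 20.0
```

Note how the adjustment alone moves the figure: without it, 180/1000 would read as 18%, not 20%.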

The Answer

Now we are equipped to answer our earlier question. How is it possible to have a success rate of 20% but a payline at the 7th percentile? There are several real-life reasons why paylines (the ones that use percentiles) can be either higher or lower than success rates.

  • Applications that are not percentiled are still factored into the success rate calculation. Thus, funding a number of awards that are not assigned percentiles will increase the success rate without changing the payline.
  • The success rate for a particular fiscal year is a reflection of the funded applications and can include applications reviewed in the previous fiscal year; whereas, the payline encompasses only applications reviewed in that fiscal year. So awarding applications that were reviewed in the previous year will also increase the success rate.
  • The average quality of the applications assigned to an institute will also affect its payline. If an institute happens to receive a set of applications with very good (low) percentile scores, its success rate will be higher than its payline, all else being equal. For example, in fiscal year 2010, the NIGMS R01 success rate was about 27% but the midpoint of the funding curve occurred close to the 21st percentile.
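A toy calculation (all numbers invented) illustrates the first bullet’s mechanism: funding non-percentiled RFA applications at a higher rate lifts the overall success rate while the payline, which applies only to percentiled applications, stays put:

```python
# Percentiled pool: payline at roughly the 7th percentile
percentiled_reviewed = 900
percentiled_funded = 63

# Non-percentiled pool (e.g. RFA responses), funded at a higher rate
rfa_reviewed = 100
rfa_funded = 37

# Success rate counts both pools together
success = 100.0 * (percentiled_funded + rfa_funded) / (percentiled_reviewed + rfa_reviewed)
print(success)  # → 10.0
```

Here the institute’s payline is 7% but its reported success rate is 10%; the other bullets compound this gap further.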

Check out more reports on RPG success rates broken down by year (2001 to 2010) and IC.

Whew, you made it through. The difference between paylines, percentiles and success rates remains a confusing topic because of the compounding factors that rule out a simple linear relationship. You need to consider all the factors when assessing the potential for an individual application to be funded. Your best advisor on this issue, because of the differences in the ICs and programs, is your NIH program official. Give him or her a call.

55 thoughts on “Paylines, Percentiles and Success Rates”

  1. The truth is that reviewers have only 2 criteria in mind: Fund, Don’t Fund.
    That’s why, no matter what terms you use (poor, fair, good, very good, excellent, outstanding, or the newer 9 categories), there are only those 2 that count. When money was available there was a third category: fund if funds are available.
    As the Fund category gets harder to achieve, the scores all shift toward 1 for those reviewers who think the grant should be funded.
    Bunching scores for “do not fund” is unnecessary, as any score or category outside of outstanding or 1.0 +/- a small variation (even excellent) will do to render most grants unfundable. Now, to add jeopardy upon jeopardy, there is the undefined “impact” factor. It is not, as the article suggests, an average of 5 criteria; it is a subjective summary score that is open to prejudice.

  2. Ok, this is great to have summarized so succinctly. Thank you!

    However, many of us are concerned that the NIH continues to provide the public and Congress with the Success Rate as a measure of funding competition. As pointed out here, the Success Rate is a trailing statistic and can be so distorted as to not reflect the reality on the street, which is far closer to the payline.

    Now, a confounding factor is the opportunity to resubmit, but making a special accounting for that by collapsing the A0 and A1 together without similar adjustments for the distorting effects of renewals, RFA’s and other skewing factors, seems disingenuous. This is far too much like our unemployment numbers and other highly massaged government statistics. No-one knows what they really measure or mean. In the end, Congress is going to be confused. Some numbers suggest a crisis, some indicate solid support of investigator-initiated proposals.

    NIGMS has recently gone to considerable lengths to allow us all to see the raw data behind the success rates and other relevant data. It would be great if NIH made a commitment to offer similar access to data for all institutes.

    Thanks again!

    Steve

  3. I have served as a Charter member of a number of different study sections during the course of my career, and when funds are flush scoring works well; however, when funds are extremely tight one single overriding factor determines an application’s outcome: do we want to fund this investigator or not? It is virtually impossible in my mind to quantify the differences in the merits and significance of applications that score and bunch at 1.1-1.5 (three scores of 1.1 versus three scores of 1.5)! I am coming to the conclusion that it might be fairer to award a consistent level of funding based on a historical actuarial analysis with ten-year review cycles.

  4. Bottom line – it’s very tough out there. The differences in quality between those that make the cut and those that don’t can be negligible. Tens of thousands of man-hours are being spent by highly trained experienced investigators to write grants, rather than make discoveries and further medical science. Imagine how much more progress could be made if that same level of effort were applied doing productive research rather than spending most of one’s time writing grants. Young PhDs and post-docs are pursuing alternate careers because they don’t think there’s a stable future in academic research. I find it very sad.

    • As a new investigator going through this process, I have to say this is incredibly true. Having received a score in the 19th percentile, but knowing it is still not good enough is very disheartening. It really does make one think their time could be better spent in a corporate or government setting where at least time isn’t spent writing grant after grant. We (new investigators) want to produce something besides just paperwork!

  5. I guess this explanation is technically correct, but when the leader of the study section prefaces a grant discussion by looking around the table and saying “This is an important grant from a long established investigator…” then one has to wonder if the system is objective. I guess once all the “established investigators” get their funding, the NIH could recalculate what is a real world payline for those who are foreign / unknown to the study section members.

    Study sections are human endeavours, and like all human endeavours they’re not necessarily objective. I’d rather the NIH not spend time trying to accumulate stats that pretend to show us that humans, when assembled in a group, are putatively objective. They’re not. Let’s just move on rather than delude ourselves.

  6. I agree with Bob that many of us are spending so much of our precious time writing grants rather than analyzing data or writing papers. We all have limited amount of time for writing and spending time writing grants means that I am not writing papers. Then, you get comments back as “not productive”. As a mid-level investigator, I am not a really “established” or “new” investigator. I feel stuck and frustrated.

    • I completely agree with Bob and Chris. I find myself in the tenure-track yet not tenured rat race with grant writing pulling me away from writing papers and other activities like mentoring graduate students. Maybe the NIH could make some consideration for those of us trying to renew our first R01 since the personal stakes are so high – most institutions base tenure on R01 grant renewal. There is no realistic way we can be as productive as an established investigator but we no longer benefit from new investigator status – and for those of us who spent 5 years funded on a K Award after a 2-3 year post-doc, we no longer fall under the early investigator status that is defined as within 10 years of completing their terminal degree (not to mention anyone taking time off to have a family). Maybe an “early-stage investigator” could also include those submitting their first R01 competing renewal?

      • Yes indeed. These investigators should be given the opportunity to produce and return their best results yet. From an investment point of view, it would not be wise to truncate years of training and first accomplishments. The efforts that taxpayers have already made to support these early-career investigators would thus come to fruition as they turn (if given the opportunity) into mid-career investigators.

  7. Bravo. The explanation of percentile rank and success rates is excellent. Thank you very much for clarifying the differences in these statistics.

  8. Of course NIH wants to put the best spin on this issue! A better statistic for estimating the chances of funding would be a ‘lifetime risk’: the probability of funding for an application up until the A1 fails to be funded (as it does most of the time).
    I’m a really well-established investigator and I still get plenty of off-the-wall reviews (are they reading?). Many junior investigators are having a rough time getting started, the 5% advantage notwithstanding.

    Everybody knows funding is dire and getting worse. The cost to our nation in lost scientific development is incalculable!
    mnh

  9. The review process at the study sections is inherently screwed, as pointed out by some. How can successive revisions continue to get poorer scores than the original, with comments from the second review dinging the application for something that was in the original application and that the first reviewers wanted changed? Or how can the first reviewer be very happy with the amount and quality of preliminary data, but the second review find it inadequate when even more and better data have been added?

    As someone said, it seems to be all about one person saying “I want this funded” or one person saying “I don’t want this funded” and that decision being based on who is known to the reviewer.

  10. Last year I published an editorial entitled “NIH, Science, and Baseball: Time for Reform?” in Lipids 45:889-890. If you care to examine the editorial please do so, but the basic premise is that the NIH funding model is broken and has been for some time. Dr. Teuscher put it succinctly when he stated it is impossible to distinguish between a 1.1 versus a 1.5 grant. Which should be funded, which shouldn’t be funded? Cannot tell, so let’s flip a coin. Clearly not a great way to run an extramural program. I agree with the comments by many that when paylines were around 30%, the good grants got funded, and perhaps some borderline grants as well. Now that is not the case, and in the end are we really working to push through the weird and crazy ideas that frankly drive research? Or are we merely funding safe, incremental increases in knowledge because of increased aversion to risk?

  11. At the current funding rate, it does not matter anymore whether the success rate is based on percentile or percent. Even the funding of the topmost grants in a study section has become questionable because of the way NIH calculates percentiles. The CSR has simply distorted the whole review process. For example, each grant has only two chances to get funded, but what happens if grants in the outstanding category are not funded on A1 submission? Why do investigators need to change the objective and specific aims for a new submission? There should be an option allowing grants ranking in the outstanding or high-excellent category (up to the 20th percentile) an A2 submission. The CSR needs to wake up and be realistic about the guidelines it imposes on investigators. It doesn’t need to pretend to be innovative; it needs to be realistic and pragmatic in creating guidelines. The CSR should stop inventing new guidelines all the time so that investigators are not confused anymore. They (investigators) are already confused, frustrated, disheartened and demoralized by the current funding situation. The CSR must stop adding salt to the wound. We must ask the CSR what its advice is to an investigator whose grant in the outstanding category is not funded. The time has come to consider seriously how we can shape and direct the CSR to be more constructive and get thoughtful reviews of grants. It serves the scientific community, and it is our collective responsibility to advise it to stay on the right course.

  12. I agree with Cory and Michael: regardless of whether we score on 9, 90 or 900, in the end the reviewer’s decision is: do I want this funded or not, followed by figuring out the all-mysterious impact score based on that decision. NIH is trying to make the best out of a tight financial situation, so they periodically change the scoring system, criteria, page limit, etc. If we really want to make the system fairer, the review process should be completely anonymous (the reviewer doesn’t know who is applying) or open both ways (I know who reviewed my grant and the reviewer knows who the applicant is). In the current system it might be just too tempting to throw a wrench into a competitor’s research and hide in the anonymity of the study section.

  13. I’m sorry, but all of these explanations still don’t clarify why there is a near THREE fold difference in success rate compared to paylines. The explanations are for factors that are subtle and will have small effects on the differences.

    1) “Applications that are not percentiled are still factored into the success rate calculation. Thus, funding a number of awards that are not assigned percentiles will increase the success rate without changing the payline.”
    This would only be true if the rate at which these types of grants are funded is much higher than for percentiled grants. If there are grants that are not assigned percentiles, then surely they will increase both the number of total applications and total successes. So unless this ratio is different for this type of submission, it should not change the overall success ratio at all.

    2) “The success rate for a particular fiscal year is a reflection of the funded applications and can include applications reviewed in the previous fiscal year; whereas, the payline encompasses only applications reviewed in that fiscal year. So awarding applications that were reviewed in the previous year will also increase the success rate.”

    Again, this shouldn’t change the ratio much, if at all. No matter what length of time is examined, both the successful and the total grants should scale with that length of time. The ratio should really not change. It’s not as if, because you are awarding successful grants from the previous fiscal year, you are not also looking at the total (awarded and unawarded) in that same time period.

    3) “The average quality of the applications assigned to an institute will also affect its payline. If an institute happens to receive a set of applications with very good (low) percentile scores, its success rate will be higher than its payline, all else being equal. For example, in fiscal year 2010, the NIGMS R01 success rate was about 27% but the midpoint of the funding curve occurred close to the 21st percentile.”

    This I truly don’t get. My understanding is that each institute has a certain budget. The main thing that affects the payline is their budget. When they set the payline, can they really be off by a factor of 2-3? And EVERY institute is off by that much? EVERY year? What exactly is going on here?

    Bottom line: why are paylines 2-3 fold different from the “success rate”? This is a huge difference that the above 3 explanations don’t seem to address. Which should we believe, the payline or the success rate?
    Can it be as simple as the institutes end up funding most grants from the “wrong” side of the payline, based on programmatic or program officer preference?
    Seriously, what the heck is going on here? Because if true funding rates are 20% and not 7%, it makes it much more worthwhile to apply for a grant. So which number is right? Thank you.

    • Admittedly, the example we used was exaggerated somewhat to make a point. In actuality, such extreme differences are rare. It’s important to keep in mind that percentile scores are an indicator of rank order among applications specific to a study section, but applications are then distributed among several institutes each with its own budget and success rate. If all applications were reviewed by a single study section and NIH were one big institute with a single payline, you would find that the payline and success rate would be very similar; much more similar than they are now for any particular institute, with its particular set of initiatives, with its particular mix of applications, from a particular mixture of study sections, in any particular year. Also, remember my original post mentions that the success rate calculation collapses resubmissions into a single application by removing resubmissions from the success rate denominator when the original application is submitted in the same year. This can have a significant effect on the success rate.

      • I agree with Sam H that the explanation doesn’t explain such a huge difference. I also agree with Sam H that some of the explanations don’t seem significant. But as Sally says, maybe it was a little exaggerated. As for the explanations here is my take on them:
        1) Non-percentiled applications: Since they come from a smaller pool, their success rates can easily be much higher than for the general pool. For example, specific RFAs may have only a handful of applications, so funding just a couple of them is a much higher percentage. However, the significance of this depends on the total number of applications funded this way, and since we already assumed the effect comes from a smaller group of applications, it is likely to have a small effect on the overall success rate.
        2) Previous-year submissions: Not sure how this works, but if they are funding applications from previous years then the number funded goes up without changing the number reviewed. Again, I would think this would be a small number compared to the total funded and have little effect. On the other hand, if this refers to the resubmission calculation change that was mentioned, that could have quite an effect. If you guess that about 1/6 of applications are resubmissions that year and they are removed for the success calculation, the number of applications goes down to 5/6, which is like multiplying the funding line by 6/5, or 120%.
        3) Different Institutes: Since each institute has its own budget size and different number of application that go to it, each institute will have different paylines and success rates that may be quite different. While NICHD might be 7%, NIAID might be 14% and if the larger institutes have a higher payline the overall success rate will be weighted toward them. That said however, my experience is that it is just the opposite of that and the larger institutes have more applications going to them and tend to have lower paylines (but I could be wrong).
        4) Payline is Conservative: The published payline during the year is much more conservative than the ‘final’ payline since at the end of the year the institutes will go back and fund many applications that weren’t funded under the original payline based on how much money is left in the budget at the end of the year.
        So I hope my explanation is helpful (and correct). And although I understand how the two numbers are different, I agree it does seem rather significant.

        Finally, I agree with many comments about the problems with the system and have seen the problems from both sides of the review process. I have personal experience with scores going up despite addressing all the reviewers’ concerns, with getting great individual scores (1-3) and then an overall impact of 5, with getting great pluses and no negatives but poor scores, with reviewers making exactly opposite comments, etc. But I will ask all of you who are complaining about the system to make suggestions about how to improve it. The few suggestions I have seen I don’t think will make any overall improvement. So I call for constructive criticism, because I too would like to see improvements.

  14. One pervasive conviction among funded senior investigators is about the RFAs, which garner a fair amount of money. Many persons I have spoken with are convinced that certain RFAs are “cooked” to fund certain groups or investigators.

    Additionally, a comment I received on one of my applications was: “does not have adequate independent funding”. How am I going to qualify if you never fund me in the first place?

    • The ARRA announcement seemed like such a “cooked-to-order” announcement. Some Specific Challenge Topics appeared to be directly copied from someone’s abstracts or specific aims (including phrases like “We propose that…” and direct reference to some specific institutions and companies). It looked like there was no effort to conceal this aspect of the announcement, and I know many people who were sufficiently disheartened to stay away from the competition.

  15. I really think that in these hard times, with so many tenure-track faculty and funding levels so low, NIH should make a clear policy to limit the number of R01 grants one can receive. It becomes very difficult for new investigators to compete with the old folks who have been in business for a long time. We should have a very reasonable separate allocation for new investigators so that they can get funded and tenured, as long as these new people remain productive at the level at which they were ranked when applying for the grant.

  16. I guess another question is, why are we writing to this web site? Do these comments get read, let alone affect anything? Or is this simply a feel-good, semi-soothing opportunity to vent, full of sound and fury, signifying nothing?

    • We do indeed read every comment, Sam! Did you see Sally’s response to your original post on this topic earlier today?

  17. With the new changes in application rules, each grant has only two chances to get funded, that leaves outstanding grants that are not funded in A1 submissions no place to go: specific aims and goals are to be jettisoned. I completely agree with the comment above that says that A1 grants ranking in the outstanding or high excellent category (up to 20 percentile) should be allowed for A2 submission. This would allow the best grants that are all bunched at the same (excellent but unfundable) percentile to benefit from added data or improved study design. It would also help prevent the cronyism that gets a grant funded when it is scored at nearly the same level as a competing grant—a second resubmission could allow for further improvements that could separate 2 closely scored grants.

    • I agree completely that non-funded A1 grants receiving an outstanding or excellent score should be allowed an A2 submission.

    • I would like to see an explanation from the CSR as to their perception about what the new, A1-only revision system really does versus what they *think* it does. I understand their concept that it is supposed to encourage more high quality submissions, but this is really a fallacy: why would anyone *knowingly* spend months on an R01 submission that they did not think was an example of their good work? Why would we want to deal with the paperwork, local grants officials, etc… etc… all for an application that we ourselves do not think is high quality? At face value it’s just an illogical concept. This line of argument makes it sound like submitting R01 applications is something investigators do on a whim, like buying a chocolate bar while waiting at the counter to pay for the gas.

      Second: The concern is that, given two revisions, grant applications were basically being steered into a trajectory defined by the reviewers. Well, does the CSR really believe that applicants are not following the reviewer’s preferences when there is only one revision? Of course they do! We get comments back to address them, and all grant applications are inevitably steered by the review committee. This doesn’t matter if there is only one revision, or two, or three.

      Third point: perhaps there could be a blog post about what researchers are really supposed to do with applications that fall a percentile point away from being funded. If I’m studying kidney disease and my application scores 11th percentile and they fund to the 10th, do I ditch it and become a neuroscientist now? Is the message really that I should not continue working on this project that “experts in the field” deemed to be in the top 11% of projects they reviewed? The repercussions of this “concept” really work *against* NIH’s pursuit of the best science available.

      My suggestion: There needs to be more reviewers per grant application. They’ve been shortened, and therefore they deserve more eyes on them. Put six reviewers on each. I’ll happily commit time to the system. The subsequent reviews should then be adjudicated by CSR staff so that concerns in *common* amongst the 6 reviewers are considered. Is it only reviewer #2 that thinks the data in Fig. 3 are not adequate? Let them battle it out through a secure online forum accessible by the 6 reviewers. This would remove that one squeaky wheel that is currently empowered by the system to sandbag an application. At the end of the process, the investigator should be given the comments and a full transcript of the online discussion so that s/he has a better understanding of where the real issues lie.

      Last point: There should be transcripts of study section meetings. Applicant names can be changed to the grant number to preserve the privacy of applicants, and reviewers can be identified as Reviewer 1, 2, and so on. This study section is a taxpayer funded enterprise, and there is no expectation of privacy with regards to the reviewer comments. A little transparency in that regard would help minimize the discussions of personalities and focus more on the science.

  18. First, I think the success rate should also be calculated based on grants awarded only in their first year (or initial year) during a given fiscal year. This is because an R01 has a period of 3 to 5 years and an R21 has a period of 2 years.

    Second, given the different personalities and knowledge of reviewers, even in the same study section, it is impossible to distinguish the quality of proposals with marks on a 1-to-9 scale. That is, a proposal with a score of 2 is almost as good as a proposal with a score of 1 or a proposal with a score of 3. We can easily see that from our own experience or from the example proposals posted at http://funding.niaid.nih.gov/researchfunding/grant/pages/appsamples.aspx In those examples, some reviewers gave a 3 in the initial discussion but gave a 1.x in the final. In reality, things can go in the other direction.

    Third, I agree with most people here. Prof. Eric Murphy gave me an idea. Why don’t we set a payline of 25% and have a coin toss for proposals within the payline? If the actual payline is 8%, at least one third of good proposals will be funded. NIH can give $1 to each of the proposals that lost the coin toss. At least it is a token to say to the PI, his/her university, the general public, and even Congress that this proposal is a good proposal but NIH can only fund it with $1. This sounds like a joke, but at least it lets the general public know how a low budget is going to affect the scientific community and research. In addition, junior investigators such as me who may want to change their careers can walk away with those tokens.

  19. As someone who has reviewed for NIH for 10 years, my experience has been that the critical factor that determines scores (and ultimately funding) is the reputation of the investigators. New investigators have very little chance, especially if they don’t come from the shop of a well-established team.

    I endorse George W’s suggestion (above): If NIH really cares about quality (and, more importantly, equity), the only solution is to review grants anonymously. I understand that this makes it difficult to evaluate the feasibility of the proposed work (how will reviewers know, for example, whether it is possible to recruit 20 persons per day for a clinical trial if the clinic is not described — a requirement to mask the identity of the investigators?).

    Let me propose a solution: allow investigators to choose to submit through either the Anonymous track or the (current) Non-anonymous track. In the Anonymous track, they take out all potentially identifiable information (just as we do when we submit to a blind-review process for journal articles), and their proposal gets reviewed purely on scientific merit.

    Finally, let me also suggest that NIH provide a special designation for the top 20% applications that do not get funded: M20 (for meritorious, 20%), M10 (for meritorious, 10%), etc. that the investigator gets to use in his/her vitae. This would say to the outside world that the investigator submitted something that was scientifically solid but was not funded simply because NIH did not have enough funds. This would be a small reward (akin to having a publication) so that all the work that goes into submissions is not completely wasted.

  20. If there is less money, the normal thing to do is to reduce the size of each grant, not the number of grants. The logic that reducing the number of grants will bring more quality by providing abundant funding to scientists at the top is totally flawed, as suggested more or less openly in the previous comments. When it comes to survival, things other than quality tend to prevail: who knows whom, and what ethnic group/minority/university/network the applicant is a part of.

    In addition, the granting system in the US needs a complete overhaul. I believe a significant part (around one half) of the score should be the average impact factor of the applicant's previous publications. That is because international reviewers, who typically review submissions for most journals, are more likely to be independent observers. This method is applied in some European countries with very good results. It should also significantly reduce the number of mediocre publications, since such publications would lower the individual's average impact factor, and people would avoid submitting weak papers.

    I am part of the wave of scientists with a solid education who were lured into the US from Eastern Europe, Russia, and China over the past two decades. I perceived the US as a place where one could reach full potential and which also strongly affirmed equal opportunity. Unfortunately, that was only a smoke screen, and very few people from these countries have "succeeded," because opportunities are far from equal. For most, the opportunity that was offered was to work for "established" investigators and live endless years on a subsistence income. As a graduate student at a state university in the US, I published a paper in a top journal, based on my own ideas and against the advice of my mentor, single-handedly beating two major labs from Harvard and Columbia to the finish line in the process. The result: my mentor got promoted to full professor, and I managed to get a post-doc position in a second-tier lab, because I was not part of any kind of network. I managed to advance to a non-tenured faculty position, but could not get a grant although I published only in very good journals, so I had to keep working for an "established" PI who got most of the benefits (financial and professional) from my work and ideas, and who wrote his own grants, which were approved, based on my results and publications.

    I don't think this "capitalist" mode of production is going to work in science. A lot of the "established" investigators followed this career because it was an opportunity for a high-paying job, not because they were dedicated to science, and the result is that, despite large resources, the US is falling behind. I remember as a graduate student being shocked when I realized that my biochemistry professor, a senior person with a full professorship, had no clue how to correctly write the chemical formulas of the amino acids he was talking about!

    I believe that each PI should have no more than three people in his/her lab paid for by their grant. Any extra members of the lab should have their own fellowship or grant. This would encourage PIs to be good mentors as well, and not to promote only their own interests while disregarding the interests of the people who work for them. A smaller number of people in the lab should encourage PIs to work physically in the lab themselves, not only act as managers/directors, and should therefore increase the quality of work. The argument that PIs have a lot of other obligations and don't have time for experimental work is nonsense. I have been doing this for 20 years: teaching classes, conceptually designing projects, testing them with experimental work, and writing papers. All the work a PI does, but without the pay, because the pay and all the benefits went to the PI whose name was on the grant. Not really fair, is it?

    Granted, it is not easy to do all this work personally, but it will only weed out the people who are in this business out of opportunism rather than dedication. We need more democracy (a lot more) in the way funds are distributed. Right now, the funding system looks like the rest of the US economy: a few banks at the top keeping all the money and benefits, while everyone else struggles. Over time, this is going to backfire, because people already avoid going into science. I am not sure why a lot of US government agencies put a lot of effort into promoting science in high schools when they know they don't have the money to pay for it. This advertising is only going to destroy the lives of the people listening to their empty promises. Those empty promises were able to fool me and my generation, but they will not fool the generation now in college, who have access to a lot of Internet information that was not available when I chose my career. That is why the "big boss" system in US science, where a lot of people work for subsistence wages for someone not necessarily more qualified, but certainly more connected, needs to stop. PIs should be rewarded for their own merits, not for the merits of the people working under them, and that is why the size of grants should be reduced, not their number.

  21. I want to second comments by SK Dey.

    In addition, Institutes continuously keep changing policies and apply them immediately. As an example, NCI changed its funding policies in Jan 2011 with respect to funding of R01s: grants up to the 7th percentile will be funded, and others will be selected from the 8th-15th percentile range for funding consideration. Shouldn't this be directed towards future submissions, i.e., those who submit grants after Jan 2011, and not those who applied in the previous cycle and are awaiting their review results? Some might try to improve their applications more if they knew of the change in policies beforehand. This seems unfair as a process and needs to be corrected.

  22. I wonder whether the statistics are skewed towards an artificially high success rate because they lump together regular grants with the ones that are custom-made for very small categories with not much competition (Pioneer, etc.). I was surprised by the high success rates of the "OD" grants. If funds are drawn away from open competition, then real paylines worsen even if the budget does not decline, while the statistics do not look too awful.

  23. Pingback: J Unf Gr Prop | DrugMonkey

  24. Considering the pathetic situation of NIH funding, getting an NIH grant is truly a lottery. It is too random... too arbitrary. Three of my grants that were scored the first time (or the first two times) were not discussed after all flaws were removed according to the reviewers' suggestions.
    One strong reason for this "laughable situation" is that the reviewers are new every time, so they have new opportunities to find flaws. Human perfection cannot be absolute, and neither can a grant application. Some program officers neither pick up the phone nor return calls.
    Is there anyone who can help???

  25. Many of the problems addressed in this thread would be solved by automatically calculating the final priority score of an application as the sum of the actual scores assigned to each criterion. If such a system were used, the study section should also be given the option of weighting the various criteria in any way they agree on, as long as the same weights were applied to every application. Can you imagine if our nation's educational system used a grading system where the grading teacher could ignore the sum of the right answers a student got on each section of an exam?
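
A minimal sketch of the scheme proposed in this comment (this is illustrative, not an NIH system; the criterion names come from the post above, but the weights and sample scores are made up):

```python
# Sketch of the proposal above: the final priority score is a weighted
# combination of the five scored criteria, with the same weights applied
# to every application the study section reviews.

CRITERIA = ["significance", "investigator", "innovation", "approach", "environment"]

def priority_score(scores, weights=None):
    """Combine per-criterion scores (1 = best, 9 = worst) into one number,
    normalized so the result stays on the familiar 1-9 scale."""
    if weights is None:
        weights = {c: 1.0 for c in CRITERIA}  # plain average by default
    total = sum(weights[c] for c in CRITERIA)
    return sum(scores[c] * weights[c] for c in CRITERIA) / total

app = {"significance": 2, "investigator": 3, "innovation": 2,
       "approach": 4, "environment": 1}
print(priority_score(app))  # -> 2.4, the plain average of the five scores
```

Under such a scheme the overall score is fully determined by the criterion scores, which is exactly the property the commenter is asking for.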

  26. For new/early applicants: from my experience, NIH seems to put more emphasis on the applicant's stature (which institution, who mentored the applicant, home-run papers, etc.) than on the application's scientific merit. I tried for seven years and was completely unsuccessful in getting any of my grants even scored or discussed. Of course, I could be a terrible scientist with horrendous ideas to test! I had one particularly bad experience with the GVE study section, whereby my R21 application, as usual, went unscored, but another investigator with essentially the same idea (though less biologically or clinically relevant) got an R01 funded. Same cycle! One difference: I was from a tier-II school and the other investigator was from the Ivy League; of course, he was also trained by NAS members, and I am a Ph.D. holder from a third-world country (an unknown entity challenging established paradigms!). This experience just broke me.
    The point is, like everything else, NIH funding has been reduced to a beauty contest in which the scientific merit of the proposal is set aside. The result is there for all to see, whether we appreciate it or not: everyone seems to be largely doing the same kind of science.

    Can things be changed? Of course. But do we have the will to do it, without discomforting the powers that be? Can we make the review process transparent, perhaps interactive (as some journals have now done), so that reviewers can't get away with declaring an application "fundamentally flawed" without providing any reasoning for it?

    I now teach full-time, not pretending any more that I will ever write a successful application one day. And perhaps therein lies an option for those who tried but failed, or those wondering what to do after a Ph.D./postdoc: Teach!! Passionately. Perhaps the next generation(s) will be more enlightened and better prepared to deal with the whims of funding decision-making.

  27. I completely agree with Sam H. The question of why paylines are 2-3-fold different from "success rates" has not been answered in this article.
    If institutes end up funding most grants from the "wrong" side of the payline, based on "programmatic" or program officer preference, that is a major blow to the review process, which, with all its flaws, is still far more democratic than having the "buddy" network decide who gets funding.

  28. I strongly second two of the ideas here and encourage the NIH extramural program to adopt them.
    1. Decrease the size of awards to spread the funding further.
    2. NIH provide a special designation for the top 20% applications that do not get funded: M20 (for meritorious, 20%), M10 (for meritorious, 10%), etc. that the investigator gets to use in his/her vitae.
    Finally, I agree the system cannot work under such financial restrictions.

  29. Would be great if you had a “like” button here!

    I agree 100% w/MBR's "seconding" of the suggestions listed here:

    1. Decrease the size of awards to spread the funding further.
    2. NIH provide a special designation for the top 20% applications that do not get funded: M20 (for meritorious, 20%), M10 (for meritorious, 10%), etc. that the investigator gets to use in his/her vitae.

    I have a colleague who has been in funding limbo for over a year with her NCI R21, which received an 8th percentile on its first submission in JUNE 2010! Still no word on whether that will be funded, and she may have to resubmit, with an 8th percentile on a smaller grant mechanism. A clear illustration of how broken the system is. We can't even get NCI PDs to tell us what will happen with R21s.

  30. NIH funding is a classic 'tragedy of the commons' problem. There are lots of great ideas out there. There is not enough money to fund them all. The review process is imperfect, so there is a feeling that at some level it is a 'lottery'. So the logical response is to increase the number of our grant proposals, repackaging our basic idea slightly each time, so that at least one of them 'makes it' (and damn the consequences for other researchers and for the peer review system, which is straining under the volume). This just makes the payline worse (the same number of proposals will get funded given a constant budget, but the denominator increases), and hence we all get on a treadmill whose speed increases over time. The ONLY rational solution is for everyone (and I mean everyone) to write half as many proposals. If we all do that, the payline will double immediately. I doubt that science will be held back. But of course individual self-interest will kick in, and we will very soon be back to square one, as the higher payline will encourage more people to write proposals… Maybe NIH should limit the number of PROPOSALS that an investigator can submit in any 2-3 year period, rather than limit the number of funded projects an investigator can hold? Or perhaps there really are too many researchers entering the rat race of NIH funding and we need to slow down production…
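
The denominator arithmetic behind the "payline will double" claim in this comment can be checked with made-up numbers (a budget funding exactly 1,000 awards is an assumption for illustration only):

```python
# With a constant budget, the number of funded awards stays fixed, so the
# fraction funded moves inversely with the number of applications submitted.
funded = 1000                       # awards the budget supports (hypothetical)

for applications in (10000, 5000):  # everyone writes half as many proposals
    print(f"{applications} applications -> {funded / applications:.0%} funded")
# 10000 applications -> 10% funded
# 5000 applications -> 20% funded
```

Halving submissions exactly doubles the fraction funded, which is the commenter's point, as is the follow-on observation that the improved odds would then attract more submissions.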

  31. Recently in a departmental PC in my Biology department, the issue of NSF CAREER awards vs. NIH R01 awards arose. The NSF folks vehemently believed that receiving an NSF CAREER award is extremely difficult and thus extraordinary, much more so than getting an NIH R01. Those who receive (or at least pursue) NIH R01 funding believe that it is more difficult to get an NIH R01, especially the first renewal, which comes at about the same career stage at which most CAREER awards are made (after the PI has had their first NSF award, and pre-tenure). So I found the data for NSF CAREER awards, and success rates in Biology are around 12-13%. I would like a comparable number for NIH awards and still can't really find the answer in the info above from Rock Talk. I know it is vastly harder for the first NIH R01 renewal because the competition is against everyone in the field (across all levels of experience), while the competition for CAREER awards is only with your peers (pre-tenure). I'd like some hard numbers to present to my PC, which makes important decisions, like merit, based on things like CAREER awards. Sadly, many are under the impression that they are much harder to get than R01s. Any help/data would be much appreciated.

  32. Me again. The success rate cannot really reflect the number of applications funded divided by the number of applications received. The success rate of many institutes is around 20-30% (http://www.einstein.yu.edu/administration/grant-support/nih-paylines.aspx). There is just no way that one out of five to nearly one out of three proposals is being funded. I want to know that number: the number of R01 applications funded divided by the number of applications received. That should not be a hard number to get.

  33. Isn’t the success rate simply close to double the payline, which is near the percentile cutoff? A percentile is given ONLY for those proposals that reach study section, right? And yet half the proposals received are triaged, and never scored, right? So a proposal that gets a 5th percentile in the study section is really at the 2.5th percentile of all proposals submitted (when including those 50% that were triaged).

  34. Sorry, my previous post doesn't make sense even to me now. It should work the other way with the numbers.

    I am still not getting anywhere near how success rates could be around 30% for some institutes. And the explanations offered include that some carryover of funded grants from the previous year contributes to the success rates. Surely, then, some of the current year's successes get carried over to the year after, so it ought to even out.

    • Hello — if you haven’t visited NIH RePORT yet, you might find the NIH RePORT Success Rate page helpful for current R01 success rates (and more information on NIH success rates in general). This page specifically lists several research project grant success rates, including the R01, for fiscal years 1997 through 2011. Hope that helps!

  35. Thanks, Rock Talk, that link was very helpful. I'm still trying to wrap my head around the success rates (12-30%), which indicate that as many as 1 in 3 or 4 grants gets funded, despite what seems to be extraordinarily extreme competition for (very limited) funds. But thanks!

  36. I have a related question. Which of the two parameters, the success rate or the payline, is a better measure of the level of competition, namely, the chance of a grant to get funded?

    If I understand it correctly, the success rate of an IC is useless for estimating the chance of a grant getting funded by that IC. Assume that IC#1 and IC#2 can each fund only 10 grants, and IC#1 receives 50 applications while IC#2 receives 100. The success rates will then be 20% and 10% for IC#1 and IC#2, respectively. But if the 50 applications at IC#1 mostly have good percentile scores, while at IC#2 it is just the opposite, then for a grant with a certain score the chance of getting funded is better at IC#2.
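
The hypothetical above works out as follows (all figures are the commenter's illustration, not real IC data):

```python
# Two hypothetical ICs, each with a budget for exactly 10 awards.
applications = {"IC#1": 50, "IC#2": 100}
funded = 10

for ic, received in applications.items():
    print(f"{ic}: success rate = {funded / received:.0%}")
# IC#1: success rate = 20%
# IC#2: success rate = 10%
# The success rate alone says nothing about where a given percentile
# score clears the payline: if IC#1's applicant pool is stronger, a
# mid-range score may still fare better at the "10%" institute.
```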

    If I am correct, then the payline is the critical measure. But if so, why does the RePORT page display the success rate but not the payline?

    • Some ICs do not publish paylines, or have flexible or variable paylines for distinct programs, depending on their programmatic priorities. (Links to specific IC strategies are provided here.) As such, there is no way to provide payline information across NIH ICs as a means of comparison, whereas success rates can be calculated in a standardized fashion. When applying for NIH funding, we recommend that applicants use all available information, especially considering the IC's research priorities and talking with program officials.
