Open Mike

Helping connect you with the NIH perspective, and helping connect us with yours

Research Commitment Index: A New Tool for Describing Grant Support

On this blog we previously discussed ways to measure the value returned from research funding. The “PQRST” approach (for Productivity, Quality, Reproducibility, Sharing, and Translation) starts with productivity, which the authors define using measures such as the proportion of published scientific work resulting from a research project and highly cited works within a research field.

But these factors cannot be considered in isolation. Productivity, most broadly defined, is a measure of output considered in relation to measures of input. What other inputs might we consider? Some reports have focused on money (total NIH funding received), others on personnel. All found evidence of diminishing returns with increasing input: among NIGMS grantees receiving more grant dollars, among Canadian researchers receiving additional grant dollars, and among UK biologists overseeing more personnel in their laboratories.

It might be tempting to focus on money, but as some thought leaders have noted, differing areas of research inherently incur differing levels of cost. Clinical trials, epidemiological cohort studies, and research involving large animal models are, by their very nature, expensive. If we were to focus solely on money, we might inadvertently underestimate the value of certain highly worthwhile investments.

We could instead focus on the number of grants – does an investigator hold one grant, two, or more? One recent report noted that more established NIH-supported investigators tend to hold a greater number of grants. But this measure is problematic, because not all grants are the same. There are differences between R01s, R03s, R21s, and P01s that go beyond the average dollar amount each type of award receives.

Several of my colleagues and I, led by NIGMS director Jon Lorsch – chair of the NIH Working Group on Policies to Promote Efficiency and Stability of Funding – conceived of a “Research Commitment Index,” or “RCI.” We focus on the grant activity code (R01, R21, P01, etc.) and ask what kind of personal commitment it entails for the investigator(s). We started with the most common type of award, the R01, and assigned it an RCI value of 7 points. Then, in consultation with our NIH colleagues, we assigned RCI values to other activity codes: fewer points for R03 and R21 grants, more points for P01 grants.

Table 1 shows the RCI point values assigned to a PI for each activity code, depending on whether the grant has a single PI or multiple PIs.

Table 1:

| Activity code | Single-PI points | Multiple-PI points (per PI) |
| --- | --- | --- |
| P50, P41, U54, UM1, UM2 | 11 | 10 |
| Subprojects under multi-component awards | 6 | 6 |
| R01, R33, R35, R37, R56, RC4, RF1, RL1, P01, P42, RM1, UC4, UF1, UH3, U01, U19, DP1, DP2, DP3, DP4 | 7 | 6 |
| R00, R21, R34, R55, RC1, RC2, RL2, RL9, UG3, UH2, U34, DP5 | 5 | 4 |
| R03, R24, P30, UC7 | 4 | 3 |
| R25, T32, T35, T15 | 2 | 1 |
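For readers who want to apply these values programmatically, here is a minimal Python sketch of a Table 1 lookup. The activity-code groupings and point values come directly from Table 1; the function name, the subproject flag, and the error handling for unlisted activity codes are illustrative assumptions.

```python
# Minimal sketch of an RCI lookup based on Table 1 (point values verbatim
# from the table; everything else is an illustrative assumption).

RCI_POINTS = {
    # activity codes: (single-PI points, multiple-PI points per PI)
    ("P50", "P41", "U54", "UM1", "UM2"): (11, 10),
    ("R01", "R33", "R35", "R37", "R56", "RC4", "RF1", "RL1", "P01", "P42",
     "RM1", "UC4", "UF1", "UH3", "U01", "U19", "DP1", "DP2", "DP3", "DP4"): (7, 6),
    ("R00", "R21", "R34", "R55", "RC1", "RC2", "RL2", "RL9", "UG3", "UH2",
     "U34", "DP5"): (5, 4),
    ("R03", "R24", "P30", "UC7"): (4, 3),
    ("R25", "T32", "T35", "T15"): (2, 1),
}

SUBPROJECT_POINTS = (6, 6)  # subprojects under multi-component awards


def rci_points(activity_code, multi_pi=False, subproject=False):
    """Return the RCI points one PI accrues for a single award."""
    if subproject:
        pts = SUBPROJECT_POINTS
    else:
        for codes, values in RCI_POINTS.items():
            if activity_code in codes:
                pts = values
                break
        else:
            # Table 1 does not list this code; how NIH handled unlisted
            # codes is not stated in the post.
            raise ValueError(f"no RCI value listed for {activity_code}")
    return pts[1] if multi_pi else pts[0]


assert rci_points("R01") == 7               # one single-PI R01
assert rci_points("R01", multi_pi=True) == 6
assert rci_points("P50") == 11
```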

Figure 1 shows a histogram of the FY 2015 distribution of RCI among NIH-supported principal investigators. The most common value is 7 (corresponding to one R01), followed by 6 (corresponding to one multi-PI R01). There are smaller peaks around 14 (corresponding to two R01s) and 21 (corresponding to three R01s).

Figure 1:


Figure 2 uses a box-plot format to show the same data, with the mean indicated by the larger dot, and the median indicated by the horizontal line. The mean of 10.26 is higher than the median of 7, reflecting a skewed distribution.

Figure 2:


From 1990 through 2015 the median value of RCI remained unchanged at 7 – the equivalent of one R01. But, as shown in Figure 3, the mean value changed, increasing dramatically as the NIH budget began to grow just before the doubling.

Figure 3:


Figure 4 shows the association between RCI and the age of PIs; the curves are spline smoothers. In 1990, a PI would typically have an RCI of slightly over 8 (equivalent to slightly more than one R01) irrespective of age. In 2015, grant support, as measured by RCI, increased with age.

Figure 4:


We now turn to the association of input, as measured by RCI, with output, as measured by the weighted Relative Citation Ratio (RCR). We focus on 71,493 unique principal investigators who received NIH research project grant (RPG) funding between 1996 and 2014. We focus on RPGs because these are the types of grants that would be expected to yield publications, and because the principal investigator of another type of grant (e.g., a center) will not necessarily be an author on all of the papers that come out of it. For each NIH RPG PI, we calculate their total RCI points for each year and divide by the total number of years of support. Thus, if a PI held one R01 for 5 years, their RPG RCI per year would be 7 [(7 points × 5 years) / 5 years]. If a PI held two R01s for 5 years (say, 2000–2004) and one R21 for the next two years (2005 and 2006), their RPG RCI per year would be 11.43 [((14 points × 5 years) + (5 points × 2 years)) / 7 years].
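To make the arithmetic concrete, here is a small sketch that reproduces the two worked examples above. The function name and the one-number-per-year data layout are assumptions made for illustration; this is not NIH code.

```python
# Sketch of the per-year RPG RCI calculation: total RCI points summed over
# all funded years, divided by the number of years with RPG support.

def rpg_rci_per_year(points_by_year):
    funded = [p for p in points_by_year if p > 0]
    return sum(funded) / len(funded)

# One R01 (7 points) held for 5 years: (7 * 5) / 5 = 7.0
print(rpg_rci_per_year([7] * 5))                       # 7.0

# Two R01s (14 points) for 5 years, then one R21 (5 points) for 2 years:
# ((14 * 5) + (5 * 2)) / 7 = 80 / 7 ≈ 11.43
print(round(rpg_rci_per_year([14] * 5 + [5] * 2), 2))  # 11.43
```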

Figure 5 shows the association of grant support, as measured by RPG RCI per year, with productivity, as assessed by the weighted Relative Citation Ratio per year. The curve is a spline smoother. Consistent with prior reports, we see strong evidence of diminishing returns.

Figure 5:

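For readers curious how a curve like this might be generated, below is an illustrative sketch that fits a spline smoother to synthetic data. Only the general method, smoothing annual weighted RCR against annual RCI, comes from the post; the data generator, smoothing parameter, and plot settings are all assumptions.

```python
# Illustrative only: fit a spline smoother to synthetic RCI/RCR data.
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
rci = np.sort(rng.uniform(2, 35, 2000))      # annual RPG RCI (synthetic)
# Synthetic concave relationship ("diminishing returns") with noise.
rcr = 0.4 * rci ** 0.7 * rng.lognormal(0.0, 0.5, rci.size)

smoother = UnivariateSpline(rci, rcr, s=2 * rci.size)  # smoothing spline

grid = np.linspace(rci.min(), rci.max(), 200)
plt.scatter(rci, rcr, s=4, alpha=0.2)
plt.plot(grid, smoother(grid), lw=2)
plt.xlabel("Annual RPG RCI")
plt.ylabel("Annual weighted RCR")
plt.show()
```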

A limitation of our analysis is that we focus solely on NIH funding. As a sensitivity test, we analyzed data from the Howard Hughes Medical Institute (HHMI) website and identified 328 unique investigators who received both NIH RPG funding and HHMI funding between 1996 and 2014. Because HHMI support is a substantial amount of long-term, person-based funding, these investigators, having cleared the selectivity bars of both NIH and HHMI, would be expected to be highly productive. As expected, HHMI investigators had more NIH funding (measured as total RCI points, annual RCI, and number of years with NIH funding) and were more productive (more NIH-funded publications, higher weighted RCR, higher annual RCR, and higher mean RCR).

Figure 6 shows annual weighted RCR by annual RCI, stratified by whether the PI also received HHMI funding.  As expected, HHMI investigators have higher annual weighted RCR for any given RCI, but we see the same pattern of diminishing returns.

Figure 6:


Putting these observations together, we can say:

  • We have constructed a measure of grant support, which we call the “Research Commitment Index,” that goes beyond simple measures of funding and numbers of grants. Focusing on funding amount alone is problematic because it may lead us to underestimate the productivity of certain types of worthwhile research that are inherently more expensive; focusing on grant numbers alone is problematic because different grant activities entail different levels of intellectual commitment.
  • The RCI is distributed in a skewed manner, but it wasn’t always so. The degree of skewness (as reflected in the difference between mean and median values) increased substantially in the 1990s, coincident with the NIH budget doubling.
  • Grant support, as assessed by the RCI, increases with age, and this association is stronger now than it was 25 years ago.
  • If we use the RCI as a measure of grant support and intellectual commitment, we again see strong evidence of diminishing returns: as grant support (or commitment) increases, productivity increases, but to a lesser degree.
  • These findings, along with those of others, suggest that it might be possible for NIH to fund more investigators with a fixed sum of money and without hurting overall productivity.

At this point, we see the Research Commitment Index as a work in progress and, like the Relative Citation Ratio, as a potentially useful research tool to help us better understand, in a data-driven way, how well the NIH funding process works. We look forward to hearing your thoughts as we seek to ensure that the NIH will excel as a science funding agency that manages by results.

I am grateful to my colleagues in the OER Statistical Analysis and Reporting Branch, Cindy Danielson and Brian Haugen in NIH OER, and my colleagues on the NIH Working Group on Policies to Promote Efficiency and Stability of Funding, for their help with these analyses.


18 thoughts on “Research Commitment Index: A New Tool for Describing Grant Support”

  1. I applaud these kinds of analyses, which are not easy to do. Good decision making can only arise from good facts. Thanks for trying to get the facts straight.

  2. The data seem to contradict the author’s conclusions. The data are distorted by using log or semi-log plots and differential scaling of the X and Y axes in Figures 5 and 6. Contrary to the claim of “diminishing returns,” it appears that an investigator with an RCI of 7 has a weighted citation ratio of 1. With a doubling of RCI to 14, the citation ratio increases about 3-fold to 3 (hard to tell the exact amount). So there are actually increasing returns, not decreasing. Am I missing something here? If this is true, then an investigator who is awarded a second grant uses that money more productively than if the money were awarded to a different investigator holding a single grant.

  3. This is an interesting approach. However, I worry that exploratory work that generates new hypotheses may take longer (and therefore be seen as riskier) than work that continues defining important mechanisms in previously explored processes. Hence, the approach you outline may end up reinforcing the culture of risk aversion in NIH-supported research.

  4. This is potentially very significant. It does indicate that rewarding a few investigators with large amounts of funding is not necessarily the best idea. More investigators can be funded while maintaining or improving overall productivity per grant. While a hard cap, like “no more than x grants per PI,” is probably not wise, some sort of penalty above the equivalent of 2 R01s (14 RCI points) per PI may be useful.

    • Agree 100%. A bonus could be added directly to the percentile score of currently unfunded investigators if they have appropriate institutional support and access to facilities. A smaller bonus could even be added for those with one R01, but no bonus for those with two R01s or the equivalent.

  5. The log scale on the y-axis could be misleading in Figures 5 and 6. It should have been simple enough to present linear scales, which would give a better readout of productivity vs. RCI. Please consider adding them here.

  6. Log scale use is a clear way to benefit one side of the argument. The benefit of more grants is minimized as presented, and the likely benefit in the 2-R01 range is hidden. Perhaps more importantly, output is limited to just publications, treating all as equal in both impact and even data quality and quantity. If so, just publish short papers with 2-3 simple figures rather than more complete stories of greater use and impact; reviewers are less critical and more forgiving, too. Your output would go up when in real terms quality goes down. A Science paper can have 2-3 fold more data in it, including the supplement, than the average paper in even a top-line journal (e.g., JBC), and twice that for bottom-50% journals. What about papers that open new fields or are truly paradigm-shifting? When grants get bigger, as with a P01, the absolute number of papers may suffer, but the number of authors per publication may increase, as may the synergy and overall progress in the field. Also, is the number of papers the only output, or can getting to clinical testing be an even more noteworthy goal? We need to be careful not to simply hunt for readouts that make the feel-good case to the masses. We need a balance between unbridled capitalism, where the young may suffer, and simplistic socialism, where ambitious “real deal” achievers are shackled and deterred from timely and truly meaningful advancement. So much more needs to be considered, and hopefully it will be done thoroughly before any “hasty executive order” – a little humor 🙂

  7. Comparisons between 1990 and 2015 in Figures 3 and 4 need to be interpreted with caution. The nature of investigator employment has changed dramatically between these points in time. “Hard money” positions supported by research institutions were far more common in 1990 than they are now, which effectively subsidized the NIH investment in productivity. In 2015, successful researchers, by and large, must obtain a much higher proportion of their salaries from external funding. Thus it is not at all surprising to see greater RCI with increasing age and career advancement. With these changes in how investigators are funded, it is not clear that flattening the RCI-by-age curve is feasible while maintaining the workforce and productivity.

  8. It is important to consider that institutional funding (e.g., from research universities) is a significant, but relatively fixed, per-PI cost of the total research enterprise. If NIH funds more PIs, this cost will increase significantly. For example, if NIH funds twice as many PIs with the same amount of money, the cost to the institutions employing those PIs will roughly double. It is not clear that the employing institutions will be willing or able to shoulder this cost.

  9. I have to agree with others: simply post Figure 5 with a linear scale on both axes. Without that, it is hard to tell exactly what’s happening. One can guesstimate that the relationship is roughly linear, in which case one investigator with 3 R01s produces about the same as 3 investigators with 1 R01 each. If that is the case, the conclusion that “it might be possible for NIH to fund more investigators with a fixed sum of money and without hurting overall productivity” is supported.

  10. Thank you for this analysis and for introducing these interesting metrics. Would it be possible to do the analysis for clinical versus basic research? I think lumping both together may not give an accurate picture.

  11. Just want to echo the above comments that using log scale on the y-axis in Figures 5 and 6 is disingenuous. A logarithmic shape of both curves with only the y-axis on log scale suggests a roughly linear relationship with a linear scale on both axes.

  12. If these data are plotted on a linear scale, the relation appears to be quite linear; the “diminishing” effect seems to vanish, at least in the RCI range of 6-20. Does the dashed line suggest that the weighted citation ratio would have to be 1 at an RCI of 6 but >100 at an RCI of 20 to avoid diminishing returns on investment? Please clarify!

  13. Were any RCI points assigned to Activity Codes that are not listed in Table 1? I am looking into replicating something like this for my institution, and am interested in whether you ignored Activity Codes outside of those listed or if you simply assigned all others an average RCI value.
