Thanks to everyone who has commented and provided suggestions on ways for NIH to manage in fiscally challenging times. I appreciate all the input. We’ve received more than 150 responses. If you haven’t had a chance to comment yet, I encourage you to do so. You can leave a comment on the blog or send it to [email protected].
In one of the comment threads, there was discussion about the percentage of investigator-initiated research versus targeted research. We provided links to graphs in the NIH Data Book showing that, for research project grants and R01 grants, awards made in response to targeted announcements make up less than 20% of the total. Then we received the following comment.
Apart from the percentage of awards, how do the dollars play out? What is the percentage of dollars spent on targeted research (including contracts, centers, etc.) versus investigator-initiated grants? This may be more informative.
Good question. This information is also available in the Data Book. In 2010, targeted grants represented 23% of research project grant funding and 12% of R01 funding. We also have information on the allocation of research funding by mechanism, which will give you a sense of the percent of funding that goes to research project grants, center grants, research career awards, small business grants, and other research.
Your blog continues to be a wonderful source of information and insight, and we really appreciate your willingness to address the resource allocation issue. We have been looking at this for two years now and had generated some back-of-the-envelope calculations. The wealth of information you have provided allows us to peer into the envelope. Thank you very much.
We have circulated your slides and will be discussing them at our Public Affairs Committee meeting on Wednesday, November 2. Thank you for providing the detailed information for our consideration.
I propose a graduated funding scheme for peer-reviewed grants. Currently, an application falls on one side or the other of a payline, and an all-or-nothing situation results. This is stressful for applicants and for reviewers, as so much can depend on minor changes in a score. An alternative would be that the better the score, the more money is awarded. For example, the top X% of grants would receive 100% of budget, the next X% would be reduced in time (e.g., from 5 to 4 years) with a per-year reduction, and so on. This would increase the number of applications receiving at least some funding and create a correlation between peer-review scores and funding level.
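The tiered scheme in this comment can be sketched in code. The percentile cutoffs and reduction amounts below are purely illustrative assumptions for the sake of the example, not NIH policy or the commenter’s exact numbers.

```python
# Minimal sketch of a graduated funding scheme: better (lower) percentile
# scores receive fuller funding; weaker scores get reduced budgets and
# shorter project periods. All tier boundaries here are assumptions.

def graduated_award(percentile, full_budget, full_years=5):
    """Map a peer-review percentile to an illustrative (budget, years) award."""
    if percentile <= 10:             # top tier: full budget, full duration
        return full_budget, full_years
    elif percentile <= 20:           # next tier: full budget, one year fewer
        return full_budget, full_years - 1
    elif percentile <= 30:           # reduced per-year budget, shorter period
        return full_budget * 0.80, full_years - 1
    elif percentile <= 40:           # further reduction
        return full_budget * 0.60, full_years - 2
    else:                            # beyond the last tier: no award
        return 0.0, 0
```

For example, with a $250,000 modular budget, a 15th-percentile application would receive the full budget for 4 years rather than 5, while a 35th-percentile application would receive 60% of budget for 3 years, rather than nothing at all.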
Cut indirect costs to institutions by 50% of what has been negotiated. That should free up a lot of money. The institutions will have to do more with less–just like the rest of John Q. Public. If they have to let faculty go, then there it is. Right now the institutions with high indirect costs are “addicted” to the federal trough. Hunger focuses the mind.
I fully agree with the suggestion to limit the indirect cost rate to a flat rate for every university. Alternatively, universities should fully cover PIs’ salaries, or at least cover a higher percentage of them, so that salaries do not come from grants (similar to Europe and China).
Another possibility is to taper off the indirect cost rate as an institution gets more and more grants. There are real costs associated with administering grants, but they do not scale linearly with the number of grants. Once a certain infrastructure and staff are in place, each additional grant is not that much more expensive. Institutions with a small number of grants really do need every dollar, but mega-institutions do not need 50-60% of the award amount for their 1000th grant. Start at 50%, and taper it down to 25%.
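The taper described above can be made concrete with a simple schedule. The thresholds at which the taper begins and ends are illustrative assumptions; only the 50% start and 25% floor come from the comment itself.

```python
# Sketch of a tapered indirect-cost rate: the negotiated rate applies to an
# institution's first grants, then declines linearly to a floor as the grant
# count grows. The taper_start/taper_end thresholds are assumptions.

def tapered_indirect_rate(grant_index, start_rate=0.50, floor_rate=0.25,
                          taper_start=50, taper_end=500):
    """Indirect cost rate applied to the institution's Nth active grant (1-based)."""
    if grant_index <= taper_start:       # small portfolios keep the full rate
        return start_rate
    if grant_index >= taper_end:         # mega-institutions hit the floor
        return floor_rate
    # Linear decline between the two thresholds.
    frac = (grant_index - taper_start) / (taper_end - taper_start)
    return start_rate - frac * (start_rate - floor_rate)
```

Under these assumed thresholds, an institution’s 10th grant would still recover 50%, while its 1000th would recover only 25%.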
Never mind 50% overheads; try 75% and above, which we are told still does not cover institutional costs.
One approach would be to make each successive grant from an individual PI increasingly difficult to get funded (i.e. a “sliding scale” payline). My own observation at several institutions has been that the more money labs have, the more they waste. In the same vein, when budget cuts are required for grants that have already been funded, there should be smaller cuts for the first grant for a PI, greater cuts for the second grant for the same PI, etc. Considerable savings could also be realized by contracting out “factory science” (e.g. genome projects) to the most efficient provider (whether commercial or academic), rather than having multiple academic centers in competition trying to inefficiently reach goals of a purely technical nature. This would allow more funding to be used for innovative data analysis, which is the real bottleneck in the world of boondoggle-omics.
Agree with Joe Lipsick with respect to gradation in cutting funds. However, contracting out factory science may not be practically feasible, though I agree that a lot of funding is wasted on methodological differences that lead to checking and rechecking everyone else’s published work and cause confusion in the scientific community. NIH could monitor this scenario more closely.
Agree in spirit with Joe Lipsick but I would use a different mechanism: rather than change the payline, I’d apply a progressive “tax” to increasing numbers of grants held by a single investigator such that the amount awarded would decline. In turn, the modular amount awarded for one’s first R01 could be restored to a reasonable level. Right now at NINDS a single R01 cannot sustain even a modest lab. This is counterproductive as it compels PIs to submit additional grants.
The data clearly show that the number of RPGs per investigator should be capped at 3. A lot of senior PIs and department chairs “game” the system, in my opinion, by having junior faculty who are dependent on them for space, etc., submit as PIs, when it’s really the senior PI who controls the funds. So, in addition, the NIH should require some kind of resources/effort statement from all collaborators and key personnel. You could also score that automatically, e.g., a maximum number of RPGs across all names connected to the grant.
I completely agree with Dr. T about the need for a resource/effort commitment on the part of the department. We continually have problems with junior faculty who are completely reliant on a senior investigator for space and resources. This hinders their ability to develop independent research programs and creates enormous problems when they come up for review. Instead of the 3 RPG cap that Dr. T proposes, I would suggest capping total funding per investigator at $750,000.
I believe a combination of approaches will be required.
1. Reduce indirect costs. These are far too high, and the current levels have led to the present unsustainable system of funding. Because building costs are included, institutions actually find constructing new buildings to be profitable, but then they have to fill them. Far too many are staffed by people on the “research track.” Institutions have no investment in such people, and if essentially their entire salary must be paid from grants, little remains in the grant for lab workers once that salary is covered.
2. Limit the percentage of an individual’s salary that will be paid by grants to, perhaps, 50%, but no more than 67%. This will increase the number of tenure track investigators.
3. Limit the number of grants an investigator can hold. This will have the benefit of giving research grants to the people who actually do the work and effectively lead the teams in “megalabs” rather than to the figurehead with the big reputation.
4. Limit the size of grants. Of course this will have a disproportionate effect on large clinical trials and epidemiology studies. The value of these over obtaining similar results from “outcomes research” is questionable anyway, so perhaps the time has come to shift funds from these expensive trials with limited results.
Large projects could be funded by consortia of investigators, while each individual still operates under a funding “cap” (e.g. $750,000/yr).
I concur with Hurst, and add that a second or third grant (the maximum) for an investigator should be more difficult to obtain, requiring scores below the automatic payline (perhaps set NIH-wide at 5% or better). The modular budget should be reset to a maximum of $225,000, thus funding more scientists. NIH might consider funding grants for a maximum of 4 years if that provides meaningful additional funds for additional investigators, with rare exceptions for clinical trials, but only when necessary.
I think we should continue the Varmus regime of funding established investigators first and second, and then throw a few scraps out to “new” investigators. We all know that anyone else will never have a good idea again and should be discarded. If we fund this way, we best ensure that we will never be proven wrong.
By the way, real physician investigators (taking care of real patients) are obsolete. No allowances should be made for them. We can do everything with real PhDs and with MD/PhDs who never actually see patients.
The way to promote scientific careers is to support people rather than projects. With low career stability, fewer people take up science as a career, and more drop out, which wastes huge resources. To promote stability and support investigators long term, I suggest that R01 awards be made for 10 conditional years rather than 5 years. There will be a renewal after 5 years, when the investigator submits a progress report and a new plan in a short format. These renewals will not compete with new submissions. Labs that have shown progress and productivity will have a high chance of renewal. A study section will evaluate the progress, and an institute can decide not to renew, say, the bottom 20%, those with the least satisfactory progress. These will go back to the competitive pool.
This system has the massive advantage of showing NIH commitment to investigators, thereby promoting a sense of “career” and stability for those who are productive. Another big advantage is that less time is spent writing grants and revisions. These changes are also separate from, and can be made along with, other changes such as reducing indirect costs, reducing the award amount, capping the number of awards per PI, and so on.
I VERY much agree with the idea of “R01 awards be made for 10 conditional years rather than 5 years. There will be a renewal after 5 years, when the investigator submits a progress report and a new plan in a short format.”
That would be heaven! We could get on with the research rather than spending ever-increasing time just writing grants (and, worse, doing all the different administrative formats required by various funding sources), which in turn makes us all write more: an endless, accelerating treadmill on which we are currently stuck!
Beware complete caps on overheads, though, even if overheads are often too high. Space and administrative costs vary widely around the country, although the “if they build, they will come” approach has contributed to the current mess!
Protect the R01, the engine that keeps our scientific enterprise alive. Look carefully into “set aside” programs for money-saving measures. Some of these programs are awarded funds equivalent to more than 200 R01s every cycle, with very modest results.
It is important to keep good investigators funded, even if they cannot be given large grants. Getting a grant gives investigators credibility in their home institutions and increases the chances that they can obtain other support to help through the tough times. We should do whatever is necessary to keep the percentage of funded grants at a reasonable level (defining a reasonable level will take some thought). That will probably mean reducing the sizes of grants, among other adjustments.
It is pretty clear we have too many investigators fighting over the same pool of money. What goal is served by just keeping them all minimally in the game while waiting for magic grant fairies to invent new monies some unspecified time in the future? Where does this notion that any scientist with a pulse deserves their piece of the pie come from anyway?
We have all reviewed grants from many institutions in this country, and noted that many of the older ones have an indirect cost rate of 100% approved with the Fed. Many other institutions, such as mine, never get a rate over 50%. If an across-the-board indirect cost recovery rate of just under 50% were introduced, I am guessing you could absorb the effects of upcoming cuts.
I don’t think it’s a good idea to cap the number of grants an investigator can have. Why would we want to promote mediocre science at the expense of good science? Also, all the suggestions about flat indirect rates simply won’t work, because it is much more expensive to run institutions in certain locations, such as big cities, than institutions located in small towns. The current indirect rates differ from institution to institution to account for that. A relative reduction across the board might be useful in tough times.
To make the current levels of funding work for the long term, we have to accept the reality that there are too many professional scientists employed relative to the amount of resources available, which partly explains the poor respect scientists get from institutions (research-track and soft-money positions, etc.). As a faculty member at a major institution, I’m dismayed to see young people come to graduate school because they simply didn’t know what else to do. Also, given how difficult it is to get a job these days, many students think graduate school is an easy option. We need to promote science to those who want to be scientists (and are hence likely to be good scientists capable of securing their own funding later on) and discourage those who enter the field because there is nothing better to do, and who hence simply cannot survive when they get their license to practice, as is currently the case for the lowest tier of struggling scientists. There will be exceptions where good scientists run into trouble, and we need some way of identifying that scenario to keep those professionals from being forced out of science. Prior success (progress reports) could play a more significant role in future funding decisions than is currently the case. Interim low level support grants could be increased in number if there is significant evidence of prior success. New investigators with fresh ideas could seriously be given the opportunity to get funded without any preliminary data. Yes, preliminary data are not required right now per NIH policy, but how many applicants actually get funded for just the idea, without the preliminary data?
So, my proposal is to let natural selection do its part. Let the lesser science go unfunded and let those unfunded scientists find alternative careers. If I end up among them, so be it. NIH could even fund programs to help current scientists move into alternative careers, such as academia-to-industry transition workshops. The long-term consequence will be that the quality of research goes up (fewer “garbage” papers published using taxpayer dollars and limited resources; who on this board doesn’t frequently see “garbage” or minimal steps forward published in their field?), scientists will be taken more seriously by their host institutions rather than treated as money-generating machines, as is sometimes the case, and graduate student slots will be limited so that students who aren’t serious about science drop out early or never enter the field. While in the short term there will be a huge outcry among unfunded scientists, ten years from now and going forward I think we’ll achieve a lot more with the money we do have and are likely to have.
My two cents.
Too drastic, although you are on point about what is going on, and I like your point: “Interim low level support grants could be increased in number if there is significant evidence of prior success.”
In the end the problem is not just TOTAL success, but EFFICIENCY.
BOTH should be considered critical in review.
The problem with BIG lab TOTALs is, of course, that with much more money, more good people will come and at least some will produce good papers.
But we also need efficiency: the smaller labs that churn out important papers at a high rate per grant dollar.
In reviewing, I have seen how many of the big labs pile papers onto every grant, irrespective of whether the grant really supported the work, so that each grant APPEARS productive. In reality, something like total impact factor per grant dollar, with a limit on how many grants can “pay” for a given piece of work, could work.
But always watch for yet more creative workarounds!
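The efficiency metric suggested above can be reduced to a simple formula. This sketch deliberately omits the per-paper limit on how many grants can claim a given paper; the function name and inputs are illustrative, not an established NIH measure.

```python
# Toy version of the proposed efficiency metric: summed journal impact
# factor of a lab's papers per grant dollar spent. A small lab producing
# high-impact work cheaply scores higher than a big lab with a large TOTAL.

def lab_efficiency(paper_impact_factors, total_grant_dollars):
    """Total impact factor produced per grant dollar (illustrative only)."""
    if total_grant_dollars <= 0:
        raise ValueError("grant dollars must be positive")
    return sum(paper_impact_factors) / total_grant_dollars
```

On this metric, a lab publishing two papers with impact factors 10 and 5 on a $500,000 grant scores the same as one publishing papers totaling 30 impact-factor points on $1,000,000, which is the kind of comparison the commenter has in mind.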
First, it seems to me that the PIs funded by a single R01 (the one-R01 labs) are the most at risk here. These PIs have limited resources and cannot really compete against big operations that generate a lot of data and publications. If we kill off a large fraction of the one-R01 labs in order to keep funding the multimillion-dollar operations at their current levels, we will lose the generation of scientists that would have been trained in all of those smaller labs.
Second, I am beginning to think that a sliding scale system of funding might be optimal. The grants in the top 5% would be paid at 100% of IRG-approved budget, and others paid at lower levels until the 25th percentile. That would allow highly meritorious projects to receive at least enough money to keep the projects alive.
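The sliding scale above can be expressed as a payment curve. The 5% and 25th-percentile cutoffs come from the comment; the linear shape and the 40% floor at the last funded percentile are illustrative assumptions.

```python
# Sketch of the sliding-scale payment: full IRG-approved budget for the
# top 5%, a linearly declining fraction down to the 25th percentile, and
# nothing below. The 40% floor is an assumption, not the commenter's number.

def sliding_scale_fraction(percentile, full_cutoff=5, last_cutoff=25,
                           floor=0.40):
    """Fraction of the IRG-approved budget paid at a given percentile."""
    if percentile <= full_cutoff:
        return 1.0
    if percentile > last_cutoff:
        return 0.0
    # Linear decline from 100% just past the full cutoff to the floor
    # at the last funded percentile.
    frac = (percentile - full_cutoff) / (last_cutoff - full_cutoff)
    return 1.0 - frac * (1.0 - floor)
```

A 15th-percentile project would thus receive 70% of its approved budget, enough, on this proposal, to keep a meritorious project alive rather than killing it outright at the payline.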
Third, I would also advocate strongly for uniformity across NIH institutes with regard to dealing with this difficult situation. Currently, each institute is using a different approach in funding allocation. For example, NIDDK cuts 20% from every project’s budget, AND cuts 1 yr from every project that has a percentile score greater than some specified single-digit percentile; NHLBI funds >95% of IRG-approved budget, cuts 1 yr from every project, AND uses a different payline for A0 and A1 applications… This is problematic given that IRGs are not 100% linked to particular institutes, and reviewers are sometimes unaware of different funding policies for various institutes.
“This is problematic given that IRGs are not 100% linked to particular institutes, and reviewers are sometimes unaware of different funding policies for various institutes.”
So is your proposal that reviewers should be making funding decisions, glfadkt? Every single SRO I’ve ever reviewed under makes a big deal of the fact that reviewers are not supposed to be gaming the payline when they review grants (despite our natural tendency to score in binary “fund/not fund” terms).
More important, I would suggest, is that the differing policies of some ICs that cut a year of funding as a matter of course lead to game-playing on the part of applicants as well as reviewers. And this can throw a lot of variation into the process when one set of reviewers is thinking “this 5-year plan will only be funded for 4 years anyway” and another set is not in on this little secret. More variation arises when n00b investigators think they have to fill out 5 years of work and keep getting nailed for being overambitious.
I think there is building consensus that one critical modification to NIH policy could greatly and positively impact research progress. Specifically, the idea of restricting the number of active research grants to 2 per investigator. Unfortunately, NIH has not adequately addressed this issue. Funding more investigators, prioritizing investigator research priorities, and broadening the NIH portfolio are all necessary and positive outcomes that would inevitably occur with such a change. It is time for the NIH to have more substantive discussion regarding this innovative, practical, and low cost idea to immediately enhance NIH success during the current economically challenging environment.
There is no “building consensus” at all. There is the continued assertion from small-town-grocer labs that their way is best, when all the available evidence in Science, Nature, Cell, and other top, cutting-edge journals shows that what is needed is more gigantic labs, not fewer. This may have to be at the expense of a few insignificant one-grant operators who publish once every other year in their society journal.
It is ironic that over the decades NIH has contributed to the instability of American universities by providing salaries to senior faculty. In many cases, I imagine a crisis would ensue if NIH funding stopped; universities could not afford to keep their well-funded faculty. The proportion of salary support on an R01 is simply unsustainable, and the tough choice must be made now so that an unruly default does not occur. Oops, Greece popped into my mind! Changes must begin now so that universities will not succumb to faculty loss, especially of the senior faculty who provide their base. Thus, it is equally ironic that the survival of many universities really depends on relinquishing external funds for faculty support, as this will drive them to manage their faculty costs more responsibly and buffer them from external financial forces. Of course, universities will never do this of their own accord.
Yes, I agree with others that NIH should not provide salary to senior PIs. Perhaps, rather than cutting overhead, it could be used as a source not only for university basics (libraries, space, etc.) but also for another necessity: tenured and senior faculty. In any case, universities should be financially responsible for senior faculty. When a grant ends, the university does not raze its library but continues to support it; nor should it fire its faculty. Imagine the amount of direct costs that would become available for research. On another note, don’t cut the number of R01s per investigator. Ideas are needed, and these, via R01s, can provide for junior faculty and staff salaries and training. Finally, don’t limit resubmissions. That is tantamount to saying that PIs are incapable of learning how to improve their research, and that the constructive, beneficial work of study sections should be lost. We are here to learn and teach.
In terms of A2 funding for R01s, I think that the A2 should be put back. I understand the rationale for removing the A2; however the rationale was an expediency measure, not a measure of funding the best scoring proposals. Since many A2s fell amongst the best scoring proposals, disallowing them has removed funding for science that would have scored better than science that is currently being funded. As it is today, such proposals may never be funded, even though they would have scored better than others that will be funded. This just makes no sense to me if the goal of the review process is to identify the best scoring science to fund. Following this logic, one could also argue for additional tries. At some point there is a trade off, but that trade off seems too far balanced against funding the best scoring proposals when one eliminates the A2, as quite a few A2s were funded.
In terms of SBIRs/STTRs, I see no justification for removing the A2. From my recollection after exploring the NIH database, the percentage of A2s that got funded was higher than the percentage of A0s and A1s that got funded (this is easily verified). The A2, and more, should be put back for SBIRs/STTRs if the goal is to fund the proposals that will get the best score.