5 Comments
At the NIH Regional Seminar this past May, I had the pleasure of giving the keynote talk and presenting different perspectives on how NIH can further the impact of our research funding. Some of the topics I presented in this talk will be familiar to frequent Open Mike blog readers – our concerns about the hypercompetitive nature of applying for NIH support, for example. Others we haven’t discussed in depth here yet – such as how we might measure the contributions of NIH-supported research to treating diseases. My staff recorded this talk and has made it available to you on the NIH Grants YouTube channel. If you’re interested in the topics covered here on the blog (which I hope you are, since you are reading this now!) then you may be interested in this talk.
Super informative talk, and very helpful. What about young physician-scientists?
Mike,
While it is possible that we eventually see diminishing returns as funding to a single PI increases, the graph that you present during the discussion starting at 23:05 fails to establish this.
The immediate problem is that you are showing a log-log graph, but making claims about decreasing returns, i.e., about the concavity of the graph. Concavity is not preserved under the log transformation: a curve that appears concave on a log-log scale may exhibit increasing returns when plotted on a linear scale. Therefore the mere concavity of the graph you show does not demonstrate the decreasing incremental returns it is purported to show.
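To make this concrete, here is a toy curve (an illustrative function, not the data from the talk) that is strictly concave on log-log axes yet convex, i.e., shows increasing marginal returns, on linear axes. A few lines of Python confirm both curvatures numerically:

```python
import numpy as np

# Toy curve (an illustrative function, not the data from the talk):
#   log y = 2*log x - 0.25*(log x)**2
# Its second derivative in log x is -0.5, so it is strictly concave on
# log-log axes, yet on linear axes it is convex (increasing marginal
# returns) for x in [1, 3].

x = np.linspace(1.0, 3.0, 401)            # uniform grid in x
y = np.exp(2.0 * np.log(x) - 0.25 * np.log(x) ** 2)

u = np.linspace(0.0, np.log(3.0), 401)    # uniform grid in log x
v = 2.0 * u - 0.25 * u ** 2               # log y as a function of log x

# On a uniform grid, second differences share the sign of the second derivative.
print("convex on linear axes:  ", bool(np.all(np.diff(y, 2) > 0)))   # True
print("concave on log-log axes:", bool(np.all(np.diff(v, 2) < 0)))   # True
```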
There’s a more fundamental problem here however. Rather than comparing how individual investigators respond to changes in funding, you have compared outputs across different investigators receiving differing amounts of funding. The problem with this approach is that if review panels are doing anything useful, investigators’ funding will vary systematically with their abilities to convert funding into impactful work. As a result, the production function that you have inferred from data is unlikely to adequately predict the consequences of actual funding changes.
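To illustrate why this matters, here is a minimal simulation sketch under entirely hypothetical assumptions (the ability distribution, the allocation rule, and the square-root production function are all invented): each investigator's own returns to funding are strongly diminishing, yet because funding flows toward higher-ability investigators, the cross-sectional funding-output relationship looks nearly linear and would mispredict the effect of changing any one investigator's funding.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stylized model (all numbers are made up for illustration):
# every investigator converts funding F into output as  output = ability * F**0.5,
# so the true within-investigator elasticity is 0.5 (strongly diminishing returns).
n = 5000
ability = rng.lognormal(mean=0.0, sigma=0.5, size=n)

# Suppose review panels award more funding to higher-ability investigators
# (funding rises with ability, plus noise).
funding = 250_000 * ability ** 2 * rng.lognormal(mean=0.0, sigma=0.3, size=n)
output = ability * funding ** 0.5

# Cross-sectional log-log regression of output on funding, i.e., comparing
# outputs across differently funded investigators rather than within one.
slope, _ = np.polyfit(np.log(funding), np.log(output), 1)
print("true within-investigator elasticity: 0.50")
print(f"cross-sectional elasticity estimate: {slope:.2f}")   # comes out near 1.0
```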
I realize these arguments are difficult to follow without images and equations. We have addressed each of these points in great detail in a pair of blog posts. I am not permitted to post links here, but one can find them easily by searching for “Case study: NIH’s Rule of 21” or something similar.
Best regards,
Carl Bergstrom,
University of Washington
I hope this inspires decisions from U.S. NIH think tanks to accelerate dormant but promising research.
Thanks to NIH for all its good offices. I anticipate the inclusion of international ESIs in these new initiatives.
Dear Mike,
In response to Carl Bergstrom’s point that “The problem with this approach is that if review panels are doing anything useful, investigators’ funding will vary systematically with their abilities to convert funding into impactful work,” I would like to point out that review panels do not take previous funding into account, except for the small section of the biosketch listing the previous three years of peer-reviewed funding, which serves mostly to check overlap and as a measure of success or evidence of infrastructure. Peer review does a pretty good job within its guidelines, but the guidelines do need to be changed so that investigators are compared at the same funding level, particularly on investigator and productivity criteria. It does not make sense to compare an investigator with $5 million a year to one who receives $250,000 per year, or to one who has not been recently funded.
Given the large load on peer review, it may be useful to eliminate triage of whole applications in order to focus on complete review of a smaller number of applications. Instead, truncated applications of 1-2 pages, consisting of the final specific aims page and the summary page, could be anonymously reviewed by an independent online panel to triage grants down to twice the institutional payline. This way, 50% of the grants accepted for full review would hit the payline, and review could be sped up and focused on grants that are scientifically meaningful. In addition, the initial triage, being fully blinded on both ends, could eliminate a number of distractions and focus solely on significance, innovation, and approach, without unpublished preliminary data or institutional and individual reputation, infrastructure, vertebrate animals, stage of investigator, and so on. Also, given a two-page proposal focused solely on significance, innovation, and approach, a single reviewer could easily review 20-40 applications in 2-3 weeks, providing triaged applicants with early feedback without their projects being prejudiced in review panels.