A Look at NIH Support for Model Organisms, Part Two

August 3, 2016

We were pleased to hear the feedback on our previous post on NIH-funded model-organism research. One question a number of you asked is: what’s happening with research involving mouse models? Thanks to additional work by colleagues in NIH’s Office of Portfolio Analysis (OPA) and Office of Extramural Research/Office of Research Information Systems, I’m excited to ….

The Predictive Nature of Criterion Scores on Impact Score and Funding Outcomes

July 22, 2016

To develop and implement data-driven policy, we need to carefully analyze our data and understand the “stories behind our metrics.” Without analyzing our data to know what’s going on, we’re essentially flying blind! A group of authors from the NIH Office of Extramural Research sought to investigate the stories behind peer review scoring and why some grant applications are more likely to be funded than others. They extended analyses previously reported by NIH’s Office of Extramural Research and the National Institute of General Medical Sciences. Last month, they published their analysis of over 123,000 competing R01 applications and described the correlations of the individual component peer review scores – significance, investigator(s), innovation, approach, and environment – with the subsequent overall impact score and funding outcome. ….
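To make the flavor of such an analysis concrete, here is a minimal sketch that correlates the five criterion scores with the overall impact score. The column names and toy data are assumptions for illustration only; this is not the authors’ dataset or method.

```python
# Sketch of a criterion-score correlation analysis. Column names and
# values are illustrative assumptions, not the actual OER data.
import pandas as pd

# Hypothetical extract: one row per competing R01 application.
df = pd.DataFrame({
    "significance": [2, 3, 1, 4, 2, 5],
    "investigator": [2, 2, 1, 3, 3, 4],
    "innovation":   [3, 4, 2, 4, 3, 5],
    "approach":     [2, 4, 1, 5, 3, 5],
    "environment":  [1, 2, 1, 3, 2, 3],
    "impact_score": [20, 35, 12, 48, 30, 55],  # lower is better
})

# Pearson correlation of each criterion score with the overall impact score.
criteria = ["significance", "investigator", "innovation", "approach", "environment"]
correlations = df[criteria].corrwith(df["impact_score"]).sort_values(ascending=False)
print(correlations)
```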

A Look at Trends in NIH’s Model Organism Research Support

July 14, 2016

Wangler et al. recently published an article in Genetics on NIH funding for model organism research involving Drosophila. The authors extracted grant information from NIH ExPORTER and looked for the word “Drosophila” in either the title or the abstract. Using this approach, the authors found that NIH support for Drosophila-based research is declining.
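For readers who want to try something similar, a minimal sketch of this keyword approach might look like the following. The file name and column names are assumptions about a locally downloaded ExPORTER-style extract, not a guaranteed schema.

```python
# Rough sketch of the keyword approach: flag projects whose title or
# abstract mentions "Drosophila" and count them by fiscal year.
# File and column names are assumptions about an ExPORTER-style extract.
import pandas as pd

projects = pd.read_csv("exporter_projects.csv")  # hypothetical local file

text = (projects["PROJECT_TITLE"].fillna("") + " " +
        projects["ABSTRACT_TEXT"].fillna(""))
is_fly = text.str.contains("drosophila", case=False)

counts_by_year = projects.loc[is_fly].groupby("FY").size()
print(counts_by_year)
```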

We chose to investigate trends in NIH support for Drosophila and other model organism research further. Two groups of NIH staff used two different approaches. Our Office of Research Information Systems (ORIS) used an automated thesaurus-based text mining system that mines not only project titles and abstracts but also the specific aims contained in the application; this is the system we use to generate “Research Condition and Disease Category” (RCDC) tables, which are publicly posted to the NIH RePORT website. In a separate effort, our Office of Portfolio Analysis (OPA) supplemented a different text mining algorithm with extensive manual curation. Both methods – the wholly automated thesaurus-based approach and the manually curated text mining approach – yielded similar findings. In this post, we present the results of the manually curated approach. ….
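To make the thesaurus-based idea concrete, here is a toy sketch of category matching across title, abstract, and specific aims. The two-entry thesaurus is a stand-in invented for illustration; this is not the actual RCDC system or vocabulary.

```python
# Toy illustration of thesaurus-based categorization: a category matches
# when any of its terms appears in the combined title, abstract, and
# specific aims. The thesaurus below is a stand-in, not the RCDC vocabulary.
THESAURUS = {
    "Drosophila": ["drosophila", "fruit fly", "d. melanogaster"],
    "Zebrafish":  ["zebrafish", "danio rerio"],
}

def categorize(title: str, abstract: str, specific_aims: str) -> set[str]:
    text = " ".join([title, abstract, specific_aims]).lower()
    return {category for category, terms in THESAURUS.items()
            if any(term in text for term in terms)}

print(categorize("Wing development", "We study D. melanogaster wings.", ""))
# -> {'Drosophila'}
```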

Outcomes for R01 “Virtual A2s”

June 24, 2016

A few months ago, a researcher told me about his experiences with the relatively new NIH policy under which investigators are allowed to submit what we have come to call “virtual A2s.” Under NIH’s previous single-resubmission policy, if an investigator’s de novo R01 grant application (called an “A0”) was not funded, they had one chance to submit a revision (called an “A1”). If the A1 application was unsuccessful, the applicant was required to make significant changes compared to the previous submissions; NIH took measures to turn away subsequent submissions that were materially similar to the unfunded A1. Under NIH’s current policy, investigators may resubmit a materially similar application as a new submission after the A1. We call these applications “virtual A2s.” The researcher told me that his virtual A2 did not fare well; although his A0 and A1 had received good scores (though not good enough for funding), the virtual A2 was not discussed. He wondered: just how likely is it for a virtual A2 to be successful? ….
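One way to make “materially similar” concrete, purely as an illustration and not NIH’s actual screening method, is to compare a new A0 abstract against a prior unfunded A1 abstract with TF-IDF cosine similarity:

```python
# Illustrative sketch only: score textual similarity between a prior A1
# abstract and a new A0 abstract. High similarity would suggest a
# "virtual A2." This is not NIH's actual matching procedure.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

prior_a1 = "We propose to study cardiac fibrosis using mouse models of heart failure."
new_a0 = "We propose to investigate cardiac fibrosis in murine models of heart failure."

vectors = TfidfVectorizer().fit_transform([prior_a1, new_a0])
similarity = cosine_similarity(vectors[0], vectors[1])[0, 0]
print(f"cosine similarity: {similarity:.2f}")
```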

How Many Researchers are Seeking SBIR/STTR Funding?

June 14, 2016

We were pleased to see the interest in our recent blog on the number of unique investigators applying for and receiving NIH research project grants (RPGs). Some of you (through the blog page or through other media) have asked whether we have similar data for our Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) programs. We have generated analogous figures for SBIR and STTR grants, and today’s post shares this investigation of the question, “How many unique researchers are seeking SBIR/STTR funding?” ….
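A minimal sketch of this kind of tally appears below, assuming a table with one row per application and a stable person identifier; the column names and toy data are hypothetical.

```python
# Sketch of counting unique applicants and awardees per fiscal year.
# Column names (FY, PI_ID, AWARDED) and values are hypothetical.
import pandas as pd

apps = pd.DataFrame({
    "FY":      [2014, 2014, 2014, 2015, 2015, 2015],
    "PI_ID":   ["p1", "p1", "p2", "p2", "p3", "p3"],
    "AWARDED": [True, False, False, True, False, True],
})

unique_applicants = apps.groupby("FY")["PI_ID"].nunique()
unique_awardees = apps[apps["AWARDED"]].groupby("FY")["PI_ID"].nunique()
print(pd.DataFrame({"applicants": unique_applicants, "awardees": unique_awardees}))
```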

How Many Researchers?

May 31, 2016

Last year, Kimble et al. published the findings of a workshop held at the University of Wisconsin on proposed strategies to rescue biomedical research in the US. The workshop organizers brought together a number of stakeholders, who concluded that the research community faces two chief problems: “Too many researchers vying for too few dollars; too many postdocs competing for too few faculty positions.” These conclusions raise at least two questions: How many scientists – or, more specifically, how many principal investigators – does NIH fund? And how many scientists (more specifically, aspiring principal investigators) want to receive NIH funding? Today I’d like to discuss two ways to examine these questions: by looking at the number of principal investigators awarded funding on a yearly basis, and also in a way that captures a broader view of NIH-supported scientists over a window of time. ….
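To illustrate the “window” view, here is a minimal sketch that counts a principal investigator as NIH-funded if they held an award at any point in a sliding five-year window; the data, column names, and window length are assumptions for illustration.

```python
# Sketch of a window-based count: a PI counts as funded if they held an
# award at any point in the trailing five-year window. Data are made up.
import pandas as pd

awards = pd.DataFrame({
    "FY":    [2010, 2011, 2013, 2014, 2014, 2015],
    "PI_ID": ["p1", "p2", "p1", "p3", "p2", "p4"],
})

WINDOW = 5
for end in (2014, 2015):
    in_window = awards["FY"].between(end - WINDOW + 1, end)
    n = awards.loc[in_window, "PI_ID"].nunique()
    print(f"{end - WINDOW + 1}-{end}: {n} unique funded PIs")
```

The yearly count and the windowed count answer different questions: the former measures the size of the funded pool in a single budget cycle, while the latter approximates the size of the community NIH sustains over time.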

Grant Renewal Success Rates: Then and Now

May 26, 2016

In one of my earlier blog posts, I described an analysis looking at whether attempts at renewal are successful. We looked at data from fiscal years 2013–2015 and found that renewal applications have higher success rates than new applications, and that this pattern holds for both new and experienced investigators. In response to your comments and queries, we wanted to follow up on that analysis with some historical data that look at whether success rates of competing renewals have decreased disproportionately compared to the success rates of new grant applications. ….
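As a quick sketch of the comparison, using one simple definition of success rate (awards divided by applications) and made-up counts:

```python
# Toy success-rate comparison for new vs. renewal applications.
# Numbers are illustrative, not NIH data; "success rate" here is the
# simple ratio of awards to applications.
apps = {
    "new":     {"applications": 1000, "awards": 150},
    "renewal": {"applications": 300,  "awards": 90},
}

for kind, counts in apps.items():
    rate = counts["awards"] / counts["applications"]
    print(f"{kind:8s} success rate: {rate:.1%}")
```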

Citations Per Dollar as a Measure of Productivity

April 28, 2016

NIH grants reflect research investments that we hope will lead to the advancement of fundamental knowledge and/or the application of that knowledge to efforts to improve health and well-being. In February, we published a blog on the publication impact of NIH-funded research. We were gratified to hear your many thoughtful comments and questions. Some of you suggested that we should focus not only on output (e.g., highly cited papers) but also on cost – or, as one of you put it, “citations per dollar.” Indeed, my colleagues and I have previously taken a preliminary look at this question in the world of cardiovascular research. Today I’d like to share our exploration of citations per dollar using a sample of R01 grants across NIH’s research portfolio. What we found has an interesting policy implication for maximizing NIH’s return on investment in research. ….
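The metric itself is straightforward: total citations to a grant’s publications divided by the grant’s total funding. Here is a minimal sketch with made-up numbers; the grant identifiers, costs, and citation counts are purely illustrative.

```python
# Citations per dollar: total citations accrued by a grant's publications
# divided by total funding. All numbers below are made up for illustration.
grants = [
    {"id": "R01-A", "total_cost": 2_500_000, "citations": 800},
    {"id": "R01-B", "total_cost": 1_200_000, "citations": 500},
]

for g in grants:
    cpd = g["citations"] / g["total_cost"]
    print(f"{g['id']}: {cpd * 1_000_000:.0f} citations per $1M")
```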

FY2015 by the Numbers, and a Quick Look at Recent Trends

March 14, 2016

When I was an extramural program division director, NIH applicants and awardees would often ask me questions like “Do you fund research on certain topics?” or “What’s been happening to success rates for certain kinds of grants?” or “How much money do certain kinds of grants usually get?” Often I would respond by going to the RePORT website and running a query or two (or three or more); I would not only show the results but also show the applicant or awardee how they could run even more queries on their own. Indeed, the website offers an extraordinary data resource for the public, ranging from the RePORTER query tool for finding certain kinds of grants, to a bounty of prepared reports, to tools for exporting large data tables about projects, resulting publications, and (more recently) patents. With the Matchmaker tool, one can even copy and paste some text (e.g., a draft abstract of your next proposal) and find similar funded grants. The NIH Data Book on our RePORT website now incorporates NIH’s fiscal year 2015 data. Let’s reflect on funding trends over the past three years and other recently updated application and award summary data. ….

Publication Impact of NIH-funded Research – A First Look

March 2, 2016

In a recent PNAS commentary, Daniel Shapiro and Kent Vrana of Pennsylvania State University argue that “Celebrating R and D expenditures badly misses the point.” Instead of focusing on how much money is spent, the research enterprise should focus on its outcomes – the discoveries that advance knowledge and lead to improvements in health.

Of course, as we’ve noted before, measuring research impact is hard, and there is no gold standard. But for now, let’s take a look at one measure of productivity, namely the publication of highly-cited papers. Some in the research community suggest that a research paper citation is a nod to the impact and significance of the findings reported in that paper – in other words, more highly-cited papers are indicative of highly regarded and impactful research.

If we consider highly-cited papers as a proxy for productivity, it’s not enough simply to count citations, because publication and citation behaviors differ greatly among fields – some fields generate many more citations per paper. ….
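One simple way to adjust for this is to divide each paper’s citation count by the average for its field, so a value of 2.0 means twice the field norm. The sketch below is a simplified stand-in for more careful field-normalized measures such as NIH’s Relative Citation Ratio, with made-up fields and counts.

```python
# Toy field normalization: each paper's citations divided by the mean
# citation count in its field. A simplified stand-in for measures like
# the Relative Citation Ratio; data are made up for illustration.
import pandas as pd

papers = pd.DataFrame({
    "field":     ["genetics", "genetics", "math", "math"],
    "citations": [120, 40, 12, 4],
})

field_mean = papers.groupby("field")["citations"].transform("mean")
papers["normalized"] = papers["citations"] / field_mean
print(papers)
```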