My colleagues within the NIH Office of Portfolio Analysis sought to answer this call. Drs. Ian Hutchins and George Santangelo embarked on a hefty bibliometric endeavor over the past several years to curate biomedical citation data. They aggregated over 420 million citation links from sources like Medline, PubMed Central, Entrez, CrossRef, and other unrestricted, open-access datasets. With this information in hand, we can now gain better insight into relationships between basic and applied research, into how a researcher’s works are cited, and into ways to make large-scale analyses of citation metrics easier and free. Continue reading
In March 2017, we wrote about federal funders’ policies on interim research products, including preprints. We encouraged applicants and awardees to include citations to preprints in their grant applications and progress reports. Some of your feedback pointed to the potential impact of this new policy on the peer review process. Continue reading
As you know, our NIH Strategic Plan articulated an objective to “excel as a federal science agency by managing for results,” and to manage by results we must harness the power of data to drive evidence-based policies. Sometimes, however, our world can be complicated by requirements to enter the same types of data over and over again in one system after another. These situations do have an upside: they give us the chance to look for opportunities to simplify. Continue reading
We previously referenced Ioannidis and Khoury’s “PQRST” mnemonic for describing research impact: “P” is productivity, “Q” is quality, “R” is reproducibility, “S” is sharing, and “T” is translation. We wrote several blogs about “P,” productivity, focusing on publications, citations, and more recently the relative citation ratio. Now we’ll focus on a different kind of “P” for productivity, namely patents (which arguably are also related to “T” for translation). …. Do NIH-supported papers that are cited by patents have a higher Relative Citation Ratio than those that are not cited by patents? As a refresher, the Relative Citation Ratio uses citation rates to measure the influence of a publication at the article level…. We identified 119,674 unique NIH grants that were funded between 1995 and 2007 and that generated at least one publication…. Continue reading
Measuring the impact of NIH grants is an important input in our stewardship of research funding. One metric we can use to look at impact, discussed previously on this blog, is the relative citation ratio (or RCR). This measure – which NIH has made freely available through the iCite tool – aims to go further than just raw numbers of published research findings or citations, by quantifying the impact and influence of a research article both within the context of its research field and benchmarked against publications resulting from NIH R01 awards.
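The benchmarking idea behind the RCR can be sketched in a few lines. In the published method, a paper’s expected citation rate is estimated from its co-citation network; the simplified sketch below (function names are ours, data would be synthetic) shows only the core step of regressing article citation rates against field citation rates for a benchmark set of R01-funded papers, so that the average R01 paper scores near 1.0.

```python
import numpy as np

def fit_benchmark(r01_acr, r01_fcr):
    """Fit a line predicting article citation rate (ACR) from field
    citation rate (FCR), using a benchmark set of NIH R01-funded papers.
    By construction, the average benchmark paper will have an RCR near 1.0."""
    slope, intercept = np.polyfit(r01_fcr, r01_acr, 1)
    return slope, intercept

def rcr(acr, fcr, slope, intercept):
    """Relative Citation Ratio: a paper's observed citation rate divided
    by the rate expected for a paper with its field citation rate."""
    return acr / (intercept + slope * fcr)
```

A paper cited exactly as often as the benchmark predicts for its field gets an RCR of 1.0; values above 1.0 indicate above-expected influence.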
In light of our more recent posts on applications and resubmissions, we’d like to go a step further by looking at long-term bibliometric outcomes as a function of submission number. In other words, are there any observable trends in the impact of publications resulting from an NIH grant funded as an A0, versus those funded as an A1 or A2? And does that answer change when we take into account how much funding each grant received? …. Continue reading
Many thanks for your terrific questions and comments to last month’s post, Research Commitment Index: A New Tool for Describing Grant Support. I’d like to use this opportunity to address a couple of key points brought up by a number of commenters; in later blogs, we’ll focus on other suggestions.
The two points I’d like to address here are: 1) why use log-transformed values when plotting output (annual weighted relative citation ratio, or annual RCR) against input (annual research commitment index, or annual RCI), and 2) what is meant by diminishing returns. …. Continue reading
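Both points can be illustrated with a tiny sketch (the numbers below are made up for illustration, not NIH data). If output grows as a power of input, y = c·xᵇ, the relationship is a straight line on log-log axes, which is one reason to plot log-transformed values: the exponent b can be read directly from the slope, and b < 1 is precisely what “diminishing returns” means, since doubling the input then less than doubles the output.

```python
import numpy as np

# Hypothetical grant-level data: annual research commitment index (input)
# and annual weighted RCR (output), following a sublinear power law.
rci = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
wrcr = 1.5 * rci ** 0.7

# On log-log axes, y = c * x**b becomes log(y) = b*log(x) + log(c),
# so a linear fit to the log-transformed values recovers the exponent b.
b, log_c = np.polyfit(np.log(rci), np.log(wrcr), 1)

# b < 1: each doubling of input yields less than a doubling of output,
# i.e. diminishing returns.
```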
Last April we posted a blog on the measurement of citation metrics as a function of grant funding. We focused on a group of R01 grants and described the association of a “citation percentile” measure with funding. We noted evidence of “diminishing returns” – that is, increased levels of funding were associated with decreasing increments of productivity – an observation that has been noted by others as well.
We were gratified by the many comments we received, through the blog and elsewhere. Furthermore, as I noted in a blog last month, our Office of Portfolio Analysis has released data on the “Relative Citation Ratio” (or RCR), a robust field-normalized measure of the citation influence of a single article (and, as I mentioned, a measure that is available to you for free).
In the follow-up analysis I’d like to share with you today, we focus on a cohort of 60,447 P01 and R01-equivalent grants (R01, R29, and R37) which were first funded between 1995 and 2009. Through the end of 2014, these grants yielded at least 654,607 papers. We calculated a “weighted RCR” value for each grant, …. Continue reading
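One plausible way to roll article-level RCRs up to the grant level is sketched below. The excerpt does not spell out the weighting, so this convention – splitting a paper’s credit evenly among the grants it acknowledges – is an assumption for illustration, not necessarily the analysis’s exact formula.

```python
def weighted_rcr(papers):
    """Sketch of a grant-level 'weighted RCR' (an assumed convention):
    sum each supported paper's RCR, dividing credit evenly among the
    grants that paper acknowledges so no paper is double-counted."""
    return sum(p["rcr"] / p["n_grants"] for p in papers)
```

Under this convention, a paper acknowledging three grants contributes one third of its RCR to each.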
In previous blogs, we talked about citation measures as one metric for scientific productivity. Raw citation counts are inherently problematic – different fields cite at different rates, and citation counts rise and fall in the months to years after a publication appears. Therefore, a number of bibliometric scholars have focused on developing methods that measure citation impact while also accounting for field of study and time of publication. We are pleased to report that on September 6, PLoS Biology published a paper from our NIH colleagues in the Office of Portfolio Analysis on “The Relative Citation Ratio: A New Metric that Uses Citation Rates to Measure Influence at the Article Level.” Before we delve into the details and look at some real data, …. Continue reading
NIH grants reflect research investments that we hope will lead to advancement of fundamental knowledge and/or application of that knowledge to efforts to improve health and well-being. In February, we published a blog on the publication impact of NIH-funded research. We were gratified to hear your many thoughtful comments and questions. Some of you suggested that we should not only focus on output (e.g., highly cited papers), but also on cost – or, as one of you mentioned, “citations per dollar.” Indeed, my colleagues and I have previously taken a preliminary look at this question in the world of cardiovascular research. Today I’d like to share our exploration of citations per dollar using a sample of R01 grants across NIH’s research portfolio. What we found has an interesting policy implication for maximizing NIH’s return on investment in research. …. Continue reading
In a recent PNAS commentary, Daniel Shapiro and Kent Vrana of Pennsylvania State University argue that “Celebrating R and D expenditures badly misses the point.” Instead of focusing on how much money is spent, the research enterprise should instead focus on its outcomes – its discoveries that advance knowledge and lead to improvements in health.
Of course, as we’ve noted before, measuring research impact is hard, and there is no gold standard. But for now, let’s take a look at one measure of productivity, namely the publication of highly-cited papers. Some in the research community suggest that a research paper citation is a nod to the impact and significance of the findings reported in that paper – in other words, more highly-cited papers are indicative of highly regarded and impactful research.
If we consider highly cited papers as a proxy for productivity, it is not enough simply to count citations, because publication and citation behaviors differ greatly among fields – some fields generate many more citations per paper. …. Continue reading
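The simplest fix for cross-field differences is to normalize each paper’s citation count by the average for its own field and publication year, as in the deliberately basic sketch below (field labels and counts are hypothetical; metrics like the RCR refine this idea considerably).

```python
from collections import defaultdict

def field_normalized_citations(papers):
    """Divide each paper's citation count by the mean count for papers
    in the same field and publication year, so a score of 1.0 means
    'average for its field' regardless of how citation-heavy that
    field happens to be."""
    totals = defaultdict(lambda: [0.0, 0])  # (field, year) -> [sum, count]
    for p in papers:
        key = (p["field"], p["year"])
        totals[key][0] += p["citations"]
        totals[key][1] += 1
    mean = {k: s / n for k, (s, n) in totals.items()}
    return [p["citations"] / mean[(p["field"], p["year"])] for p in papers]
```

A paper with 30 citations in a field averaging 20 scores 1.5; the same raw count in a field averaging 60 would score only 0.5.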