A few weeks ago, we touted the value of the NIH’s Research, Condition, and Disease Classification (RCDC) system to give us consistent annual reporting on official research budget categories and the ability to see trends in spending over time. RCDC’s robust scientific validation process, which allows for such consistency, provides public transparency into over 280 different NIH budget categories.
RCDC categories do not encompass all types of biomedical research. So, how can we get this kind of data for research areas that fall outside RCDC categories, especially newly emerging fields? Can we use the same thesaurus-based classification system to explore other research trends?
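To make the idea of thesaurus-based classification concrete, here is a minimal sketch of how a project abstract might be matched against category term lists. The categories and terms below are hypothetical illustrations, not the actual RCDC thesaurus, which is far larger and scientifically validated.

```python
# Toy thesaurus: category -> set of indicative terms (hypothetical, not RCDC's).
THESAURUS = {
    "Neuroscience": {"neuron", "synapse", "cortex"},
    "Cancer": {"tumor", "oncogene", "metastasis"},
}

def classify(abstract: str) -> list[str]:
    """Return every category whose terms appear in the abstract text."""
    words = set(abstract.lower().split())
    return sorted(cat for cat, terms in THESAURUS.items() if terms & words)

print(classify("A novel oncogene drives tumor growth"))  # ['Cancer']
```

A real system would also handle multi-word terms, stemming, and weighting, but the core idea is the same: consistent category assignment flows from a controlled vocabulary rather than ad hoc judgment.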
Almost 11 years ago, Stefan Wuchty, Benjamin Jones, and Brian Uzzi (all of Northwestern University) published an article in Science on “The Increasing Dominance of Teams in Production of Knowledge.” They analyzed nearly 20 million papers published over 5 decades, along with 2.1 million patents, and found that across all fields the number of authors per paper (or patent) steadily increased, that teams were coming to dominate individual efforts, and that teams produced more highly cited research. Continue reading
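The core measurement behind that finding, average team size per paper over time, is straightforward to sketch. The records below are made-up examples, not data from the study:

```python
# Hypothetical (year, number_of_authors) records for a handful of papers.
from collections import defaultdict

papers = [(1965, 1), (1965, 2), (1985, 3), (1985, 4), (2005, 5), (2005, 7)]

# Group author counts by decade, then average within each decade.
by_decade = defaultdict(list)
for year, n_authors in papers:
    by_decade[year // 10 * 10].append(n_authors)

trend = {decade: sum(ns) / len(ns) for decade, ns in sorted(by_decade.items())}
print(trend)  # {1960: 1.5, 1980: 3.5, 2000: 6.0}
```

Run at the scale of 20 million papers, the same grouping yields the rising team-size curve the authors reported.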
As no scientist is an island, the overall scientific enterprise grows stronger when people work together. But an interesting question emerges from this concept for us to explore: how can we quantify the effect of collaboration on the productivity and impact of science? Continue reading
As you know, our NIH Strategic Plan articulated an objective to “excel as a federal science agency by managing for results,” and to manage for results we must harness the power of data to drive evidence-based policies. Sometimes, however, our world can be complicated by requirements to enter the same types of data over and over again in one system after another. These situations do have an upside: they give us the opportunity to look for ways to simplify. Continue reading
The scientific community is paying increasing attention to the quality practices of journals and publishers. NIH recently released a Guide notice (NOT-OD-18-011) to encourage authors to publish in journals that do not undermine the credibility, impact, and accuracy of their research findings. This notice aims to raise awareness about practices like changing publication fees without notice, lacking transparency in publication procedures, misrepresenting editorial boards, and/or using suspicious peer review. Continue reading
We previously referenced Ioannidis’ and Khoury’s “PQRST” mnemonic for describing research impact: “P” is productivity, “Q” is quality, “R” is reproducibility, “S” is sharing, and “T” is translation. We wrote several blogs about “P,” productivity, focusing on publications, citations, and more recently the relative citation ratio. Now we’ll focus on a different kind of “P” for productivity, namely patents (which arguably are also related to “T” for translation). …. Do NIH-supported papers that are cited by patents have a higher Relative Citation Ratio than those that are not cited by patents? As a refresher, the Relative Citation Ratio uses citation rates to measure the influence of a publication at the article level…. We identified 119,674 unique NIH grants that were funded between 1995 and 2007 and that generated at least one publication…. Continue reading
In previous blogs, we talked about citation measures as one metric for scientific productivity. Raw citation counts are inherently problematic – different fields cite at different rates, and citation counts rise and fall in the months to years after a publication appears. Therefore, a number of bibliometric scholars have focused on developing methods that measure citation impact while also accounting for field of study and time of publication. We are pleased to report that on September 6, PLoS Biology published a paper from our NIH colleagues in the Office of Portfolio Analysis on “The Relative Citation Ratio: A New Metric that Uses Citation Rates to Measure Influence at the Article Level.” Before we delve into the details and look at some real data, …. Continue reading
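The normalization idea behind such metrics can be sketched in a few lines. Note this is a simplified illustration, not the actual Relative Citation Ratio algorithm, which derives each paper's expected citation rate from its co-citation network; here we simply divide a paper's citation rate by the mean rate of an assumed same-field, same-year benchmark set.

```python
def normalized_ratio(citations_per_year: float,
                     benchmark_rates: list[float]) -> float:
    """Ratio of a paper's citation rate to the mean rate of its benchmark set.
    A value of 1.0 means the paper is cited at the field-typical rate."""
    expected = sum(benchmark_rates) / len(benchmark_rates)
    return citations_per_year / expected

# A paper cited 6 times/year in a field where comparable papers
# average 3 citations/year is cited at twice the typical rate.
print(normalized_ratio(6.0, [2.0, 3.0, 4.0]))  # 2.0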
NIH grants reflect research investments that we hope will lead to advancement of fundamental knowledge and/or application of that knowledge to efforts to improve health and well-being. In February, we published a blog on the publication impact of NIH funded research. We were gratified to hear your many thoughtful comments and questions. Some of you suggested that we should not only focus on output (e.g. highly cited papers), but also on cost – or as one of you mentioned “citations per dollar.” Indeed, my colleagues and I have previously taken a preliminary look at this question in the world of cardiovascular research. Today I’d like to share our exploration of citations per dollar using a sample of R01 grants across NIH’s research portfolio. What we found has an interesting policy implication for maximizing NIH’s return on investment in research. …. Continue reading
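A “citations per dollar” figure is a simple ratio once grant costs and linked publications are in hand. The grant identifiers and numbers below are invented for illustration only:

```python
# Hypothetical grant records: total cost and citations to linked publications.
grants = {
    "R01-A": {"total_cost": 2_500_000, "citations": 400},
    "R01-B": {"total_cost": 1_000_000, "citations": 250},
}

for grant_id, g in grants.items():
    per_million = g["citations"] / (g["total_cost"] / 1_000_000)
    print(f"{grant_id}: {per_million:.0f} citations per $1M")
```

The policy-relevant question is how this ratio behaves as award size grows, since diminishing returns per dollar would argue against concentrating funds in ever-larger grants.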
In a recent PNAS commentary, Daniel Shapiro and Kent Vrana of Pennsylvania State University argue that “Celebrating R and D expenditures badly misses the point.” Rather than focusing on how much money is spent, the research enterprise should focus on its outcomes: the discoveries that advance knowledge and lead to improvements in health.
Of course, as we’ve noted before, measuring research impact is hard, and there is no gold standard. But for now, let’s take a look at one measure of productivity, namely the publication of highly-cited papers. Some in the research community suggest that a research paper citation is a nod to the impact and significance of the findings reported in that paper – in other words, more highly-cited papers are indicative of highly regarded and impactful research.
If we consider highly-cited papers as a proxy for productivity, it’s not enough to simply count citations, because publication and citation behaviors differ greatly among fields – some fields generate many more citations per paper. …. Continue reading
On September 11, 2015, the National Heart, Lung, and Blood Institute (NHLBI) announced that it was stopping its Systolic Blood Pressure Intervention Trial (“SPRINT”). The Institute’s Data and Safety Monitoring Board (DSMB) had reviewed interim data and concluded that the results demonstrated clear benefit from aggressive blood pressure lowering. The trial enrolled over 9300 adults with systolic hypertension and increased cardiovascular risk and randomized them to standard control (aiming for a target systolic blood pressure of 140 mm Hg) or to aggressive control (aiming for a target systolic blood pressure of 120 mm Hg). …. Continue reading