October 5, 2023
In 2021, we wrote that appropriately acknowledging NIH grant support allows us to properly assess award outputs and make recommendations for future research directions. This post revisits the issue as a reminder to the research community about the importance of properly citing NIH grant support and accurately representing the funding behind a published study.
February 23, 2023
Sometimes disagreements about authorship cannot be avoided, and many researchers have likely seen them up close. Handled thoughtfully and appropriately, they can be resolved constructively; when they are not, they may lead to serious consequences for the people and research involved. Here, we will look at this issue more closely and reflect on how to address authorship disputes proactively.
April 19, 2021
Imagine this scenario: in the rush to publish a paper, you forget to cite the underlying NIH support. Or, the opposite, you acknowledge a grant that had nothing to do with the work. No problem, right?
Well, it could be. Accurately and precisely acknowledging NIH funding allows us to properly assess award outputs and make recommendations for future research directions. It is also a term and condition of award outlined in the NIH Grants Policy Statement. Since the Stevens Amendment passed in 1989, recipients have been required to acknowledge federal funding when publicly communicating projects or programs funded with HHS funds.
August 2, 2018
A few weeks ago, we touted the value of NIH’s Research, Condition, and Disease Classification (RCDC) system, which gives us consistent annual reporting on official research budget categories and the ability to see trends in spending over time. RCDC’s robust scientific validation process, which allows for such consistency, provides public transparency into more than 280 NIH budget categories.
RCDC categories do not encompass all types of biomedical research, however. So, how can we get this kind of data for research areas that fall outside RCDC categories, especially newly emerging fields? Are we able to use the same thesaurus-based classification system to explore other research trends?
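As a loose illustration of how a thesaurus-based classifier assigns projects to categories (the category names, terms, and matching threshold below are hypothetical, not actual RCDC fingerprints), a project can be mapped to a category when enough of that category’s terms appear in its title and abstract:

```python
# Minimal sketch of thesaurus-based classification, loosely modeled on
# RCDC-style term matching. Categories and terms here are hypothetical.

HYPOTHETICAL_THESAURUS = {
    "Wearable Sensors": {"wearable", "accelerometer", "biosensor", "mhealth"},
    "Microbiome": {"microbiome", "microbiota", "gut flora", "16s rrna"},
}

def classify(project_text: str, min_hits: int = 2) -> list[str]:
    """Return every category whose terms appear at least `min_hits`
    times in the project's title/abstract text."""
    text = project_text.lower()
    matches = []
    for category, terms in HYPOTHETICAL_THESAURUS.items():
        hits = sum(1 for term in terms if term in text)
        if hits >= min_hits:
            matches.append(category)
    return matches

print(classify("A wearable accelerometer-based biosensor for gait analysis"))
# ['Wearable Sensors']
```

Because the thesaurus, not a hand-curated list of grants, drives the assignment, the same machinery can in principle be pointed at new term sets to track emerging fields.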
April 4, 2018
Almost 11 years ago, Stefan Wuchty, Benjamin Jones, and Brian Uzzi (all of Northwestern University) published an article in Science on “The Increasing Dominance of Teams in Production of Knowledge.” They analyzed nearly 20 million papers published over 5 decades, plus 2.1 million patents, and found that across all fields the number of authors per paper (or patent) steadily increased, that teams were coming to dominate individual efforts, and that teams produced more highly cited research.
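To make that kind of trend concrete, here is a minimal sketch of an authors-per-paper calculation over a hypothetical handful of (year, author count) records; the actual study analyzed roughly 20 million papers and 2.1 million patents:

```python
# Sketch of the mean-team-size-by-year trend described above,
# over a hypothetical list of (publication_year, number_of_authors)
# records. The real analysis used decades of Web of Science data.
from collections import defaultdict

papers = [(1975, 1), (1975, 3), (1995, 4), (1995, 5), (2005, 6), (2005, 8)]

totals = defaultdict(lambda: [0, 0])  # year -> [author_sum, paper_count]
for year, n_authors in papers:
    totals[year][0] += n_authors
    totals[year][1] += 1

for year in sorted(totals):
    author_sum, n_papers = totals[year]
    print(year, round(author_sum / n_papers, 2))  # mean team size per year
```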
December 11, 2017
As no scientist is an island, the overall scientific enterprise grows stronger when people work together. But an interesting question emerges from this concept for us to explore: how can we quantify the effect of collaboration on scientific productivity and impact?
November 15, 2017
As you know, our NIH Strategic Plan articulated an objective to “excel as a federal science agency by managing for results,” and to manage for results we must harness the power of data to drive evidence-based policies. Sometimes, however, our world is complicated by requirements to enter the same types of data over and over again in one system after another. These situations do have an upside: they point us toward opportunities to simplify.
November 8, 2017
The scientific community is paying increasing attention to the quality practices of journals and publishers. NIH recently released a Guide notice (NOT-OD-18-011) encouraging authors to publish in journals that do not undermine the credibility, impact, and accuracy of their research findings. The notice aims to raise awareness of practices such as changing publication fees without notice, lack of transparency in publication procedures, misrepresentation of editorial boards, and suspect peer review.
September 18, 2017
We previously referenced Ioannidis’ and Khoury’s “PQRST” mnemonic for describing research impact: “P” is productivity, “Q” is quality, “R” is reproducibility, “S” is sharing, and “T” is translation. We wrote several blogs about “P,” productivity, focusing on publications, citations, and more recently the relative citation ratio. Now we’ll focus on a different kind of “P” for productivity, namely patents (which arguably are also related to “T” for translation). … Do NIH-supported papers that are cited by patents have a higher Relative Citation Ratio than those that are not cited by patents? As a refresher, the Relative Citation Ratio uses citation rates to measure the influence of a publication at the article level… We identified 119,674 unique NIH grants that were funded between 1995 and 2007 and that generated at least one publication…
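As a toy illustration of the comparison posed above (the papers and RCR values below are invented), one can split papers by whether any patent cites them and compare median RCRs between the two groups:

```python
# Toy sketch of the question above: do papers cited by patents have
# higher Relative Citation Ratios? All records here are hypothetical.
from statistics import median

papers = [
    {"rcr": 2.4, "cited_by_patent": True},
    {"rcr": 1.1, "cited_by_patent": False},
    {"rcr": 3.0, "cited_by_patent": True},
    {"rcr": 0.8, "cited_by_patent": False},
]

for flag in (True, False):
    rcrs = [p["rcr"] for p in papers if p["cited_by_patent"] == flag]
    label = "cited by patents" if flag else "not cited by patents"
    print(f"median RCR, {label}: {median(rcrs):.2f}")
```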
September 8, 2016
In previous blogs, we talked about citation measures as one metric of scientific productivity. Raw citation counts are inherently problematic: different fields cite at different rates, and citation counts rise and fall in the months to years after a publication appears. Therefore, a number of bibliometric scholars have focused on developing methods that measure citation impact while also accounting for field of study and time of publication. We are pleased to report that on September 6, PLOS Biology published a paper from our NIH colleagues in the Office of Portfolio Analysis, “Relative Citation Ratio (RCR): A New Metric That Uses Citation Rates to Measure Influence at the Article Level.” Before we delve into the details and look at some real data, …
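To give a rough sense of the metric’s structure (a simplified sketch, not the paper’s algorithm, which derives each article’s expected citation rate from its co-citation network), the RCR divides an article’s citations per year by the expected citation rate for its field, benchmarked so that NIH-funded papers average 1.0:

```python
# Rough illustration of the Relative Citation Ratio's structure.
# NOT the published algorithm: the real field citation rate comes from
# each article's co-citation network; here it is simply supplied.

def relative_citation_ratio(citations: int, years_since_pub: float,
                            field_citation_rate: float) -> float:
    """RCR ~ (article citations per year) / (expected field citations
    per year), benchmarked so NIH-funded papers average 1.0."""
    article_citation_rate = citations / years_since_pub
    return article_citation_rate / field_citation_rate

# A paper with 30 citations over 5 years, in a field where comparable
# papers draw ~4 citations per year, scores 1.5 (50% above benchmark).
print(relative_citation_ratio(30, 5, 4.0))  # 1.5
```

Dividing by a field-specific, time-normalized benchmark is what lets the RCR compare articles across fields and publication years, which raw citation counts cannot do.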