Bolstering Trust in Science Through Rigorous Standards


Scientists have long considered the research process to be self-correcting: we trust that, even if scientists sometimes make errors in the lab, those errors will eventually be discovered and corrected as others try to substantiate and extend original research findings. However, as stated in a commentary by NIH Director Francis Collins and NIH Deputy Director Larry Tabak, “A growing chorus of concern, from scientists and laypeople, contends that the complex system for ensuring the reproducibility of biomedical research is failing and is in need of restructuring.”

There are examples indicating that our processes have room for improvement. For example, a 2008 study by the Amyotrophic Lateral Sclerosis (ALS) Therapy Development Institute examined the preclinical effects of more than 70 drugs in the SOD1 mouse model of ALS and showed that the probability of seeing an effect by chance alone is substantial even with only ten animals per group. A 2011 study in Clinical Cancer Research authenticated 122 head and neck cancer cell lines and found that 37 of them were misidentified and/or contaminated. Many articles report results from experiments using animals of only one sex; however, a 2005 study in the Journal of Cerebral Blood Flow & Metabolism found that male and female mice responded differently to treatment with the same inhibitor, a difference that was only evident when the data were disaggregated by sex.
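
To make the small-sample point concrete, below is a minimal Python sketch — a hypothetical illustration, not the ALS study’s actual data or analysis. It simulates screening many compounds that have no real effect, using ten animals per group, and counts how many nonetheless clear the conventional p < 0.05 threshold by chance. The drug count, survival values, and noise level are assumptions chosen only for illustration.

    # Hypothetical illustration: chance "effects" in a small-sample drug screen.
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(seed=0)
    n_drugs = 70        # assumed number of ineffective compounds screened
    n_per_group = 10    # animals per group, as in the scenario above
    mean_days, sd = 130.0, 10.0   # assumed mean survival (days) and noise level

    false_positives = 0
    for _ in range(n_drugs):
        control = rng.normal(mean_days, sd, n_per_group)
        treated = rng.normal(mean_days, sd, n_per_group)  # same distribution: no true effect
        _, p_value = ttest_ind(control, treated)
        if p_value < 0.05:
            false_positives += 1

    print(f"{false_positives} of {n_drugs} ineffective drugs appeared 'significant' (p < 0.05)")

With roughly a five percent false-positive rate per comparison, a screen of this size will typically flag a few ineffective compounds by chance alone, which is why adequate group sizes and replication matter.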

Could we be doing better? For the past several years, NIH has been engaging in many conversations with its stakeholders to discuss how the research community can collectively address issues of scientific rigor and transparent reporting, with the ultimate goal of producing the highest quality science and preserving public trust in the research process. As you may recall, in the summer of 2014 NIH held a workshop to discuss the issue of reproducibility and rigor of research findings with scientific journal editors. The goal of this meeting was to identify opportunities in the scientific publishing arena to support research that is reproducible, robust, and transparent. Since then, many journal editors have come to a consensus on a set of principles for reporting preclinical research, focusing on rigorous and transparent reporting of study design elements (e.g., statistics, randomization, blinding), data and material sharing, consideration of refutations, and more.

While publishing standards are an important step in the right direction, we at NIH wanted to encourage the research community to think about scientific rigor and transparency throughout the life cycle of the research process, beginning with the planning of research projects. By clarifying NIH’s expectations regarding rigor and transparency, and how we would like to see these elements described in grant applications, we hope to raise community awareness about these critical issues, continuing efforts previously described by Dr. Sally Rockey and Dr. Larry Tabak. Through extensive trans-NIH discussions, we have refined our application instructions, progress reporting, and review language to reflect the following four areas: scientific premise of the proposed research; rigor of experimental design; consideration of relevant biological variables, such as sex; and authentication of biological and chemical resources. We hope that these changes will prompt applicants to consider experimental design elements or variables that they may previously have overlooked or left unreported. Of course, we are very cognizant of concerns about administrative burden for both our applicants and reviewers, and we are implementing these changes in a way that minimizes that burden.

Last week, we published updated application instructions and review questions that apply to research grant and career development applications with due dates of January 25, 2016 and beyond. Specifically, these updates include:

  • New research strategy instructions
  • A separate attachment for you to discuss authentication of key biological and/or chemical resources
  • Additional rigor and transparency questions reviewers will be asked to consider when reviewing applications

We encourage you to read these notices carefully and consider how the updated instructions will change how you think about your project and prepare your grant application. There will also be updated instructions for fellowships and training grants published in the NIH Guide in the coming months.

Will this emphasis on rigor stifle innovation and creativity? We’ve heard concerns echoing this idea, but we don’t think that this should be the case. Innovative ideas should still be grounded in a scientific premise, and innovative science should be designed, executed, and reported in a rigorous and transparent way. Creative approaches to research can be just as rigorous as, or even more rigorous than, current standard practice. One example comes from the world of clinical trials, where randomized registry trials preserve the rigor of randomization and large sample sizes while offering a way to conduct large-scale trials orders of magnitude more efficiently than standard practice.

Our office has made many resources, including extramural staff training material, available on the OER webpage for rigor & reproducibility. We hope you find these resources helpful, and agree that these changes are a step in the right direction to uphold the public’s faith in scientific research.

It is our intent to foster more productive science, and better science, while maintaining enough flexibility for innovation and creativity. After all, nothing in science can be considered useful unless we can trust that it was held to rigorous standards.

4 Comments

  1. Increasing the rigor and reproducibility of research is a commendable objective. Compared to industrial and pharma standards, the academic community has clearly lagged in this regard. However, it is important to realize that increased rigor does not come free. Using experimental animals of both sexes, verifying cell lines and reagents, extensively testing antibodies for specificity, keeping audit-proof records, and other similar measures are important but costly. They will require increases in funding as well as reassessment of measures of productivity. If the changes are effective, they will predictably result in less voluminous, more costly, but more reliable research output. Complicating the calculus further, research is now globalized and there is a danger that “bad money will drive out good”, and that research from places with a looser culture of rigor will drown out work from those who do it the hard way. Finally, even the best of intentions can turn into a bureaucratic diktat. Despite all these reservations, which urge caution, I still believe that this is the right way to go!

  2. While the NIH did not have the luxury of hindsight, its funding practices have virtually guaranteed an outcome of reporting results with the minimum level of replication needed to achieve statistical confidence. That is, the emphasis on novel and cutting-edge research has encouraged a culture of “don’t be right, be first”. Additional stresses from research institutions and employers, related to the rate and volume of publications, have also forced investigators to rush just about any interesting finding to print … the days of one rock-solid publication on a major topic are over in academia, with the minimum reportable unit now the norm for all.
    Given the stresses of NIH-level work, and most academic institutions’ emphasis on extramural support with hefty indirects, I chose to exit biomedical research after more than a decade of reasonable success (~20 publications and ~$3.5M in extramural support as a co-investigator). Our group enjoyed reasonable professional success, but the system of soft money essentially sapped the joy out of scientific inquiry. Currently, I earn my living largely by teaching, with minimal research obligations, which I find to be a much more satisfying and contented lifestyle.

  3. I am grateful to see that the NIH is taking the initiative to push back against the growing problem of irreproducible research. Eliminating shoddy, unreproducible research is essential both for the scientific community to work effectively and for the broader public to retain their trust in scientists. I also applaud the NIH for reaching out to scientific journal editors to cooperatively craft principles that guide researchers toward producing more rigorous research. Journals, funding agencies, and researchers have all played a role in creating the current reproducibility problems, and all will have to work together to create a solution.
    However, I also want to echo the T(h)omases’ comments above: there are still strong incentives at work that drive scientists away from rigor and toward speedy publication. To truly reform the system, we will have to eliminate these incentives or overpower them with even stronger incentives to publish rigorous science. For the changes the NIH is proposing to be effective, the proposed new evaluations of rigor would need to act not just as a hoop for grant applicants to jump through; they would have to be heavily weighted in the grant’s score. If the rigor of the proposed experiments becomes the critical difference between a grant being funded and not funded, scientists will sit up and take notice.
    I would also expand Thomas’s phrase “don’t be right, be first” to “don’t be right, be first and be interesting.” Until publishing something that is true but boring is valued, experimenters will operate under a bias towards “positive” results, and the reproducibility problem will remain. The research community should therefore come up with new ways to infuse publication of negative results with value.
    Finally, I want to point out that many graduate programs do not have classes that explicitly teach experimental design. Often, students are left to glean what constitutes solid research by assessing others’ work in journal clubs, and they receive no formal instruction in the principles of experimental design. Therefore, another approach the NIH could take to addressing this problem would be to require that the fundamentals of experimental design be taught to trainees in their mandatory ethics classes, as the two topics are so intimately linked.

  4. I applaud NIH’s effort in reaching out to scientific journal editors to cooperatively craft principles that guide researchers toward producing more rigorous research.
