Enhancing Reproducibility in NIH-supported Research through Rigor and Transparency

Dr. Larry Tabak is the Principal Deputy Director of NIH.

Nothing could be more important to our enterprise than research rigor, ensuring that the results of our work are reproducible. Our conversation with you on this topic began early last year with a commentary in Nature by Francis Collins and today’s guest blogger, Larry Tabak, on the importance of reproducibility and how NIH plans to enhance it. As described in a follow-up Rock Talk post, the topic of reproducibility is not new. Evidence has shown that too many biomedical research publications are irreproducible. This topic therefore demanded our community’s immediate attention, and over the last 18 months we have continued the dialog with you, and benefited from your participation, as we described the issue, requested information, launched pilots, and crafted a way forward to enhance reproducibility.

Since that January 2014 Nature commentary, NIH has begun to address reproducibility from a number of different angles. In 2014, NIH worked alongside journal editors to develop a set of common principles to guide how research results are reported. In 2015, NIH published a series of videos as a resource intended to stimulate conversation in courses on experimental design. In addition to these efforts, NIH’s Office of Research on Women’s Health has led the discussion of sex as an important biological variable that should be considered in designing experiments and reporting results. They have posted a variety of resources on their website, including outcomes of an October 2014 workshop where research experts discussed ways to strengthen scientific design by incorporating sex as a biological variable. The problem of misidentified cell lines also has an impact on research reproducibility, and a December 2014 article by NIH leadership focused on ways NIH can help catalyze improvements in cell line authenticity. A number of NIH institutes and offices have completed, or are embarking on, other projects that contribute to the goal of improving rigor in the design and methods used in research (see PAR-13-383, RFA-DE-15-004, PAR-14-267, NOT-NS-11-023, NOT-MH-14-004, NOT-DA-14-007). Many other NIH efforts also support the goals of reproducibility, including data sharing, public access, PubMed Commons, and more.

All of these activities are culminating in changes to the grant application instructions and peer review criteria that we plan to put in place for applications submitted in January 2016 and beyond. As announced in the NIH Guide today (NOT-OD-15-103), we are clarifying our long-standing expectations regarding the importance of rigor in scientific research. In this notice, we outline our expectations for the scientific community about describing the rigor of the research proposed in grant applications, with additions to the review criteria used to evaluate proposals. These changes will prompt applicants and reviewers to consider issues that, if ignored, may impede the transparency needed to reproduce key results and thereby slow scientific progress. Included in our clarifications about rigor is our expectation that scientists address sex, among other biological variables, in order to be transparent about how this fundamental variable shapes biological processes and outcomes critical to health (see NOT-OD-15-102). And finally, we ask you to tell us how key biological and chemical resources are authenticated to ensure their identity and validity.

These additions to application and review practices address the beginning of the research process and will promote greater transparency in the work we do and the research NIH funds. They complement the journal requirements for publication at the other end of the process. It is this strong foundation on which research going forward relies, and we are confident that these changes will be embraced as an important step in lifting the entire research enterprise to even greater heights.

8 Comments

  1. This issue also was addressed for human biospecimens by the NCI’s Office of Biorepositories and Biospecimen Research:

    Moore HM, Kelly AB, Jewell SD, McShane LM, Clark DP, Greenspan R, Hayes DF, Hainaut P, Kim P, Mansfield E, Potapova O, Riegman P, Rubinstein Y, Seijo E, Somiari S, Watson P, Weier HU, Zhu C, Vaught J. Biospecimen reporting for improved study quality (BRISQ). J Proteome Res. 2011 Aug 5;10(8):3429-38. doi: 10.1021/pr200021n. Epub 2011 Jun 21. PubMed PMID: 21574648; PubMed Central PMCID: PMC3169291.

    http://www.ncbi.nlm.nih.gov/pubmed/21574648

  2. These steps are definitely a move in the right direction; we need to encourage this kind of transparency across the healthcare sector. The grant application process in itself is not that complex. Applicants need to scrutinise the accuracy of the documentation provided, and any financial feasibility documentation will only help to solidify your application. It is also imperative that your proposal benefits the masses in a positive manner.

  3. Suggestion:
    As I see it, a big problem is that science was meant to be “self-correcting,” but has changed such that the structure only rewards novelty. So, knowing full well that most investigators will say they are too busy for this: What if, along with preliminary data, there were a requirement for each researcher to show data that reproduces an experiment germane to the proposed work? In this way, each scientist would be motivated to contribute to self-correcting science. And, realistically, it is something each scientist should be doing: reproducing work before extending it. So, to the “too busy” objection, I would say that nobody should be too busy for due diligence. It could also be branded as NIH fiscal responsibility to ensure that only projects based on sound experiments are funded.

  4. In my view, the NIH is avoiding an important issue in regard to data reproducibility – providing sufficient funding to permit experiments to be performed properly. Consider different types of genomics experiments such as microarray experiments, RNASeq experiments and many other types of complex experiments that are being performed in many labs. Do the grants that support these projects provide sufficient funding to allow investigators to perform sufficient numbers of biological and technical replicates as well as sufficient variation in experimental conditions to ensure robust and reproducible results?

  5. I believe that there is a fundamental problem with published false-positive results that is not well addressed by focusing on increased rigor of experimental methods. Not all variation in results is subject to controllable experimental conditions, and some uncertainty is inherent in all scientific work. NIH review guidelines stress innovation, but grant-panel reviewers seem to have difficulty distinguishing innovation from novelty. The main goal of prioritizing research with potential for substantial contribution is that the work will move a field forward. That is the crux of innovation, not whether or not the technology or its application is novel. In many cases, grant reviewers are not expert on both the disease and the conditions under study, and thus do not have adequate knowledge from which to judge the degree to which proposed research will indeed move a field forward. Most of science progresses by incremental improvements. If reviewers consider that boring or lacking in innovation, it does a disservice to scientific growth. In today’s review climate, no second study of anything is likely to get funded. Confirmatory studies of important but uncertain results are vital. As long as reviewers adhere to the cult of novelty, the scientific literature will not self-correct false-positive results.

  6. Novelty is overemphasized by reviewers. This means that investigators are discouraged or prevented from repeating various studies, even in different ways, which may compromise quality.

    Also, it is well known that in some cases, drugs, chemicals, reagents, assay kits and other products that are supplied commercially do not reach the high standard and specificity advertised by the suppliers.
    So, I am curious to know what principles and what level of regulation govern the ability of companies, especially ‘start-ups’, to make public claims about the specificity of their products. It may be too stringent to hold reagent suppliers to the standard required of drug companies, but it would be good for investigators to clearly identify the suppliers and the product numbers or codes in their experimental procedures, which may allow discrepancies in reproducing studies to be traced to a reagent, for example.
    We have had the experience of obtaining a reagent from a company that was different from what was shown on the label, a radio-labeled molecule from a different supplier that deteriorated much faster than the supplier claimed, and antibodies from other suppliers that did not react as claimed and/or showed higher levels of cross-reactivity than claimed by the supplier.

  7. A major reason for lack of reproducibility is that the research results that get reported need not be representative of what really happened. The incentive for “publishable” results, combined with the fact that the NIH does not require pre-registration of studies and their methods makes it all too easy, if not inviting, to “torture the data until they confess”. If the data are so “negative” that they resist such “torture”, there is no requirement to report that ugly fact—instead the study can be repeated, with different methods and analytic techniques, until a “publishable” result is finally obtained. Thus results obtained using the originally planned methods need never see the light of day, and the next investigator, having no idea how hard it was to get them, wonders why he or she cannot reproduce them. Unfortunately this issue of reporting bias, though well documented in clinical trials and more recently in basic science, seems to have gotten relatively little attention in the current effort to increase reproducibility.

  8. My suggestion, after recently applying for a DOD grant that did not allow any references to the identity of the applicant or the associated institutions, is that NIH adopt a similar approach; it is likely to reduce compromise of the grant review process. The section demonstrating the applicant’s past research and its relevance to the current application would be separate and reviewed by NIH. This would reduce the continual funding of researchers who have been funded by NIH but have not succeeded in translating their findings as expected, yet continue to receive funding merely because they publish something.
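
The “torture the data until they confess” concern raised in comment 7 above can be made concrete with a short simulation. The sketch below is illustrative only, not part of any NIH guidance or a commenter’s submission; it assumes Python with numpy and scipy, and the sample sizes and analysis variants are hypothetical. It compares the false-positive rate of a single pre-registered analysis with the rate obtained when only the “best” of several reasonable-looking analyses is reported.

```python
# Hypothetical simulation of analytic flexibility: no true effect exists,
# but reporting the best of several analyses inflates the false-positive rate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 2000   # number of simulated null studies
n_per_group = 10       # hypothetical sample size per group

preregistered_hits = 0  # significant under the single planned test
flexible_hits = 0       # significant under at least one analysis variant

for _ in range(n_experiments):
    a = rng.normal(0.0, 1.0, n_per_group)
    b = rng.normal(0.0, 1.0, n_per_group)  # same distribution, so any "effect" is noise

    p_planned = stats.ttest_ind(a, b).pvalue                # the planned t-test
    p_nonparam = stats.mannwhitneyu(a, b).pvalue            # "try a nonparametric test"
    p_trimmed = stats.ttest_ind(np.sort(a)[1:-1],
                                np.sort(b)[1:-1]).pvalue    # "drop the extreme values"

    preregistered_hits += p_planned < 0.05
    flexible_hits += min(p_planned, p_nonparam, p_trimmed) < 0.05

print(f"False-positive rate, pre-registered analysis: {preregistered_hits / n_experiments:.3f}")
print(f"False-positive rate, best of three analyses:  {flexible_hits / n_experiments:.3f}")
```

Under these assumptions, the single planned test stays near the nominal 5% level, while reporting the best of three analyses pushes the false-positive rate well above it, which is the mechanism behind the unreproducible “positive” findings the commenter describes.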
