Scientific Rigor in NIH Grant Applications

In part two of our series on rigor and transparency in research grant and career development award applications, we focus on scientific rigor: the strict application of the scientific method to ensure robust and unbiased experimental design, methodology, analysis, interpretation, and reporting of results.

We can all agree that attention to scientific rigor is important. But how can we be sure a rigorous experiment was performed? In published papers, full transparency in reporting experimental details is crucial for others to assess, reproduce, and extend the findings. Likewise, in grant applications, full transparency is necessary for reviewers to properly assess the proposed studies.

Therefore, updated instructions for the Approach section of the Research Strategy clarify this expectation: applicants should emphasize how the proposed experimental design and methods will achieve robust and unbiased results. Well-controlled experiments produce robust results that others can reproduce using the reported experimental details. A robust approach might include appropriate statistical methods, prospective sample size estimation, replicates, or standards (for example, reference reagents or data standards). Robust and credible results are those obtained with methods specifically designed to avoid bias, such as blinding, randomization, and prospectively defined exclusion/inclusion criteria, to name a few.
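To make "prospective sample size estimation" concrete, here is a minimal sketch of a standard two-sample power calculation using the normal approximation. The function name, effect size, and error rates below are illustrative assumptions, not part of the NIH instructions:

```python
# Hedged sketch: approximate sample size per group for a two-sided,
# two-sample comparison of means, via the normal-approximation formula.
# Effect size, alpha, and power are illustrative choices only.
from math import ceil
from scipy.stats import norm

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate n per group for a two-sided two-sample t-test."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for the two-sided test
    z_beta = norm.ppf(power)            # quantile corresponding to desired power
    n = 2 * (z_alpha + z_beta) ** 2 / effect_size ** 2
    return ceil(n)

# A medium standardized effect (Cohen's d = 0.5) at alpha = 0.05 and 80% power:
print(n_per_group(0.5))  # → 63 per group
```

Stating such a calculation prospectively, rather than justifying the sample size after the fact, is one way an application can document the kind of rigor described above.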

It is important to keep in mind that each scientific field may have its own set of best practices or standards to achieve scientific rigor. Reviewers are well-positioned to identify strengths or weaknesses of the proposed plans. Applicants are encouraged to include a succinct description of the experimental design and methods with enough detail to assure the reviewers that the necessary elements of rigor will be addressed.

NIH strives to fund the best, most rigorous science. NIH encourages professional organizations to engage in these discussions with their respective communities. For example, the Federation of American Societies for Experimental Biology (FASEB) recently published a set of recommendations that arose from a series of community dialogues on antibodies and mouse models as tools for basic research. Attention to scientific rigor will ensure we are all creating solid foundations on which future research can build.

Coming up in my next blog, consideration of biological variables in NIH grant applications.

For additional resources, see the OER website on NIH efforts to enhance reproducibility through rigor and transparency: http://grants.nih.gov/reproducibility/index.htm

9 Comments

  1. The explicit demand for rigor virtually guarantees loss of creativity. What should Einstein have said in his application? Statistics is inherently a subjective affair. If one is using ANOVA rigorously, then the error distributions must be shown to be Gaussian if the numbers are to have any meaning. It is worse not to check that distribution and still claim that ANOVA applies. Some people like to choose 1% significance, others 5%, etc. Totally subjective. If you can’t see the result by eye, then the statistics is meaningless.

    As Asimov pointed out, discovery doesn’t come from rigor.
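As a concrete illustration of the normality check this commenter describes, here is a minimal sketch that runs a one-way ANOVA and then tests the residuals with Shapiro-Wilk; the data are simulated and purely illustrative:

```python
# Hedged sketch: before trusting a one-way ANOVA p-value, check whether
# the residuals are roughly Gaussian, as the commenter suggests.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Three hypothetical treatment groups (simulated, illustrative data only).
groups = [rng.normal(loc=mu, scale=1.0, size=30) for mu in (0.0, 0.5, 1.0)]

f_stat, p_anova = stats.f_oneway(*groups)

# Residuals = each observation minus its own group mean.
residuals = np.concatenate([g - g.mean() for g in groups])
w_stat, p_normal = stats.shapiro(residuals)

print(f"ANOVA p = {p_anova:.4f}, Shapiro-Wilk p = {p_normal:.4f}")
if p_normal < 0.05:
    print("Residuals deviate from normality; the ANOVA p-value is suspect.")
```

If the Shapiro-Wilk test rejects normality, a rank-based alternative such as `stats.kruskal` may be more defensible than ANOVA.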

  2. Broadly applied, Heisenberg’s uncertainty principle suggests that too much “rigor”, often another name for bureaucracy, kills creativity and productivity while increasing the cost of research and deterring talented young people from research.

  3. To second Dr. Sachs, science seems to progress from discovery, which remarkably often stems from unexpected observations, often made during treatments that lack any rigor beyond clinical anecdote. Every major class of psychiatric medication is based on serendipitous observation or erroneous theory. Validating discovery calls for rigor. The aridity of psychotropic drug development since 1970 is not due to lack of rigor. It is likely due to the grandiose belief that therapeutics derives from basic research, rather than the historically shown reverse. Bench to bedside has proven a poor guide. NIMH, emphasizing RDoC as a guide to future therapeutics, simply has no foundation except wishful thinking. If you are in a deep hole, continuing to dig down is an unlikely winning strategy.

  4. As a reviewer, I already comment on “scientific rigor”; I do not need a subheading labeled “Scientific Rigor” with things that will obviously be written to please my eyes. The real issue comes down to the submitted body of work and who is reviewing said work.

  5. Scientific rigor should be a given. Asking us now to add specific language without altering the page limit is asking for shoddy grantsmanship, where critical preliminary data or important experiments must be excluded for lack of space. I worked for the NIH for a decade, and this was a consistent pattern: make more rules to enforce rules that already exist. Scientific rigor is a cornerstone of good science and should not have to be a soundbite in a grant. And if you are going to force us to abide by this rule, at least give us a page or two extra in which to do it, so that a) it’s done properly and b) I don’t have to forgo critical information in order to jump through yet another hoop. Bureaucracy like this will kill creative science everywhere, and seems antithetical to initiatives like the moonshot program that call for creative ideas.

    1. 100% agree with the above comment. With limited pages (12 pages), it is very hard to write a decent application with all the new rules in place. Fourteen pages for the research strategy looks better, as the VA has adopted this page limit. In particular, most reviewers want all the supportive data included in the application. Appendices are not allowed, and some reviewers may not take the extra step to go through the referenced published work.
      Everybody will include the details requested by the NIH, maybe with extra work. But most helpful would be for the NIH to ask the journals publishing NIH-funded research to adopt a standard of scientific rigor.

  6. I have seen the term ‘Scientific Rigor’ used as a means for reviewers to negate a grant application without having to critique the ideas and implications of the application. It empowers the reviewer to silence an applicant without evaluating the hypotheses and the data that support them.

    Typical statements include ‘There is no provision for scientific rigor or reproducibility’.
    These statements are nonsensical when juxtaposed with ‘Most of the preliminary data is already published’, which is also often used as a negating criticism.

    By stating that some of the preliminary data has already been published and that other data lack scientific rigor, the reviewer implies that although the data and science have been validated by peer scientists, other data from the same investigator on the same subject are not reliable. I googled the term ‘scientific rigor NIH’ and was directed to this website. I will now simply copy and paste the stock phrases above into my application.

    1. I totally agree with you, and I have experienced it myself: the same vague comments that undermine the preliminary data and peer-reviewed published work. I believe the new “Scientific Rigor” criterion makes it easier for reviewers to criticize a grant without solid comments, simply by stating a lack of scientific rigor! This is not helpful to applicants, especially when the reviewers have contradictory comments about rigor and reproducibility!

  7. Pingback: What We Talk About When We Talk About Reproducibility – UC3 CDL
