Open Mike

Helping connect you with the NIH perspective, and helping connect us with yours

Continuing to Clarify the NIH Definition of a Clinical Trial

A few weeks ago we released some case studies and FAQs to help clarify for our research community whether their human subjects research study meets the NIH definition of a clinical trial. These resources prompted a number of follow-on questions and suggestions from the community that have helped us refine both the FAQs and the case studies. We are grateful for your thoughtful and constructive comments, many of which we have incorporated into our revised documents and communications.

In addition to providing further rationale for our conclusions in the case studies, we made a number of changes, including clarifying: what it means to be “prospectively assigned”; what we consider to be a “health-related biomedical and behavioral outcome”; how to classify “ancillary studies”; in what circumstances we would consider a mechanistic study to be a clinical trial; the use of surveys, questionnaires, and user preferences; and more.

One of the key clarifications is the distinction between an observational study and an interventional study. There was a lot of very productive discussion around case study 18, which resulted in our breaking that one case study into six variations on a theme. The new case studies 18a-f should help the community understand the nuances of when a measurement is just a measurement, and when a measurement tool or task is considered an intervention.

The case studies and FAQs are living documents. We fully expect them to evolve as we work together to think through various scenarios. Unsure whether your human subjects study meets the NIH definition of a clinical trial? Ask the NIH program official (scientific contact) listed in the funding opportunity announcement or on NIH’s website who is responsible for your area of research.

It is important that we get this right. We have an ethical mandate to assure the public that the results of all NIH-funded trials will be made available in a timely manner. We know that over half of all completed NIH-funded trials are not reported out within 2.5 years of completion; the problem is widespread and pervasive. This is an unacceptable state of affairs; reporting results should not be optional. We look forward to continuing to work with you as we move towards higher levels of trust and transparency.


23 thoughts on “Continuing to Clarify the NIH Definition of a Clinical Trial”

  1. Dear Mike,

    I hope all is well and that you’re enjoying the new change of season in the Northeast. I have been an avid reader of your blog for a while now and have found it to be data-driven, evidence-based, and very informative. I now recommend your excellent posts to all of my students and trainees in the biomedical sciences. One post that struck a chord, however, was the one advocating for the GSI: you were brilliantly persuasive, with a meticulous assembly of hard facts, analytical insights, and ethical fortitude in support of that policy. As a diligent researcher who has done work in experimental, clinical, and epidemiological settings sponsored by the NIH, private foundations, and industry over two decades, I fully endorse the original GSI policy. My experience reaffirms the value of diversity, inclusiveness, and youth in any scientific inquiry. There is no doubt that the current winner-take-all federal funding environment is fundamentally inconsistent with the ethical mandate implied by the public nature of its support. Needless to say, I was disappointed to read Dr. Collins’ statement in early June announcing “a bold, more focused approach to bolster support to early- and mid-career investigators”: simply put, abolishing the GSI policy! I wonder what happened behind the curtain of blue-ribbon meetings!

    I have thought about these issues over the summer and decided to join the petition to reinstate the GSI program. The current NIH funding mechanism weakens the creative base for “young” investigators no matter how talented they are. It is high time to make the necessary repairs to the NIH funding foundation so as to retain America’s leadership of the global scientific enterprise and in institutions of higher learning.

    To paraphrase Einstein: nothing is infinite, except perhaps the universe and human stupidity; and I’m not sure about the universe. Capping NIH funding for individual investigators is a first step in the right direction.

    Yours Sincerely,

    Simin Liu, MD, ScD
    Professor and Director
    Center for Global Cardiometabolic Health (CGCH)
    Brown University

  2. Case #18 is still unworkably problematic. For example, there is no meaningful difference between #18b and #18c, other than perhaps the intent of the researcher.

    That is, every so-called “standard cognitive task” used to assess “brain activity under standardized laboratory conditions” (#18b) incorporates comparison among two or more conditions that vary with respect to some parameter that will “enhance or interfere with cognitive performance” (#18c). So, for instance, in a standard memory task, subjects deliberately encode items in one condition (Condition A) and only incidentally encode items in another condition (Condition B). It is only by comparing activity between Condition A and Condition B that researchers can meaningfully identify fMRI signals of the variable of interest (memory), and it is therefore this comparison of the two conditions that is fundamental to a “standard cognitive task” that is not a clinical trial based on case #18b. However, Condition A and Condition B necessarily “enhance or interfere with cognitive performance”, given that memory was better in Condition A than in Condition B, and so this is now a clinical trial based on case #18c. Every meaningful “standard cognitive task” used to measure cognition relies on the comparison among two or more conditions that are designed to enhance or interfere with the cognitive process of interest, and so the distinction between case #18b and #18c is a pure fabrication.

    The only possible difference between #18b and #18c as written is the hypothetical intent of the researcher. That is, for case #18c, the goal of the researchers seems to be to test whether the comparison of Condition A and Condition B PRODUCES a change in cognitive performance and its corresponding brain function (e.g., fMRI), whereas for case #18b the goal of the researchers seems to be to USE the comparison of Condition A and Condition B to IDENTIFY the change in cognitive performance and its corresponding brain function (fMRI). In both cases, exactly the same conditions are used (A and B) and exactly the same outcomes are used (behavior and/or fMRI), but the “goal” of the study is different for one case versus the other. This policy will therefore simply yield confusion among anyone tasked with enforcing it, and classification of research studies as clinical trials versus not will fall on the whims of various NIH officials and the savviness of the researcher in describing the “intent” of their study.

    It is simply not possible to simultaneously say that all studies using “standard cognitive tasks” are not clinical trials while at the same time saying that all studies using “two conditions to enhance or interfere with cognitive performance” are clinical trials because all standard cognitive tasks use two conditions to enhance or interfere with cognitive performance. The three options are: (1) classifying all cognitive experiments as clinical trials (which is a horrible idea), (2) keeping the horrible paradox of #18b versus #18c, which will be solved arbitrarily by NIH officials and researchers (also a horrible idea), or (3) classifying all cognitive experiments as NOT clinical trials, including what you now mistakenly have classified as #18c.

    Case #18e has a very similar problem, in the sense that it still follows the same exact logic; i.e., two conditions are compared that vary with respect to a cognitive/brain process and the effect is measured (which, according to #18b is not a clinical trial and according to #18c is a clinical trial). For instance, if I care about “brain activity in the memory area” and I think that exposure to this magnetic field will alter brain activity in the memory area, then it is unclear why this is any different than case #18b, which is not a clinical trial. That is, if a “standard cognitive task” always involves comparison between two conditions that differentially affect the cognitive process or brain area of interest, why the heck does it matter what those conditions are? Here, Condition A is the magnetic field and Condition B is the lack of magnetic field, and that is no different than ANY experimental manipulation that would similarly alter brain activity in the memory area (e.g., if Condition A were intentional studying and Condition B were incidental studying, which is not a clinical trial according to #18b). However, if my stated goal is to “determine if exposure to the magnetic field impacts the memory area”, then this IS a clinical trial, just as #18c would be a clinical trial because the goal was to evaluate the intervention itself. In other words, just like for #18b versus #18c, the classification of #18e seems to depend on whether the “intervention” is an integral part of the assessment of a cognitive or neural variable of interest (just like #18b, not a clinical trial) or whether the goal of the study is to “evaluate the intervention” (just like #18c, a clinical trial). Either way, someone could do the same exact experiment, and it could be classified in two different ways depending on the supposed intent.

    I see no way around this conundrum unless the NIH wants to list every specific instrument that counts as a “standard cognitive task” falling under case #18b. This is probably a horrible idea that would entirely destroy the concept of basic neuroscience research with humans (i.e., tasks can’t be pre-defined, because new tasks are fundamental to progress in the field).

    Perhaps an easier solution would be to have researchers specify which of their experiments they will pre-register and make completion of these pre-registered experiments a condition of the review process for grant renewal? Then you don’t have to worry about trying to define things you can’t possibly define in a logical and consistent way. Also, if pre-registration is the goal and part of the review process, then there is no way for a savvy researcher to argue out of the requirement by being good at convincing an NIH official that their study doesn’t count as a clinical trial. Trying to make arbitrary guidelines for clinical trials seems like an indirect and ineffective way to force pre-registration.

  3. Dear Mike,

    I appreciate the responsiveness to some of the comments to date, and in particular the clarifications of case study 18. However, there is a concern that essentially all research regarding learning now falls under the definition of a clinical trial, which I fear could be devastating to US research on this topic.

    For example, in my research I typically present participants with simple visual stimuli and examine how this experience influences their perception of later-presented stimuli. What we find in this research is that people quickly pick up the statistics of the stimuli and the task structure of any activity they participate in. This learning, sometimes called statistical learning, or more recently much discussed under the framework of serial dependence, essentially shows that any experience in a study will induce some behavioral change. As such, the definition of a clinical trial has a major problem: researchers who decide to examine these changes have to register their research as a clinical trial, whereas researchers conducting the identical study but choosing to ignore these effects are not conducting a clinical trial.

    For example, take a study where I measure someone’s threshold for discriminating the orientation offset of a line. If I do this using a few hundred trials where different reference angles are intermixed, then I can use this to estimate an attribute of someone’s visual abilities on this task (not a clinical trial). However, if I realize that performance on a particular reference angle depends upon the previous trial and decide to reanalyze the data to account for this, I am now studying how behavior changes in relation to a prospectively assigned intervention (and suddenly this experiment becomes a clinical trial merely because I decided to account more carefully for trial-order effects in the data). Likewise, if I do this for multiple days, then I can get an even better measure of people’s performance; however, if I decide to ask whether they improved on the task during this period (e.g., perceptual learning), then this is now a clinical trial.

    This makes no sense and seems that it will have the effect of encouraging researchers to avoid studying these effects.

    As I wrote before, I think that it is very important to make results of research studies available to the public. However, it seems bizarre and dangerous to single out particular areas of research that now have to meet a huge regulatory burden when this research is in many cases indistinguishable from other research that uses the same methods in the same participants but analyzes the data while ignoring factors that exist within it. I hope that the NIH can take a serious look at how these new rules will impact different research areas and consider that not all behavioral changes should be considered within the same framework. For example, by reading this letter, you have experienced a prospectively assigned experiment designed to give rise to a behavioral change (in this case, consideration of a change of perspective that will lead to a change in policy). Should I have registered this letter as a clinical trial?

    I appreciate your consideration.

    Best,

    Aaron Seitz, Ph.D.
    Professor of Psychology
    Director of UCR Brain Game Center
    University of California, Riverside
    900 University Ave.
    Riverside, CA 92521

  4. Dr. Lauer,

    I would like to thank you for taking the feedback to heart and for making these adjustments. There certainly were a number of studies (one of mine included) that were in a gray area before and that now clearly fall under the definition of a clinical trial. There are other forms of “intervention” than drugs or devices and bringing these clearly under the umbrella of a clinical trial is what I saw as one of the goals of the rework of the definition in 2014. I wholly understand the efforts here.

    As many of us voiced, the initial round posted recently had issues that broadened the scope considerably and did not seem consistent with those goals. That we could have an informed discussion, provide feedback, and have your office work with researchers to address the concerns is fantastic. It’s so easy to conclude these days that open discussion on matters people disagree about is a lost cause and that workable solutions accommodating the central concerns of both sides are a mythical concept.

    I suppose what I’m trying to say is – thank you.

  5. Dear Dr. Lauer,
    The fact that results of many NIH-funded clinical trials are not reported properly 2.5 years out is indeed a poor state of affairs. However, I just cannot understand how the issue of reporting on actual clinical trials would be in any way improved by mislabelling thousands of basic research studies as clinical trials. I fail to follow the logic here, and I have not seen a single plausible argument from inside or outside of the NIH to support this. Similarly, while I really appreciate the fact that the NIH is willing to refine these “case studies” based on our feedback, the expansion of case 18 has arguably increased rather than reduced confusion (for all of the reasons very nicely spelled out by Joel Voss above). Perhaps a key addition to your definitions could really clarify this issue, and that would be to elaborate just a little bit more on the notion of “to enhance or interfere with cognitive performance”: the key variable here that (to my mind) would separate a basic research study from anything that could possibly be construed as a “trial” is the intended duration of the effects of the experimental manipulation. So if you were to say “an intervention that aims to enhance or interfere with cognitive performance with health-relevant effects lasting beyond the duration of the experiment itself”, then we would be approaching something that makes sense. Basic research studies in human subjects (e.g., behavior, fMRI, EEG, etc.) manipulate variables and measure outcomes in the course of an experiment, but the effects of the manipulations are neither intended nor expected to change anything about how the participant will go about their day afterwards, beyond being able to remember the experiment. That is clearly not the case for any genuine clinical trial, be it of a pharmaceutical or a cognitive training regime. I sincerely hope that we can all get this right together, because the stakes are very high. Thanks for listening,
    Tobias Egner

  6. Dr. Lauer–

    No one will dispute your team’s finding that nearly 50% of human trials failed to report findings; this clearly needs to be addressed. However, I’m not convinced the expansion of the clinical trials definition, and the extra burden associated with the new forms beginning January 2018, is the solution. There seem to be other approaches that could address this without adding substantial burden to basic scientists. For example, why not consider putting more weight on the Investigator criteria in peer review? The Canadian system tends to weigh investigator productivity more heavily, and some of the new NIH mechanisms are moving in this direction. Right now, reviewers are only asked to comment on progress of existing projects IF the application is a “renewal”. This means that for any “new” application, reviewers often do not know, or worse yet “look the other way”, whether a prior funded project did or did not result in publications. This seems like a potential first step, rather than expanding the clinical trials definition so widely as to capture key small-scale intervention work that bridges basic science and true clinical trials. The 21st Century Cures Act is about transparency, but it is also about reducing administrative and faculty burden. There is a balance, and my sense is that many in the scientific community feel the pendulum swung too dramatically and too fast. You’ve got our attention; can we now have a constructive dialog about how to address this question without undue burden to what many consider basic clinical studies that are so critical to informing large, true clinical trials?

    Also, I would kindly point to the clarification that NCCIH issued this past spring, which made a lot of sense to many basic scientists conducting intervention studies with a primary outcome that is mechanistic in nature. They clarified:

    “NCCIH recognizes a difference between clinical trials that are designed to answer specific questions about the clinical effect of interventions and mechanistic studies that have the primary goal of understanding how an intervention works.

    A mechanistic study is defined as one designed to understand the mechanism of action of an intervention, a biological process, or the pathophysiology of a disease.

    A clinical outcome study is defined as one with the objective of determining the clinical safety, tolerability, feasibility, efficacy and/or effectiveness of pharmacologic, non-pharmacologic, behavioral, biologic, surgical, or device (invasive or non-invasive) interventions.

    NCCIH will continue to accept R01 applications via PA-16-160 that propose a study with human participants when the primary outcome/endpoint is explicitly mechanistic, rather than a clinical outcome, even if the mechanistic study meets the NIH definition of a clinical trial.”

    This clarification from NCCIH was one of the best I saw, and made a ton of sense. The additional guidance from NIH since that time, and some of these case studies (including case study #18 and its expansion), have led to the confusion we are seeing on this blog and others. I hope NIH will consider scaling back the NIH clinical trials definition in a way that is more consistent with the NCCIH clarification (i.e., there is a difference between a mechanistic study and a clinical outcome study).

    Respectfully,
    Jason R. Carter
    Michigan Tech University

  7. Dear Dr. Lauer,
    I have seen very few details regarding the implications of these new requirements for dissemination and implementation (D&I) research. The current NIH template for a clinical protocol is not a great fit for D&I research.
    Is there any plan to develop a more appropriate template for this type of research?
    Is there a plan to train study sections to distinguish applicable and non-applicable sections of clinical protocols when reviewing D&I proposals?
    I also worry about the added burden associated with using the current NIH template. It is very long and clunky. Potentially, use of this template could mean that an R21 grant that contains a 6-page research strategy could have a 100+ page clinical protocol. This burden could hinder development of pilot proposals, particularly among early investigators who lack experience and research infrastructure. As a member of a standing section, I also wonder about the added time required to carefully review these extensive appendices.
    Has NIH conducted any internal research/simulations to determine:
    1) Whether these requirements will unduly deter submissions?
    2) What is the estimated burden on study sections, i.e. added time to review these proposals?

  8. Dear Dr. Lauer,
    I agree with the prior comments that case 18c reveals critical flaws in the proposed policy. And I second Dr. Egner’s point that the definition of “enhance or interfere” is critical here. The classification of 18c as a clinical trial implies that simply observing performance under various conditions constitutes “enhancement and interference”. But it would be more accurate to say that most experiments are only *measuring* performance in different conditions, not changing or intervening in the participant’s cognitive abilities. If I measure memory performance with a long versus a short list, performance will be worse in the long-list condition; but this experiment would not be a true intervention because there would be no expectation of any change in the person’s memory ability. Thus, to lump all such studies in with real clinical trials will only over-burden the system that is needed to oversee clinical trials and hamstring basic research with miles of red tape that will undermine the NIH goal of fostering scientific advancement.

    You emphasize the importance of pre-registration, but there are much more effective and expedient ways to promote rigor and reproducibility in the domain of basic science. Because the clinical trial infrastructure is not designed to handle both clinical trials and basic research, lumping all those studies together will only undermine them both.
    Ed Awh

  9. I appreciate that NIH leadership has been open to commentary from extramural scientists. I support the importance of disseminating and sharing clinical trial results but I do not see how broadening the definition will help solve the current problem (FAQ 1). The poor reporting of clinical trial results is a problem identified for research funded & registered under the current definition of a clinical trial. Why not focus on solving this problem first, rather than broadening the definition to include more research under the clinical trials umbrella? Second, while I understand a moral imperative to make clinical trial results public, and more generally for taxpayers to see their NIH tax dollars translate into research productivity, I do not agree that the public will be served by swamping clinicaltrials.gov with research that is largely irrelevant with respect to meaningful (rather than measurable) health-related endpoints.

  10. Little attention has yet been given to Case 9. By the logic used to qualify this study as a clinical trial, it would seem that any experimental study involving a manipulation that induces a physiologic change would qualify as a clinical trial, if this physiologic change serves as an outcome measure. Consider, for instance, a study in which subjects are shown images of fearful vs. neutral facial expressions, with the purpose of testing whether the different kinds of images elicit different stress responses (as measured by skin conductance response, change in heart rate, BOLD response in the amygdala, etc.).

  11. The new set of variants of Case 18 represents a valiant effort to clarify the distinction between what is and is not a “clinical trial”. This is interesting and tricky work and I don’t claim to have worked through all the implications. However, I do not think that the new cases solve the fundamental problems with this approach to the issue of registering and reporting human subject research. I will illustrate the problem by using Cases 18a and 18c to show that the same experiment can be a clinical trial under one case and not under the other.

    Before I do that, however, let me make it clear that I (and, I think, most of my colleagues) are NOT opposed to registering and reporting basic behavioral research. We are concerned with a set of unintended consequences, discussed elsewhere, of trying to force the basic science peg into the clinical trials hole.

    Turning to Cases 18a and 18c: Here is a sample experiment for purposes of illustration. We will use working memory because that is the example used in 18 a and c. I will show my observers four colored squares. I will cover them up and then ask the observers to specify the color of one of those squares from memory. On some trials, I will ask the observers to count backwards by 3s during the working memory task. So, I have two tasks or conditions: working memory in isolation and working memory with a secondary task. A reasonable guess would be that the secondary task will interfere with working memory.

    First, you might ask yourself if you want this experiment classified as a clinical trial.

    Under Case 18a, this is not a clinical trial because “The purpose of administering these measures is not to modify a health-related outcome.” I am doing two tests and looking at the answers. 18a imagines a study with “various cognitive performance measures (e.g., working memory tasks)”. We have two working memory tasks. This is not a clinical trial.

    Under 18c, the same experiment is a clinical trial because it is designed “to enhance or interfere with cognitive performance”. The differences between the conditions “will alter cognitive task performance and associated brain activity.” Indeed, Case 18c would classify as a clinical trial any cognitive experiment with an independent variable (unless, I suppose, it produced a perfect null result and, thus, did not “alter cognitive task performance”). As such, 18c leaves us in the same position as the original Case 18. It sweeps vast portions of basic human behavioral research into the clinical trial category.

    My sample experiment would not be a clinical trial under any common usage. However, I understand that may not be the point here. This policy serves other goals. NIH funded my hypothetical study. My hypothetical Significance section on that grant said that we need to understand working memory in order to look for early signs of dementia (or something like that). I used the taxpayers’ money and my observers’ time to run this study. It is reasonable to argue that I should register and report this study. But note, I would have written the same Significance section for the “observational” Case 18a version and the experimental Case 18c version. This makes it hard to see why I should not register the observational study. Indeed, it is a bit hard to know why I shouldn’t register the hypothetical version that I might do with mice, which I would have argued are an animal model of dementia.

    Let’s add one more variation on this theme. Suppose, I give the same working memory test to every child in a big, hypothetical observational study. I collect socio-economic data and I correlate working memory score with SES. Presumably, that is a very clear, Case 18a, observational study. But why, for purposes of the reporting requirement, is the interference produced (hypothetically) by low SES different from the interference produced by counting backwards by 3s?

    The more I think about this issue, the more I come to believe that there is no definition that divides basic human behavioral research into studies that should be registered as clinical trials and those that need not be registered. What to do?

    1) You could define “health-related” or “intervention” in a manner that moved nearly all of basic research out of the category of studies needing registrations.

    2) You could assert that all NIH funded human behavioral work needs to be registered and you could create an appropriate pathway to do that (perhaps by working with something like Brian Nosek’s “Open Science Framework”).

    3) You could do both. Define our research as basic research. Define clinical trials as clinical trials. Then tell us: Clinical trials register on this path (and follow the policies on clinical trials) – Basic research register on this path (and follow the policies for basic human research).

    The basic behavioral science community is committed to openness and transparency and has been working on these issues for years. We are eager to work with you on a national system of registration and results reporting that meets the interests of all stakeholders, including scientists engaged in basic discovery research in humans.

    Sincerely,

    Jeremy Wolfe
    President: Federation of Associations of Behavioral and Brain Sciences

    • Case 18a is not a clinical trial because there is no manipulation. Your hypothetical study includes a manipulation intended to affect cognitive function (i.e., an “intervention” according to NIH), whereas 18a includes no manipulation of any kind. So your hypothetical study doesn’t seem to be comparable to 18a.

  12. The larger point raised by Dr. Lauer is an excellent one: we have the good fortune to have our research supported by the public in the form of NIH grants, and objecting to the need to disseminate what that support has funded is irresponsible at best. Any entity that funds research (whether it is the NIH, a corporation sponsoring research, or a private philanthropic entity) will require some dissemination of the results of the research they have supported. The NIH, since it is funded by US taxpayers, has every reason to require that our results are publicly available, and has mandated (since 2008) that papers with partial or full NIH support must be uploaded to PubMed Central after publication.

    However, this leaves an obvious gap – the file drawer, where experiments that didn’t work or didn’t get written up languish. NIH-supported studies which are never published are not available to the public, and this is a problem that should be remedied. Eliminating this gap is essential for open, transparent science which builds on the successes and failures that have come before. NIH has been quite willing to embrace the move towards greater scientific transparency and the rapid dissemination of research findings (e.g., the recent encouragement to cite preprints in funding applications; see NOT-OD-17-050). Greater scientific transparency is an unalloyed good – the science we do will be better for it, but doing it well requires clear guidance.

    Current guidance from NIH on the question of what is a clinical trial says, in essence, that almost all human subjects research will be classified as a clinical trial going forward (per the definition in 45 CFR part 46.102(b), which is not a creation of the NIH, but is rather part of the 2017 revisions to the Common Rule). In addition, all such studies will need to be registered with ClinicalTrials.gov and the results of these studies posted when available. However, not all clinical trials are the same. The NIH itself makes a critical distinction between “clinical trials” and “applicable clinical trials” (as defined in 42 CFR part 11.10(a), as well as 45 CFR part 46.102(b)).

    However, when a nonclinical researcher hears the phrase “clinical trial,” what they are thinking of is, in fact, an “applicable clinical trial,” as defined in 42 CFR part 11.10(a). New reporting requirements for applicable clinical trials came into effect at the beginning of 2017 (see NOT-OD-16-149 for NIH’s policy on this) and make a clear distinction between clinical trials and applicable clinical trials. However, NIH has also said that the new, broader definition of clinical trial (as defined in 45 CFR part 46.102(b) and disseminated by NIH in NOT-OD-15-015) “is not intended to expand the scope of the category of clinical trials.” If this is the goal, NIH needs to make this distinction absolutely clear to researchers whose work will now be considered a clinical trial but is in no way an applicable clinical trial under 42 CFR part 11.10(a).

    The goal of eliminating the file drawer for NIH-supported research will foster better, more transparent science that can build on successes and failures alike, but the distinction between “clinical trials” and “applicable clinical trials” needs to be taken into account. It may be that the best place for the registered methods and results Dr. Lauer wants to see for NIH-supported research is ClinicalTrials.gov, but “clinical trials” that are not “applicable clinical trials” will need to be distinguished from those that are, given the dramatically different reporting requirements that apply to each.

    Aside from the conflict between “clinical trials” and “applicable clinical trials” just discussed, the new Common Rule significantly expands what human subjects research will be exempt from IRB review going forward (see 45 CFR part 46.104(d)(3)). This significantly expanded exempt category leads to the strange circumstance of human subjects research that is exempt from IRB review but is classified as a clinical trial by the same Rule that exempts it.

    NIH’s guidance on applicable clinical trials (as an implementation of 42 CFR part 11) is exemplary. In particular, the checklist provided for applicants to determine whether the reporting requirements pertain to their submission is far clearer than what is currently available on the larger question of clinical trials. The guidance provided in NOT-OD-16-149 makes it abundantly clear that “applicable clinical trials,” which trigger clinical trial reporting requirements, are a subset of the category of clinical trials defined by NOT-OD-15-015 (and 45 CFR part 46.102(b)). This has not been conveyed in the existing documentation for the broad category of clinical trials. Reading NOT-OD-16-149, it appears that the intent of the broad definition in NOT-OD-15-015 is to mandate methodological and results registration of essentially all human subjects research supported by NIH as a condition of funding, without triggering the additional requirements under 42 CFR part 11 unless that regulation actually applies to the research.

    I believe that much of the consternation that has been expressed comes down simply to poor choices in nomenclature that were outside of NIH’s control; the distinction between “clinical trial” and “applicable clinical trial” is an absolutely essential one, and one that has not been adequately communicated. NIH’s guidance on applicable clinical trials (NOT-OD-16-149) makes it clear that what the public (and the researchers who are afraid their research will be considered clinical trials going forward) think of as clinical trials are legally defined as “applicable clinical trials,” and that essentially any experimental research with human subjects is a “clinical trial” under the new definition, with the reporting requirements that NIH has outlined (namely, study registration and results reporting). I believe that the distinction between the two needs to be made absolutely clear, and that further clarification is required before the new Rule and NIH policy go into effect.

    Perhaps the most concerning element of the new clinical trial definition is its impact on which funding opportunities researchers whose work is classified as a “clinical trial” may apply for. Quite reasonably, work that meets the requirements for an applicable clinical trial has long been restricted to study sections with the expertise to review it. The current guidance from NIH is that research classified as a clinical trial, rather than an applicable clinical trial, will be similarly restricted going forward. This is problematic: applicable clinical trials are a subset of clinical trials, but the two should not be treated as the same thing. Bearing in mind that the nomenclature is fixed and cannot be easily changed, clarification is required as to whether research that is a clinical trial but not an applicable clinical trial should be submitted to the same study sections. This has profound consequences for researchers’ ability to do their research, further their careers, and train students, and needs to be considered carefully.

    I support NIH’s goal here in increasing scientific transparency, but strongly suggest that efforts must be made to clarify the issues I have discussed here, as the consequences of leaving them as they are will be confusion, noncompliance and a general diminishment of human subjects research in the United States.

    • Thank you, Ben. I wasn’t aware that the Final Rule Definition of Clinical Trial published in the Federal Register earlier this year (see https://www.gpo.gov/fdsys/pkg/FR-2017-01-19/pdf/2017-01058.pdf, page 7163, item 4, “Response to Comments and Explanation of the Final Rule Definition of Clinical Trial”) is word-for-word identical to the revised NIH definition (see https://grants.nih.gov/grants/guide/notice-files/NOT-OD-15-015.html). As you pointed out, the intent to harmonize policy at the federal level is bigger than NIH. It’s the Federal Register (not NIH policy) that says (page 7149): “This rule is effective on January 19, 2018.” The date set by NIH is synchronized with the higher-level federal mandate.

      I also wasn’t aware of the crucial distinction between “clinical trial” (new sense) and “applicable clinical trial” (ACT). As you said, the terminology is confusing and unfortunate; but an ACT is much, MUCH more restrictive than a “clinical trial”, as we can see from this simple 4-item checklist at ClinicalTrials.gov: https://prsinfo.clinicaltrials.gov/ACT_Checklist.pdf. In particular, an ACT study must satisfy item 3: “Does the study evaluate at least one drug, biological, or device product regulated by the United States Food and Drug Administration (U.S. FDA)?” This excludes, almost by definition, all basic research studies.

      At minimum, this seems to imply that only ACT studies funded by the NIH must register at ClinicalTrials.gov. This should be the case because (as the ACT Checklist itself says): “ClinicalTrials.gov is a service of the National Institutes of Health.” Also, the Federal Register publication on the Final Rule (page 7163, item 4) says: “We generally expect that this definition will be applied harmoniously with the definition of clinical trial recently promulgated in the ClinicalTrials.gov final rule.” Thus, harmonization at the federal level seems to mandate harmonization with ClinicalTrials.gov which, in turn, is a service of NIH. Therefore, NIH should harmonize study registration requirements with the ClinicalTrials.gov ACT Checklist.

      If that’s right, then it would spare the unnecessary red tape of registering non-ACT studies with ClinicalTrials.gov. However, it still leaves open the concerns about funding mechanisms that exclude clinical trials, and IRB and research ethics review panels that require different standards for clinical trials.

      Here’s a proposal: By January 2018, many NIH notices and other documents should substitute “applicable clinical trial” for “clinical trial”. Also, many local institutional requirements currently triggered by “clinical trial” should be changed so they are triggered by “applicable clinical trial”. The former would require top-down action at the NIH, and the latter would require bottom-up local action.

      • It’s a bit more complicated than I thought it was when I wrote that comment. NIH requires all clinical trials (whether or not they’re applicable clinical trials under 42 CFR part 11) to register and report through ClinicalTrials.gov (see NOT-OD-16-149). They’re pretty clear about it there (in ways that would be nice to see here and now).

        My current take on things is that NIH went “hey, we’ve got this new reporting mandate for clinical trials (see NOT-OD-16-149), so let’s use the fact that the definition in 45 CFR part 46 is very broad to require identical reporting from much more of our supported research than just traditional clinical trials, using that definition to make it happen.”

        • Thanks again, Ben. Now I can see that it’s more complicated. Speaking for myself, the learning curve for policy issues related to clinical trials has been steep. For example, slide #9 of Mike’s recent presentation at the NIH Council of Councils meeting [1] links to a 2016 Federal Register document that discusses issues and distinctions between “the policy” and “the rule”, and between “clinical trials” (revised sense) and “applicable clinical trials”. The reasons behind these distinctions are still unclear to me, but a bit of light is dawning. On reflection, it does make sense that NIH inclusion criteria are broader than FDA criteria: FDA regulates safety/efficacy studies for only some kinds of therapies and diagnostics (pharmaceuticals, devices, and biologics). Yet–aside from the orthogonal need for transparent planning and reporting–I don’t understand the inherent logic for expanding the *interpretation* of the revised definition of “clinical trials” to possibly include non-clinical studies (a) for which safety isn’t an issue, and (b) which are not designed to evaluate the effectiveness of a clinical practice.

          Fortunately, a strong point of agreement is the need for transparent planning and reporting of all confirmatory (positive finding) and disconfirmatory (negative finding) research studies funded by NIH–not just “clinical trials” (whether new sense or old sense). ClinicalTrials.gov might be a suitable platform for some basic research studies; but one size does not fit all! As others noted, the inherent dynamic adaptive complexity of brain/mind/behavior at multiple time scales leads neuroscientists to devise virtually unlimited cases to engage human participants. Also, as Aaron Seitz noted above, learning is a healthy brain process that can’t be *observed* without carefully designed experimental controls that engender different effects within the range of normal variation.

          Beyond policy issues: This discussion could develop into a substantive dialog about experimental designs for basic human neuroscience, translational studies, and clinical trials. It would be great if we could somehow recruit Mike for a late-breaking town hall meeting at the upcoming Society for Neuroscience meeting [2]…

          [1] https://dpcpsi.nih.gov/sites/default/files/CoC-Sept-2017-230PM-Implementation-Clinical-Trials.pdf

  13. Personally, I think the definitions of “intervention” and “health-related outcome” are entirely circular. An intervention is defined as a manipulation designed to affect a health-related outcome, and a health-related outcome is defined as something that is affected by an intervention. And I have found no guidance at all on the specific meaning of “health-related.” I probably just lack imagination, but what would be some examples of non-health-related biological characteristics that could be measured in humans?

    Dave

  14. Dear Dr. Lauer,
    I’ve followed your blog with interest for some time and appreciate it. Like my colleagues who have commented above, I’m concerned about trying to fit psychology/cognitive neuroscience research into a clinical trials format. As Jeremy Wolfe expressed, the basic behavioral science community is committed to openness and transparency and would welcome the opportunity to work with you on promoting these goals in a way that would work for our research objectives. In particular, I would like to see NIH require a data sharing plan for all funded projects (the plan could specify why data sharing is not possible for that project, but the issue would need to be addressed). However, the clinical trials format does not seem well suited to our research. Case 18c is especially troubling.

    Given these developments, I am trying to familiarize myself with what information is required to register on ClinicalTrials.gov. A fundamental problem with behavioral research using this format is that a “control” or “placebo” is not as simple as giving a sugar pill. I am particularly troubled by the fact that participants can read the protocol online before completing the study. In the protocol, we would be revealing details about the expected outcomes for our two conditions and, given that we cannot make participants “blind” to condition in the same way that one can in a drug study, they would be able to figure out which condition they are in – and the act of reading the description on ClinicalTrials.gov could very well influence their behavior in our studies.

    For instance, consider some of our lab’s work on stereotype threat. We ask older adults to complete working memory tasks after reading either a news article about how memory declines with age or a control news article. With the current ClinicalTrials.gov reporting requirements, we would need to detail which condition is the “intervention” and which is the control, and describe the intervention. Even if we are as vague as “reading a news article,” if participants read this before completing the study, it would likely change how they attend to the news articles. This is seriously problematic, as it would change how our participants respond to our behavioral manipulations.

    Mara Mather
    Professor
    University of Southern California

  15. Dear Dr. Lauer,

    I continue to be very concerned about this extension of the “clinical trial” definition to basic research.
    If anything, the concept has become more confusing with the extra examples. Just consider cases 24 and 26 – it seems that here the specific content of the information determines whether information comprehension is a health-related outcome.
    Or case 33, where the conclusion is that “preferences are not a health-related biomedical and behavioral outcome” … again, this seems to depend on content or – even more troubling – on inferences about the intent of the researchers.
    Given that these are such fine-grained nuances, which may be perceived differently by potential reviewers, most of us will have to err on the side of safety and describe our studies as clinical trials, leading to added administrative burden as well as concerns about participants being able to review experimental manipulations ahead of time through ClinicalTrials.gov.
    I fully agree with the need to ensure that taxpayer-funded research is reported in a timely manner, but the overextension of the clinical trial definition is not the right means to this end.

    Sincerely,

    Corinna Loeckenhoff
    Associate Professor
    Cornell University

  16. Pingback: Science Policy Around the Web – September 12, 2017 | Science Policy For All

  17. Dear Dr. Lauer,

    The core faculty of the Center for Cognitive Neuroscience appreciates very much your willingness to listen to researchers’ feedback about the NIH definition of clinical trials. We would like to make the following 4 points regarding the current debate.

    1) We agree with you that missing or delayed reporting of NIH-funded clinical trials is a serious problem, and strongly support the goal of transparency.

    2) We reason that if the main problem is missing/delayed reporting of clinical trials, then the solution is not to convert basic human research studies to clinical studies but to enforce reporting of clinical trials.

    3) We think that if the goal is the registration of basic human research studies, then the solution is not to convert them to clinical trials but to create a parallel registration system for basic human research studies. This new registration system should take into account the particularities of these studies, and hence it would have to be different from the one for clinical studies. If a parallel registration system is a possibility, basic human researchers, including us, would be happy to provide suggestions.

    4) Finally, we believe that new Case 18c is ambiguous because it may or may not be a clinical trial depending on whether or not the enhancement/interference effects (i) involve a persistent change in the participant’s cognitive ability and (ii) generalize beyond the particular set of stimuli used in the experiment. In cognitive/cognitive neuroscience research, we enhance or interfere with cognitive processes in order to study them, but the effects do not alter participants’ cognitive abilities beyond the experiment; participants do not have better or worse attention, memory, etc. after the experiment than before it. They may remember some of the stimuli presented in the experiment, but their basic memory abilities are not affected. In contrast, cognitive training studies, which could be closer to the definition of clinical trials, seek to produce (i) a persistent enhancement in cognitive abilities that (ii) generalizes beyond the stimuli employed. Thus, it is critical to distinguish between basic cognitive studies and cognitive training studies.

    In short, we think the public, researchers, and NIH would all be best served by a six-month delay in implementing the new clinical trial definitions and standards. As your recent changes to the case studies in response to researchers’ feedback have highlighted, further conversation is critical. This delay would allow these important stakeholders to converge on the right operationalizations to ensure both transparency and discovery in health-related research.

    Sincerely yours,

    Alison Adcock
    Elika Bergelson
    Roberto Cabeza
    Felipe De Brigard
    Tobias Egner
    Jennifer Groh
    Brian Hare
    Katherine Heller
    Scott Huettel
    Kevin LaBar
    David Madden
    Elizabeth Marsh
    Tobias Overath
    John Pearson
    Dale Purves
    Gregory Samanez-Larkin

    • additional signatures:
      Walter Sinnott-Armstrong
      Marc Sommer
      Marty Woldorff
