Nexus June 2013
Dr. Sally Rockey

Rock Talk

Chimpanzees in Biomedical Research

Today NIH made an important announcement about the use of chimpanzees in biomedical and behavioral research. After accepting the findings of an extensive Institute of Medicine (IOM) study commissioned by NIH, and reviewing the implementation recommendations from the Council of Councils and public feedback, NIH leadership has decided to significantly reduce the use of chimpanzees in the biomedical research it supports, and expects to designate the majority of NIH-owned chimpanzees for retirement.

The IOM study released in 2011 proposed three principles to analyze research using chimpanzees. First, the knowledge gained by the research must be necessary to advance the public’s health. Second, there must be no other animal research model that could provide this knowledge, and the research cannot be ethically performed on human subjects. Finally, the animals used in the proposed research must be maintained in natural habitats or appropriate physical and social environments.

NIH-supported research projects involving chimpanzees that do not meet these principles will wind down in a planned way that avoids harm to the animals and unacceptable losses to the science supported by these projects.

On this website you can find more on the recommendations accepted by NIH, and a summary of the public comments received as part of our request for comments earlier this year.

We plan to prepare subsequent procedural guidance and technical assistance, as appropriate, to implement some of these decisions. Researchers should continue to follow the existing guidance (see NOT-OD-12-025) regarding the submission of applications, proposals, or protocols for research involving chimpanzees until NIH finalizes this procedural guidance. We will be working closely with all of our stakeholders to ensure as smooth a transition as possible for research projects that will be affected by NIH’s decision.

Posted in Rock Talk | Tagged , | 7 Comments

More on Percentiling

Understanding NIH’s system of percentiling can be a challenge. My posts on how percentiles relate to paylines and success rates continue to get a lot of hits. A recent presentation by NIH’s Center for Scientific Review provided an example that I thought might be helpful to people trying to understand more about how a percentile is calculated for some grant applications.

A percentile, defined in its broadest sense, is a relative ranking of an application within a set of applications. In many cases a percentile rank is based on impact scores, and calculated against the set of all applications reviewed in the current and the preceding two review rounds. But this isn’t always the case. For example, applications reviewed by an ad hoc study section might use a different percentile base. (When percentiling is used, the summary statement you receive will identify the base that was used.)

NIH uses percentile calculations to improve our ability to compare applications across different application cycles and across different study sections. Percentiling allows NIH institutes to compare applications even when different study sections have different scoring behaviors. It’s natural for reviewers to judge an application in comparison to those immediately around it. Additionally, some study sections may have a relatively small sample of applications; combining review rounds in these cases can reduce noise and variability.

Let’s look at an example so you can see how this works. Say we have 15 applications, with impact scores ranging from a best score of 10 down to applications that were Not Discussed (ND). These applications would be ranked in order, like so:

Impact Score:  10  11  19  20  20  20  28  29  30  30  30  31  ND  ND  ND
Rank:           1   2   3   4   5   6   7   8   9  10  11  12  13  14  15

To calculate the percentile, we use the formula:

Percentile = 100 × (Rank − 0.5) / N

where N is the total number of applications in the set. So for this example, N = 15, resulting in the following:

Impact Score:  10  11  19  20  20  20  28  29  30  30  30  31  ND  ND  ND
Rank:           1   2   3   4   5   6   7   8   9  10  11  12  13  14  15
Percentile:     3  10  17  23  30  37  43  50  57  63  70  77  83  90  97

(Note the tied impact scores: 20 at ranks 4–6, and 30 at ranks 9–11.)

However, as you can see, this formula doesn’t work as is for applications with tied scores, so the tied applications are instead all assigned the average of their percentiles:

Impact Score:    10  11  19  20  20  20  28  29  30  30  30  31  ND  ND  ND
Rank:             1   2   3   4   5   6   7   8   9  10  11  12  13  14  15
Unreconciled:     3  10  17  23  30  37  43  50  57  63  70  77  83  90  97
Tie-reconciled:   3  10  17  30  30  30  43  50  63  63  63  77  83  90  97

Not all applications are percentiled. Whether an application is percentiled depends on the grant mechanism, the institute, and the funding opportunity. For example, applications submitted in response to a request for applications (RFA) are never percentiled.

I hope this example – along with my earlier posts – helps shed more light on how peer review works at NIH.

Posted in Rock Talk | Tagged , | 21 Comments

Update on the Physician Scientist Workforce Workgroup and More at the ACD Meeting

The Advisory Committee to the Director (ACD) meeting will be held today and tomorrow, with many exciting topics to be presented and discussed: neuroscience research and the BRAIN initiative, improving the reporting and design of preclinical research, and the Big Data to Knowledge programs. I will also provide an update on our implementation of the Biomedical Research Workforce Initiative.

Among the ACD working group updates is a presentation on the Physician-Scientist Workforce Working Group, which, as you may remember, was launched at the last ACD meeting as a result of recommendations in the Biomedical Research Workforce report. The working group is focused on identifying the optimal research training for individuals in clinical disciplines. It faces many of the same challenges as the biomedical workforce working group, chiefly the need to understand the composition and size of this group of researchers. The new working group is charged with several activities, including developing a better understanding of this population of researchers, analyzing the influences on clinicians’ decisions to begin a research career, and identifying the incentives and barriers to clinician participation in scientific research. Ultimately, this work will inform how NIH can better support a sustainable and diverse clinical research infrastructure.

I’m looking forward to hearing how they are going to embark on these activities – as well as all of the discussion over the next two days. Tune in to the videocast and join us on Thursday and Friday if you’re interested!

Posted in Rock Talk | Tagged , , | 4 Comments

Another Look at Enhancing Peer Review

A critical component of assuring the efficacy of NIH’s peer review system is the continuous assessment of peer review activities, to be sure that our practices and policies uphold the core values of peer review. In fact, this continuous assessment was a key component of the 2008 NIH Enhancing Peer Review Initiative.

These continuous assessment activities include ongoing analysis and monitoring of peer review outcomes as well as online surveys to give applicants, peer reviewers, and NIH staff the opportunity to weigh in on our peer review process.

We’ve posted a report of the most recent (Spring 2012 Phase II) surveys on our peer review web pages. (As described in the report, Phase I surveys took place in Spring 2010 shortly after peer review changes were made.) Overall, applicants and reviewers are more satisfied with the new peer review system than the system in place before the Enhancing Peer Review initiative. Most respondents rated the peer review system as fair and consider themselves satisfied with the peer review process.

Figure: Applicants’ responses in Phase 1 and Phase 2 to the question “How fair is the peer review process at NIH?”
Phase 1: 44% very or somewhat fair; 25% neither fair nor unfair; 31% somewhat or very unfair.
Phase 2: 49% very or somewhat fair; 18% neither fair nor unfair; 33% somewhat or very unfair.

Figure: Reviewers’ responses in Phase 1 and Phase 2 to the question “How fair is the peer review process at NIH?”
Phase 1: 73% very or somewhat fair; 11% neither fair nor unfair; 17% somewhat or very unfair.
Phase 2: 76% very or somewhat fair; 8% neither fair nor unfair; 16% somewhat or very unfair.

The surveys also asked program officers, scientific review officers, applicants, and peer reviewers about specific aspects of the Enhancing Peer Review changes (such as single resubmission and nine-point scoring), and we are continuing to examine the impact of these policies and to provide guidance in response to concerns. For instance, in open-ended comments, reviewers expressed concern about uneven use of scores across the nine-point range and a need for more scoring guidance. NIH recently issued revised scoring guidance to encourage use of the entire scoring range.

As I’ve said before on my blog, we are committed to continuous review of our peer review system because we know as science evolves, so should our peer review processes. Thanks for participating in these surveys.

Posted in Rock Talk | Tagged , , | 17 Comments

New Resources

New NIH Website on SBIR/STTR Reauthorization

NIH has set up a new website to keep the small business research community abreast of its implementation plan for the many changes resulting from the SBIR/STTR Reauthorization Act.  Applicants and grantees interested in the SBIR program can stay tuned to this website, as well as the SBIR listserv and Twitter feed, for news as the implementation plan rolls out.

Posted in New Resources | Leave a comment

New Tutorial on Submitting a Reference Letter in eRA Commons

A new five-minute video outlines what applicants and their referees need to do to successfully submit a reference letter. As many career development (“K”) grant applications require reference letters, we encourage you to share this resource with colleagues who are new to the NIH application process.

Posted in New Resources | Leave a comment

You Ask, We Answer

How Can I Tell If I Have the Right Application Form Version?

NIH gives each set of updated forms a version name for quick identification and easier communications (in the case of the upcoming transition, the version name is “FORMS-C”).

When we post a funding opportunity and its application package on Grants.gov, there is a “Competition ID” field that Federal agencies can use to provide further details about the funding opportunity. NIH uses the Competition ID to convey the form version name. The version name displays in the Competition ID field when you are downloading the application from Grants.gov, as well as in the header of the downloaded application. (See screen shots and more details in Do I have the Right Electronic Forms for My Application?)

If the Competition ID is ADOBE-FORMS-B1 or B2, you have the older form set. If it is FORMS-C, you are using the newest forms. NIH is posting the new forms to funding opportunity announcements (FOAs) in July/August. So if you don’t see the form version you need, check the FOA again in a few weeks.

Posted in You Ask, We Answer | Leave a comment

Will NIH be Reissuing Each FOA or Simply Posting a New Application Package to Incorporate Form Updates?

For the activity codes transitioning to updated forms for deadlines on/after September 25, 2013, we will simply expire old application packages and post new ones to the existing Funding Opportunity Announcements (FOAs) – so the FOA will not change, but applicants must use the application package with a Competition ID of “FORMS-C”.

Planning is still underway for the Fellowship, Training, Career Development and Small Business programs that will transition later. Since those form changes will be made in conjunction with other programmatic changes, it is likely that we will need to post new FOAs as well as application packages for those transitions.

Posted in You Ask, We Answer | Leave a comment

When Do I Need to Use the Updated Application Forms (FORMS-C)?

Applications to funding opportunity announcements (FOAs) with due dates on or after September 25, 2013 must use updated forms (FORMS-C), with these exceptions:

  • Career Development, Fellowship, and Training  FOAs will transition to updated forms for deadlines on or after January 25, 2014
  • Small Business FOAs will not transition to updated forms until Small Business Reauthorization form changes are also available, so the timing is to be determined.

Learn more about our forms transition from our eSubmission Items of Interest.

Posted in You Ask, We Answer | Leave a comment

Calendar

July 5, 2013: Yes, We’re Open!

Thursday, July 4 is a Federal holiday, so NIH (including its help desks) will be closed. Friday, July 5, 2013 is not a Federal holiday, so NIH and its help desks will be open, and July 5 application deadlines remain in place. If you have an application due on July 5 and are planning a long weekend, we recommend submitting early in the week to give yourself time to correct any errors with your submission, view your application in the eRA Commons before the deadline, and enjoy your long weekend.

Posted in Calendar | Leave a comment