What are Institutional Training Table Data Used For?

NIH institutional training grant applications request past and present faculty and trainee data, which peer reviewers and NIH program staff use in evaluating the application and making funding decisions. For active training grants, NIH requests trainee and faculty data to assess the progress of these ongoing training awards. These data provide insight into:

  • the environment of the proposed training
  • distribution of participating faculty by research interest
  • existing support for training, and availability of funds to support trainees’ research
  • recruitment of new trainees and progress of existing trainees, as assessed by outcomes such as:
    • degree completion
    • publications
    • career progress
    • subsequent involvement in research-related activities

Have other questions about training grant data tables? Visit our FAQs.

For more information on institutional training grants generally, visit the “T Kiosk” page on researchtraining.nih.gov.

2 Comments

  1. I am fine with these data being solicited. That said, I am concerned about how reviewers use these data to assess the quality of the training experience. I have met several Ph.D.s over the years who were at one time supported on training grants but who had essentially none of the skills that one would expect of a Ph.D. In a couple of cases, I spoke to the dissertation advisor about these situations, and it turned out that the program did not allow them to resign as advisor. This was because the incentive to have a “perfect” graduation rate outstripped the importance of making sure that everyone who graduates with a Ph.D. has the skills and accomplishments deserving of it. This can be a big problem for the students in question in the long run. While one could argue that it is great for them that they have a Ph.D., if they do not have the skills to go along with it, they become essentially unemployable in the profession. They would have been far better off washing out of the program. I hope that 100% completion rates are looked at with healthy skepticism.

  2. Having observed training grant reviews for over 20 years as an NIGMS program director, I have found that NIGMS reviewers treat this matter very carefully. A 100% completion record is not good if it is not coupled with other strong outcome metrics (publications, subsequent placements, grant support). A reasonable rate of attrition (say, less than about 10%) is accepted, and each case is examined. Why did students leave the program? Early or late, for academic reasons or otherwise? A high level of attrition is not acceptable because in many cases it means an inadequate process was used for recruiting and selecting trainees and/or that progress monitoring and mentoring of the trainees and the trainees’ mentors fell short.
