A few months ago, a researcher told me about his experience with the relatively new NIH policy under which investigators are allowed to submit what we have come to call “virtual A2s.”
Under NIH’s previous single-resubmission policy, if an investigator’s de novo R01 grant application (called an “A0”) was not funded, they had one chance to submit a revision (called an “A1”). If the A1 application was also unsuccessful, any subsequent application was required to differ significantly from the previous submissions, and NIH took measures to turn away submissions that were materially similar to the unfunded A1. Under NIH’s current policy, investigators may resubmit a materially similar application as a new submission after the A1. We will call these applications “virtual A2s.” The researcher told me that his virtual A2 did not fare well; although his A0 and A1 had received good scores (though not good enough for funding), the virtual A2 was not discussed. He wondered: just how likely is a virtual A2 to succeed?
Because we treat virtual A2s as de novo submissions, we do not link these applications to their previous versions; therefore, this was not a simple question to answer. However, we have taken advantage of text-mining software to identify virtual A2s and to compare their review outcomes with those of other submissions from the same investigators.
We began with 4,952 unique R01 A1 applications that were considered for funding in fiscal year (FY) 2014 and were not funded. These 4,952 applications had been submitted by 5,660 unique applicants (about 18% of the applications named multiple principal investigators (PIs)). We then followed submissions through the middle of FY 2016 and found that among these 4,952 cases, there were 4,030 in which at least one subsequent R01 or R21 application was submitted by the same PIs. We used text-mining software to measure the word and concept frequency of the titles, abstracts, and specific aims of the subsequent applications, and found that among the 4,952 unfunded applications, there were 1,090 cases in which at least one later application was more than 80% similar: these cases are counted as “any virtual A2s.”
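The post does not describe the internals of the text-mining software, but the general idea of scoring two applications by word frequency can be sketched with a simple cosine similarity over term counts. This is an illustrative stand-in, not NIH's actual method; the 0.80 threshold below mirrors the “more than 80% similar” cutoff mentioned above.

```python
import math
import re
from collections import Counter


def similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity between the word-frequency vectors of two texts."""
    tokenize = lambda t: re.findall(r"[a-z]+", t.lower())
    a, b = Counter(tokenize(text_a)), Counter(tokenize(text_b))
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


SIMILARITY_THRESHOLD = 0.80  # assumed stand-in for the >80% cutoff in the analysis

# Hypothetical titles: a later application whose title/abstract/specific aims
# score above the threshold against the unfunded A1 would be flagged.
a1_text = "Role of mitochondrial dynamics in cardiac ischemia reperfusion injury"
later_text = "Mitochondrial dynamics and cardiac ischemia reperfusion injury mechanisms"
is_virtual_a2 = similarity(a1_text, later_text) > SIMILARITY_THRESHOLD
```

In practice, a production comparison would also weight terms (e.g., TF-IDF) and map words to concepts, but the thresholding logic is the same.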
Table 1 shows the characteristics of the baseline unfunded A1 applications according to whether they were followed by at least one virtual A2 application. As might be expected, PIs were more likely to submit a virtual A2 if the unfunded A1 had been discussed during peer review and, among discussed applications, if its scores were better. New investigators were also slightly less likely to submit virtual A2s.
The figure below shows the events following unfunded A1 applications. Of the 1,090 cases in which at least one virtual A2 application was submitted, 219 – or 20% – had at least one application funded.
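The funnel described above can be recomputed directly from the counts reported in this post (percentages rounded as in the text):

```python
# Events following unfunded FY 2014 A1 R01 applications, using counts from the post.
unfunded_a1 = 4952        # unique unfunded A1 applications
any_subsequent = 4030     # cases with at least one later R01/R21 from the same PIs
virtual_a2 = 1090         # cases with a later application >80% similar
virtual_a2_funded = 219   # virtual A2 cases with at least one funded application

print(f"subsequent submission: {any_subsequent / unfunded_a1:.0%}")     # 81%
print(f"virtual A2 rate:       {virtual_a2 / unfunded_a1:.0%}")         # 22%
print(f"funded virtual A2s:    {virtual_a2_funded / virtual_a2:.0%}")   # 20%
```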
Table 2 shows the characteristics of the baseline unfunded A1 applications according to whether at least one corresponding virtual A2 application was funded. As might be expected, over 80% of the unfunded A1s that resulted in a successful virtual A2 were discussed in peer review groups. Otherwise there were no major differences.
In summary, we found that:
- Investigators are taking advantage of the new policy. For about 22% of unsuccessful A1 applications, we see a materially similar subsequent submission, what we’re calling a virtual A2.
- Investigators who fail to obtain funding on an A1 remain active. We saw that 80% of the PIs with unsuccessful A1 applications submitted at least one application within the following fiscal year.
- A small fraction of virtual A2 cases (about 20%) resulted in funding.
Returning to the question that the researcher asked me a few months ago, his story is not atypical: the percentage of virtual A2 applications that are funded is similar to that of de novo applications. There does not appear to be any special advantage from previous submission and review. However, there does not appear to be any disadvantage either; the policy seems to be allowing a small, but real, number of second revisions to get their chance.
I am most grateful to Andrei Manoli, Judy Riggie, and Rick Ikeda for their invaluable work on these analyses.