Using Artificial Intelligence and Other Digital Technologies to Enhance Grant Management Operations

Here, we will explore some innovative and analytical ways that NIH is leveraging artificial intelligence (AI) and other digital technologies to strengthen our internal grant and application management operations. Taken together with other efforts, these approaches move us toward continually funding the most meritorious research possible. 

Where does your application go…  

NIH reviewed over 75,000 applications in fiscal year 2023. It is a Herculean task to ensure all of these applications are assigned to the appropriate program and review staff at NIH so that they receive a fair and timely review. One way NIH referral staff identify potential study sections for this many applications is the AI-based Automated Referral Tool. Launched in 2022 and also available to investigators, the tool utilizes referral data from the previous three review cycles and adapts to changes in study sections. It is a useful aid for NIH receipt and referral staff when deciding how to refer applications to the appropriate review branches. 
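To make the general idea concrete, here is a minimal sketch of how historical referral outcomes could be used to suggest study sections for a new application. The Automated Referral Tool's internals are not described here, and everything below, from the sample abstracts to the study section names, is invented for illustration.

```python
# Hypothetical sketch: suggesting study sections from past referral data.
# All data and names are illustrative, not NIH's actual implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: abstracts from prior review cycles and the study
# sections they were ultimately referred to.
past_abstracts = [
    "Mechanisms of tau aggregation in neurodegenerative disease",
    "Community-based intervention to reduce adolescent vaping",
    "CRISPR screening of host factors in viral replication",
]
past_sections = ["Neuroscience", "Behavioral Health", "Virology"]

# Train a simple text classifier on historical referral outcomes.
referral_model = make_pipeline(TfidfVectorizer(stop_words="english"),
                               LogisticRegression(max_iter=1000))
referral_model.fit(past_abstracts, past_sections)

# Suggest candidate study sections for a new application; referral staff
# still make the final assignment decision.
new_abstract = "Longitudinal imaging of amyloid plaques in aging brains"
probs = referral_model.predict_proba([new_abstract])[0]
ranked = sorted(zip(referral_model.classes_, probs), key=lambda x: -x[1])
print(ranked[:2])  # top suggested study sections with scores
```

In a sketch like this, the model only proposes candidates ranked by score; staff retain the final say on where an application is referred.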

After application receipt and referral, liaisons at NIH Institutes and Centers receive large numbers of grant applications from review staff and determine which program officer would be the best fit. Historically, this effort was time consuming and repetitive. 

Colleagues within eRA, partnering with the National Institute of General Medical Sciences, built an enterprise-level, AI-based tool that compares application abstracts to the expertise of program officials and suggests a match. This capability within NIH’s enterprise grants management system uses natural language processing to cluster and assign grant applications. It has proven highly successful at getting the right applications to the right program staff in a timely fashion, dramatically reducing workloads and resource needs. 
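As a rough illustration of the underlying concept, and not a description of the eRA tool itself, matching an abstract to program officer expertise can be framed as a text similarity problem. The officer names, expertise statements, and abstract below are made up.

```python
# Hypothetical sketch: matching an application abstract to program officer
# expertise using TF-IDF vectors and cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

officer_expertise = {
    "Officer A": "cardiovascular physiology, heart failure, hypertension",
    "Officer B": "immunology, vaccine development, host-pathogen interactions",
    "Officer C": "machine learning for medical imaging, radiology informatics",
}

abstract = ("We propose a deep learning model to detect early heart "
            "failure from echocardiography images.")

# Vectorize the abstract together with each officer's expertise statement.
texts = [abstract] + list(officer_expertise.values())
tfidf = TfidfVectorizer(stop_words="english").fit_transform(texts)

# Cosine similarity between the abstract (row 0) and each expertise row.
scores = cosine_similarity(tfidf[0], tfidf[1:]).ravel()
best = max(zip(officer_expertise, scores), key=lambda x: x[1])
print(f"Suggested program officer: {best[0]} (similarity={best[1]:.2f})")
```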

Looking at those applications in more detail…  

Applicants are not allowed to submit duplicate or highly overlapping applications for review at the same time. Digital tools help us compare submitted applications for potential scientific or budgetary overlap. For instance, NIH uses natural language processing (NLP), a type of AI, to find similar language across grant applications. Related tools flag potentially overlapping applications in the review process for more in-depth analysis. When overlapping applications are identified this way, NIH scientific review staff can refocus their attention on tasks that require human decision making, thought, and experience. As a point of reference, 243 applications were withdrawn for overlap in the first quarter of 2024 with the help of these tools. Looking more broadly, such tools and internal controls help prevent us from funding the same idea twice and ensure NIH’s limited resources can support different innovative research ideas. 
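The sketch below illustrates the basic NLP idea of flagging pairs of applications whose text is unusually similar so that humans can take a closer look. It is a toy example with invented application text and an arbitrary threshold, not a description of NIH's internal tools.

```python
# Hypothetical sketch: flagging potentially overlapping applications for
# human review based on textual similarity.
from itertools import combinations
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented application text for illustration.
applications = {
    "APP-001": "Aim 1 tests whether gut microbiome composition predicts response to immunotherapy.",
    "APP-002": "We will examine whether the gut microbiome predicts immunotherapy response in melanoma.",
    "APP-003": "This project develops wearable sensors for continuous glucose monitoring.",
}

OVERLAP_THRESHOLD = 0.4  # illustrative cutoff; a real threshold would be tuned

ids = list(applications)
tfidf = TfidfVectorizer(stop_words="english").fit_transform(applications.values())
sims = cosine_similarity(tfidf)

# Flag pairs above the threshold for in-depth analysis by review staff.
for i, j in combinations(range(len(ids)), 2):
    if sims[i, j] >= OVERLAP_THRESHOLD:
        print(f"Possible overlap: {ids[i]} vs {ids[j]} (score={sims[i, j]:.2f})")
```

The key design point is that the tool only surfaces candidate pairs; the judgment about whether overlap truly exists stays with scientific review staff.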

We are also exploring how AI and machine learning can guide programmatic assessment of an application’s Data Management and Sharing Plan. These plans describe how investigators will share the scientific data underlying findings from NIH-supported research, which helps accelerate biomedical research discovery. 

Better understanding the research we fund…  

AI algorithms are also being tested to improve public reporting on NIH-funded projects. For instance, we want to know which projects other algorithms identify so that we can validate results from the Research, Condition, and Disease Categorization (RCDC) system. If successful, this process will help make better use of NIH staff expertise and time throughout the category validation and maintenance processes.  
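As a simplified illustration of what such a cross-check might look like, disagreements between two categorization methods can be surfaced for staff review. The project IDs and categories below are invented, and the real RCDC process is far more involved.

```python
# Hypothetical sketch: comparing category assignments from two methods and
# routing disagreements to staff for expert validation.

# Categories assigned by the existing system (illustrative data only).
rcdc_categories = {
    "PRJ-01": {"Diabetes", "Obesity"},
    "PRJ-02": {"Alzheimer's Disease"},
    "PRJ-03": {"Cancer", "Genetics"},
}

# Categories proposed by a second, experimental algorithm.
alt_categories = {
    "PRJ-01": {"Diabetes", "Obesity"},
    "PRJ-02": {"Alzheimer's Disease", "Neurodegeneration"},
    "PRJ-03": {"Cancer"},
}

# Agreements need less manual effort; disagreements go to NIH staff, whose
# expertise settles the final category assignment.
for project, expected in rcdc_categories.items():
    proposed = alt_categories.get(project, set())
    if proposed == expected:
        print(f"{project}: algorithms agree ({', '.join(sorted(expected))})")
    else:
        only_rcdc = sorted(expected - proposed)
        only_alt = sorted(proposed - expected)
        print(f"{project}: review needed (RCDC only: {only_rcdc}, alternative only: {only_alt})")
```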

Keep in mind that we are developing and testing the tools described here for analytical purposes, not for their ability to create new information based on how they were trained. Tool developers may not have the same (or any) confidentiality requirements. This is important to remember because confidentiality rules prohibit outside scientific peer reviewers from using generative AI technologies to formulate peer review critiques from sensitive grant application materials. Reviewers are also prohibited from uploading or sharing review information where AI technologies, even locally hosted ones, can access the data (see this FAQ). Moreover, as we have explained before, we invite researchers to serve in peer review because of their individual scientific expertise and professional, original opinions. If we wanted a generative AI tool to do it, we would have asked it ourselves. 

As another reminder, our NIH colleagues recently released a centralized resource of policies, best practices, and regulations to “responsibly guide and govern advancing science and emerging technologies, including development and use of AI technologies in research.” As noted in a prior post, they share considerations for participant protections, data management and sharing, peer review, intellectual property, and biosecurity, among other policy areas.  

We are carefully exploring how AI and other related technologies may make NIH’s grant operations more robust, innovative, and flexible. We will continue testing and refining these analytical tools’ accuracy to help improve grants management operations, save time, and enhance stewardship of financial resources.  
