Using Artificial Intelligence and Other Digital Technologies to Enhance Grant Management Operations


Here, we will explore some innovative and analytical ways that NIH is leveraging artificial intelligence (AI) and other digital technologies to strengthen our internal grant and application management operations. When taken together with other efforts, these approaches move us towards continually funding the most meritorious research possible. 

Where does your application go…  

NIH reviewed over 75,000 applications in fiscal year 2023. It is a Herculean task to ensure all of these applications are assigned to the appropriate program and review staff at NIH so they receive a fair and timely review. One way NIH referral staff identify potential study sections for this volume of applications is the AI-based Automated Referral Tool. This tool, launched in 2022 by the Center for Scientific Review (CSR) and also available to investigators, utilizes referral data from the previous three review cycles and adapts to changes in study sections. It assists NIH receipt and referral staff in deciding which review branches applications should be referred to. 
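
To make the general idea concrete, below is a minimal sketch of how a similarity-based referral suggestion could work: represent prior referrals as text features and look up the nearest match for a new application. The study section names, abstracts, and nearest-neighbor model are illustrative assumptions only; they do not reflect the actual data or algorithm behind CSR’s Automated Referral Tool.

```python
# Illustrative sketch only; not the actual Automated Referral Tool.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical referrals from previous review cycles: (abstract text, study section)
prior_referrals = [
    ("Synaptic plasticity and memory consolidation in hippocampal circuits", "LAM"),
    ("Tumor microenvironment signaling in pancreatic cancer progression", "TME"),
    ("Machine learning models for predicting cardiovascular risk from EHR data", "BDMA"),
]
abstracts, sections = zip(*prior_referrals)

# Represent prior abstracts as TF-IDF vectors and fit a simple nearest-neighbor model.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(abstracts)
model = KNeighborsClassifier(n_neighbors=1).fit(X, sections)

# Suggest a study section for a newly received application.
new_abstract = ["Deep learning on electronic health records for heart failure risk"]
print(model.predict(vectorizer.transform(new_abstract)))  # e.g., ['BDMA']
```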

After application receipt and referral, liaisons at NIH Institutes or Centers receive a large number of grant applications from review staff and determine which program officer would be the correct fit. This effort was time-consuming and repetitive. 

Colleagues within eRA, partnering with the National Institute of General Medical Sciences, built an enterprise-level AI-based tool that compares application abstracts to the expertise of program officials, and then suggests a match. This capability within NIH’s enterprise grants management system uses natural language processing to cluster and assign grant applications. It has proven highly successful at getting the right applications to the right program staff in a timely fashion, dramatically reducing workloads and resource needs. 
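
As a hedged sketch of how abstract-to-expertise matching might look in principle, the example below vectorizes hypothetical program officials’ expertise statements alongside an application abstract and suggests the official with the highest cosine similarity. The names, expertise text, and TF-IDF approach are assumptions for illustration, not the eRA tool’s actual implementation.

```python
# Illustrative sketch only; names and expertise statements are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

program_officer_expertise = {
    "PO-1": "enzyme kinetics, protein folding, structural biology",
    "PO-2": "population genetics, genome-wide association studies",
    "PO-3": "computational modeling of cellular signaling networks",
}
application_abstract = "We propose a computational model of MAPK signaling dynamics in single cells."

# Vectorize the expertise profiles together with the abstract so they share a vocabulary.
names = list(program_officer_expertise)
corpus = [program_officer_expertise[n] for n in names] + [application_abstract]
tfidf = TfidfVectorizer(stop_words="english").fit_transform(corpus)

# Compare the abstract (last row) against each expertise profile and suggest the best match.
scores = cosine_similarity(tfidf[-1], tfidf[:-1]).ravel()
print(names[scores.argmax()], scores)  # e.g., PO-3 with the highest similarity score
```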

Looking at those applications in more detail…  

Applicants are not allowed to submit duplicate or highly overlapping applications for review at the same time. Digital tools can help us compare submitted applications for potential scientific or budgetary overlap. For instance, NIH uses natural language processing (NLP), a type of AI, to find similar language across grant applications. Related tools, including another developed by CSR, also flag potentially overlapping applications during the review process for more in-depth analysis. When overlapping applications are identified, NIH scientific review staff can refocus their attention on tasks that require human decision making, thought, and experience. As a point of reference, 243 applications were withdrawn for overlap in the first quarter of 2024 with the help of this tool. Looking more broadly, such tools and internal controls help prevent us from funding the same idea twice and ensure NIH’s limited resources can support different innovative research ideas. 
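
For illustration, here is a minimal sketch of the general flagging idea: compute pairwise text similarity across submissions and surface pairs above a threshold for staff to examine. The application texts and the 0.6 cutoff are hypothetical; actual overlap determinations rely on human judgment and on tools more sophisticated than this toy example.

```python
# Illustrative overlap-flagging sketch; threshold and texts are hypothetical.
from itertools import combinations
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

applications = {
    "APP-001": "Aim 1 tests whether inhibiting kinase X reduces tumor growth in mice.",
    "APP-002": "Aim 1 tests whether inhibiting kinase X reduces tumor growth in murine models.",
    "APP-003": "This project maps gut microbiome changes during early childhood development.",
}

ids = list(applications)
matrix = TfidfVectorizer(stop_words="english").fit_transform(applications.values())
similarity = cosine_similarity(matrix)

OVERLAP_THRESHOLD = 0.6  # illustrative cutoff; flagged pairs go to staff for in-depth review
for i, j in combinations(range(len(ids)), 2):
    if similarity[i, j] >= OVERLAP_THRESHOLD:
        print(f"Flag for review: {ids[i]} vs {ids[j]} (similarity {similarity[i, j]:.2f})")
```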

We are also exploring how AI and machine learning can guide programmatic assessment of an application’s Data Management and Sharing Plan. These plans describe how the scientific data underlying findings from NIH-supported research will be managed and shared to accelerate biomedical research discovery. 

Better understanding the research we fund…  

AI algorithms are also being tested to improve public reporting on NIH-funded projects. We want to know, for instance, whether other algorithms identify the same projects, which would help validate results from the Research, Condition, and Disease Categorization (RCDC) system. If successful, the process will help make better use of NIH staff expertise and time throughout the category validation and maintenance processes.  
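
As a rough illustration of the kind of cross-check involved, the sketch below compares the set of projects RCDC places in a category with the set another algorithm identifies, so staff attention can go to the disagreements. The project IDs, category, and agreement measure are hypothetical, not RCDC’s actual validation workflow.

```python
# Hypothetical cross-check between RCDC assignments and an alternative algorithm.
rcdc_assignments = {"P001", "P002", "P003", "P004"}   # projects RCDC tags with a category
alternative_assignments = {"P002", "P003", "P005"}    # projects a test algorithm tags

agree = rcdc_assignments & alternative_assignments
disagree = (rcdc_assignments | alternative_assignments) - agree

# Jaccard overlap as a rough agreement score; disagreements go to staff for validation.
jaccard = len(agree) / len(rcdc_assignments | alternative_assignments)
print(f"Agreement: {jaccard:.2f}")
print("Review these projects:", sorted(disagree))
```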

Keep in mind that the tools described here are being developed and tested for analytical purposes, not for their ability to create new information based on how they were trained. Tool developers may not have the same (or any) confidentiality requirements. This is important to remember because confidentiality rules prohibit outside scientific peer reviewers from using generative AI technologies to formulate peer review critiques from sensitive grant application materials. Reviewers are prohibited from sharing uploaded review information where locally hosted AI technologies can access the data (see this FAQ). Moreover, as we have explained before, we invite researchers to serve in peer review because of their individual scientific expertise and professional, original opinions. If we wanted a generative AI tool to do it, then we would have asked it ourselves. 

As another reminder, our NIH colleagues recently released a centralized resource of policies, best practices, and regulations to “responsibly guide and govern advancing science and emerging technologies, including development and use of AI technologies in research.” As noted in a prior post, they share information to consider regarding participant protections, data management and sharing, peer review, intellectual property, and biosecurity, among other policy areas.  

We are carefully exploring how AI and other related technologies may make NIH’s grant operations more robust, innovative, and flexible. We will continue testing and refining these analytical tools’ accuracy to help improve grants management operations, save time, and enhance stewardship of financial resources.  

Editor’s note: We updated the post on November 25, 2024 to acknowledge CSR’s role in developing the referenced tools.

6 Comments

  1. Gen AI could be very useful for scientific review, and not using it is leaving valuable capabilities on the table. A huge amount of work at CSR (and effort for scientists) could be saved if ideas that were ultimately going to be funded were identified on the first pass at a higher rate. Grant applications are long and dense, and gen AI allows natural language search and summary with much better results than traditional keywords. Outside of the two or three assigned reviewers, the rest of the panel is unlikely to read more than the first page, and gen AI may help them rapidly answer questions about the proposal as they weigh the analysis of the primary and secondary reviewers. This is especially true for multidisciplinary applications where a single reviewer lacks expertise in some aspects of a proposal. Prompts such as “Did the authors address these critiques?” and “How would the authors address these points?” could help reviewers better distinguish between easily addressed critiques and fundamental problems unlikely to improve.

  2. It would be great if AI tools could also be used for the efforts NIH is taking to reduce “reputational bias” during merit review. While anonymizing applications entirely wouldn’t be logical, AI tools can anonymize scientific sections (including citations and locations). Reviewers can still access complete packages but such tools help them minimize any implicit bias by initially focusing on the key ideas before knowing the team/environment.

  3. Thank you very much for this deployment of AI at NIH. The use of AI is the ultimate tool for increased in-depth interrogation of grant applications, and it also points toward trends in the thought process of the scientific community. AI will become more accurate over time as the database expands, and the scientific community will need sustained socialization on the role and utility of AI in this space to build and sustain consensus.

  4. To my understanding, RCDC is not based on AI. And the help you can get from RCDC is pretty archaic when you see what level of expertise can be expected from specifically trained LLMs these days (and the speed of evolution in this area is mind blowing). The historic use of RCDC for “diagnostic” purposes in program offices is understandable, but using RCDC for the reverse function of finding “key ideas,” “key methods,” and “key required expertise” is painful, as RCDC is by definition a function of “one word.” The quicker we move from RCDC as a QVR search engine to something less archaic, the better.

  5. Looking at the statement, “Reviewers are prohibited from sharing uploaded review information where locally hosted AI technologies can access the data,” I wonder how anyone can regulate this?

