In today’s New England Journal of Medicine, Richard Nakamura, the director of NIH’s Center for Scientific Review (CSR), and I published an essay titled “Reviewing Peer Review at the NIH.”1 As competition for NIH research grants has grown increasingly stiff, review scores are often cited as the reason an application failed to secure funding. Indeed, over the past few years, peer review has come under increasing scrutiny. Critics have argued that peer review fails in its primary mission: to help funding agencies make the best decisions about which projects and which investigators to support.2,3 Recent analyses of NIH grants suggest that peer review scores are, at best, weak predictors of bibliometric outcomes (i.e., publications and citations),4,5,6 whereas prior investigator productivity may do a better job of predicting grant productivity.4,7
In our essay, Richard and I consider three issues raised by the ongoing debates about peer review. First, how do we measure scientific impact? It is not enough to focus solely on bibliometric outcomes, which have their share of problems. One approach, proposed by Ioannidis and Khoury, goes by the “PQRST” moniker: Productivity (which includes bibliometrics), Quality, Reproducibility, Sharing of data and other resources, and Translational influence.8
Second, we note that imprecise predictions of productivity do not necessarily mean that the current peer review system is failing. As measured by current cutting-edge techniques,9 NIH-funded grants are performing well: they produce at least twice as many papers as expected, and those papers garner unusually high numbers of citations in their respective fields.4,7 Furthermore, given the relatively low success rates in securing funding, it may not be surprising that peer review cannot yield precise distinctions among proposals that are all excellent or outstanding.
Third, we focus on the idea that the process of funding science should itself be subject to evaluation “using the most rigorous scientific tools we have at our disposal.” Some thought leaders have called on funding agencies to apply the scientific method to the study of their own work.10 We consider a number of analyses and experiments we could undertake, including changing the way scores are reported (e.g., using “bins” rather than numeric scores); anonymizing grant applications prior to review; testing differentiated peer review processes (e.g., having one study section review only new applications while another reviews only renewals); and comparing peer review scores to nonbibliometric measures, such as reproducibility of findings, meaningful data sharing, and impact on clinical practice guidelines.
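To make the last of those ideas concrete, here is a minimal sketch, in Python, of the kind of scores-versus-outcomes comparison such an analysis might involve. Everything in it is hypothetical: the scores, the outcome measure, and the strength of their relationship are simulated for illustration only, not drawn from our essay or from NIH data.

```python
# Hypothetical sketch of one analysis described above: asking how strongly
# peer review scores predict a downstream outcome measure. The data are
# simulated; a real study would use actual review scores and a vetted
# outcome measure (bibliometric or otherwise).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
n_grants = 500

# Hypothetical percentile scores for funded applications (lower = better).
scores = rng.uniform(1, 25, size=n_grants)

# Hypothetical per-grant outcome (e.g., a normalized productivity index),
# built with only a weak dependence on score to mimic "weak predictor" findings.
outcome = -0.05 * scores + rng.normal(0, 1, size=n_grants)

# A rank correlation avoids assuming a linear score-outcome relationship.
rho, p_value = spearmanr(scores, outcome)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3g}")
```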
We invite you to read our Perspective essay1 (which the New England Journal of Medicine is making available for free) and to join us in efforts to further our most important common interest: funding the best science and the best scientists who, working with all stakeholders, will advance knowledge and improve the public’s health.