1. Blatch-Jones AJ, Boxall C, Asante E, Meadmore K. Exploring virtual funding committee practices in the allocation of National Institute for Health and Care Research funding: A netnographic study. F1000Res 2024; 13:338. [PMID: 38910591] [PMCID: PMC11193083] [DOI: 10.12688/f1000research.145582.1]
Abstract
Background: Funding committees, comprising members with a range of knowledge, skills, and experience, are considered integral to the decision-making processes by which funding organisations recommend or allocate research funding. However, there is limited research investigating the decision-making processes, the role of members, and their social interactions during funding committee meetings, whether conducted virtually or face-to-face.
Methods: Using a mixed-methods design and following netnography principles, the study observed nine National Institute for Health and Care Research (NIHR) programme funding committee meetings conducted virtually between October 2020 and December 2021, complemented by interviews with committee chairs and members (18 interviews) and NIHR staff (12 interviews), an online survey (50 responses), and documentary analysis. Personal reflections recorded in immersive journals also formed part of the analysis.
Results: Three main themes were identified from the observations, interviews, and online survey: efficiency of virtual committee meetings (the importance of preparation, and the role of formality, process, and structure); understanding the effect of virtual committee meetings on well-being (the effects of fatigue and apprehension, and the importance of work-life balance); and understanding social interactions and engagement (levels of engagement, contribution and inclusivity, awareness of unconscious bias, and the value of social networking).
Conclusions: Examining the decision-making practices of one funding organisation across several research programmes and multiple committee meetings over one year generated new insights into funding committee practices that previous studies have not been able to explore. Overall, it was observed that fair and transparent funding recommendations and outcomes can be achieved through virtual funding committees. However, whilst virtual funding committees offer many benefits and opportunities, such as the potential to increase membership diversity and inclusivity and to be more environmentally sustainable, more evidence is needed to evaluate their effectiveness, with particular focus on fatigue, engagement, and committee cohesion, especially when new committee members join.
Affiliation(s)
- Amanda Jane Blatch-Jones: National Institute for Health and Care Research (NIHR) Coordinating Centre, School of Healthcare Enterprise and Innovation, University of Southampton, Southampton SO16 7NS, UK
- Cherish Boxall: Southampton Clinical Trials Unit, University of Southampton, Southampton SO16 6YD, UK
- Emmanuel Asante: School of Health, Psychology and Social Care, University of Derby, Derby DE22 1GB, UK
- Katie Meadmore: National Institute for Health and Care Research (NIHR) Coordinating Centre, School of Healthcare Enterprise and Innovation, University of Southampton, Southampton SO16 7NS, UK
2. Gallo SA, Schmaling KB. Peer review: Risk and risk tolerance. PLoS One 2022; 17:e0273813. [PMID: 36026494] [PMCID: PMC9417194] [DOI: 10.1371/journal.pone.0273813]
Abstract
Peer review, commonly used in grant funding decisions, relies on scientists' ability to evaluate the quality of research proposals. Such judgments are sometimes beyond reviewers' discriminatory power and could lead to a reliance on subjective biases, including preferences for lower-risk, incremental projects. However, peer reviewers' risk tolerance has not been well studied. We conducted a cross-sectional experiment of peer reviewers' evaluations of mock primary reviewers' comments in which the level and sources of risks and weaknesses were manipulated. In these mock proposal evaluations, proposal risks predicted reviewers' scores more strongly than proposal strengths did. Reviewers' risk tolerance was not predictive of scores, but reviewer scoring leniency was predictive of overall and criteria scores. The evaluation of risks dominates reviewers' evaluation of research proposals and is a source of inter-reviewer variability. These results suggest that reviewer scoring variability may be attributable to the interpretation of proposal risks, and that reviews could benefit from intervention to improve their reliability. Additionally, the valuation of risk drives proposal evaluations and may reduce the chances that risky but highly impactful science is supported.
Affiliation(s)
- Stephen A. Gallo: Scientific Peer Advisory and Review Services Division, American Institute of Biological Sciences, Herndon, Virginia, United States of America
- Karen B. Schmaling: Department of Psychology, Washington State University, Vancouver, Washington, United States of America
3. Recio-Saucedo A, Crane K, Meadmore K, Fackrell K, Church H, Fraser S, Blatch-Jones A. What works for peer review and decision-making in research funding: a realist synthesis. Res Integr Peer Rev 2022; 7:2. [PMID: 35246264] [PMCID: PMC8894828] [DOI: 10.1186/s41073-022-00120-2]
Abstract
Introduction: Allocation of research funds relies on peer review to support funding decisions, and these processes can be susceptible to biases and inefficiencies. The aim of this work was to determine which past interventions to peer review and decision-making have worked to improve research funding practices, how they worked, and for whom.
Methods: Realist synthesis of peer-review publications and grey literature reporting interventions in peer review for research funding.
Results: We analysed 96 publications and 36 website sources. Sixty publications enabled us to extract stakeholder-specific context-mechanism-outcome configurations (CMOCs) for 50 interventions, which formed the basis of our synthesis. Shorter applications, reviewer and applicant training, virtual funding panels, enhanced decision models, institutional submission quotas, and applicant training in peer review and grant-writing reduced interrater variability, increased the relevance of funded research, reduced the time taken to write and review applications, promoted increased investment in innovation, and lowered the cost of panels.
Conclusions: Reports of 50 interventions in different areas of peer review provide useful guidance on ways of solving common issues with the peer review process. Evidence of the broader impact of these interventions on the research ecosystem is still needed, and future research should aim to identify processes that consistently work to improve peer review across funders and research contexts.
Affiliation(s)
- Alejandra Recio-Saucedo, Ksenia Crane, Katie Meadmore, Kathryn Fackrell, Hazel Church, Simon Fraser, and Amanda Blatch-Jones: Wessex Institute, National Institute of Health Research Evaluation, Trials and Studies Coordinating Centre, University of Southampton, Alpha House, Enterprise Road, Southampton SO16 7NS, UK
- Simon Fraser (additionally): School of Primary Care, Population Sciences and Medical Education, Faculty of Medicine, University of Southampton, Southampton SO17 1BJ, UK
4. Derrick GE, Bayley J. The Corona-Eye: Exploring the risks of COVID-19 on fair assessments of impact for REF2021. Research Evaluation 2021. [PMCID: PMC8499985] [DOI: 10.1093/reseval/rvab033]
Abstract
This paper assesses the risk of two COVID-19-related changes to the expert review of the REF2021 Impact criterion: the move from face-to-face (F2F) to virtual deliberation, and the changing research landscape caused by the COVID-19 crisis, which required an extension of deadlines and accommodation of COVID-19-related mitigation. Peer review in its basic form requires expert debate, in which dissenting opinions and non-verbal cues are absorbed into a group's deliberative practice and therefore inform outcomes. With a move to deliberation in virtual settings, the most likely outcome for REF2021 evaluations, this article questions the extent to which the negotiation dynamics of F2F evaluation are diminished, and how this limits panelists' ability to sensitively assess COVID-19 mitigation statements. It explores the nature of, and capabilities needed for, complex decision making in virtual settings around the Impact criterion, as well as the consequences of COVID-19 for normal Impact trajectories. It examines the risks these changes present for evaluation of the Impact criterion and provides recommendations to offset those risks, enhance discussion, and safeguard the legitimacy of evaluation outcomes. The paper is also relevant to evaluation processes for academic criteria that require a shift to virtual settings and/or guidance on how to sensitively assess the effect of COVID-19 on narratives of individual, group, or organisational performance.
Affiliation(s)
- Gemma E Derrick: Department of Educational Research, Centre for Higher Education Research & Evaluation, Lancaster University, Lancaster LA1 4YD, UK
- Julie Bayley: Lincoln Impact Literacy Institute, Vice Chancellor's Office, University of Lincoln, Brayford Pool, Lincoln LN6 7TS, UK
5. Burnaska DR, Huang GD, O'Leary TJ. Clinical trials proposed for the VA Cooperative Studies Program: Success rates and factors impacting approval. Contemp Clin Trials Commun 2021; 23:100811. [PMID: 34307958] [PMCID: PMC8287148] [DOI: 10.1016/j.conctc.2021.100811]
Abstract
The process by which funding organizations select among the myriad proposals they receive is a matter of significant concern for researchers and the public alike. Despite an extensive literature on peer review and publications on the criteria by which clinical investigations are reviewed, analyses of peer review and other processes leading to government funding decisions on large multi-site clinical trial proposals are sparse. To partially address this gap, we reviewed the outcomes of scientific and programmatic evaluation for all letters of intent (LOIs) received by the Department of Veterans Affairs (VA) Cooperative Studies Program (CSP) between July 4, 2008, and November 28, 2016. If accepted, these LOIs represented initial steps towards later full proposals that also underwent scientific peer review. Twenty-two of 87 LOIs were ultimately funded and executed as CSP projects, for an overall success rate of 25%. Most proposals that received a negative decision did so prior to submission of a full proposal. Common reasons for negative scientific review of LOIs included investigator inexperience, perceived lack of major scientific impact, lack of preliminary data, and flawed or confused experimental design, while the most common reasons for negative reviews of final proposals included questions of scientific impact and issues of study design, including outcome measures, randomization, and stratification. Completed projects have been published in high-impact clinical journals. These findings highlight several factors leading to successfully obtaining funding support for clinical trials. While our analysis is restricted to trials proposed for CSP, the similarities between its review processes and those employed by the National Institutes of Health and the Patient-Centered Outcomes Research Institute suggest that these factors may also be important in a broader context.
Affiliation(s)
- David R. Burnaska, Grant D. Huang, and Timothy J. O'Leary: Cooperative Studies Program, Office of Research and Development, Veterans Health Administration, Washington, DC 20420, USA
- Timothy J. O'Leary (additionally): Department of Pathology, University of Maryland School of Medicine, Baltimore, MD, USA
6. Bieri M, Roser K, Heyard R, Egger M. Face-to-face panel meetings versus remote evaluation of fellowship applications: simulation study at the Swiss National Science Foundation. BMJ Open 2021; 11:e047386. [PMID: 33952554] [PMCID: PMC8103360] [DOI: 10.1136/bmjopen-2020-047386]
Abstract
Objectives: To trial a simplified, time- and cost-saving method for remote evaluation of fellowship applications and compare it with existing panel review processes by analysing concordance between funding decisions, and the use of a lottery-based decision method for proposals of similar quality.
Design: The study involved 134 junior fellowship proposals for postdoctoral research ('Postdoc.Mobility'). The official method used two panel reviewers who independently scored the application, followed by triage and discussion of selected applications in a panel. Very competitive or uncompetitive proposals were directly funded or rejected without discussion. The simplified procedure used the scores of the two panel members, with or without the score of an additional, third expert. Both methods could further use a lottery to decide on applications of similar quality close to the funding threshold. The same funding rate was applied, and the agreement between the two methods analysed.
Setting: Swiss National Science Foundation (SNSF).
Participants: Postdoc.Mobility panel reviewers and additional expert reviewers.
Primary outcome measure: Per cent agreement between the simplified and official evaluation method, with 95% CIs.
Results: The simplified procedure based on three reviews agreed with the official funding outcome for 80.6% (95% CI: 73.9% to 87.3%) of applicants. The agreement was 86.6% (95% CI: 80.6% to 91.8%) when using the two reviews of the panel members. The agreement between the two methods was lower for the group of applications discussed in the panel (64.2% and 73.1%, respectively) and higher for directly funded/rejected applications (range: 96.7% to 100%). The lottery was used for 8 (6.0%) of 134 applications with the official method, 19 (14.2%) with the simplified three-reviewer procedure, and 23 (17.2%) with the simplified two-reviewer procedure. With the simplified procedure, evaluation costs could have been halved and 31 hours of meeting time saved for the two 2019 calls.
Conclusions: Agreement between the two methods was high. The simplified procedure could represent a viable evaluation method for the Postdoc.Mobility early career instrument at the SNSF.
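The threshold-plus-lottery mechanism this abstract describes (rank by reviewer scores, fund the top fraction, and randomise only decisions close to the funding threshold) can be sketched in a few lines. This is an illustrative reconstruction, not the SNSF's actual implementation: the function name `evaluate`, the `tie_band` width, and the mean-score aggregation are all assumptions.

```python
import random

def evaluate(applications, funding_rate, tie_band=0.5, seed=0):
    """Illustrative sketch of threshold-plus-lottery funding decisions.

    `applications` maps an applicant ID to a list of reviewer scores
    (higher is better). Applicants are ranked by mean score and the top
    `funding_rate` fraction is funded, except that applicants whose mean
    falls within `tie_band` of the funding threshold are decided by
    lottery. All names and parameter values are assumptions.
    """
    rng = random.Random(seed)
    ranked = sorted(
        ((sum(s) / len(s), app) for app, s in applications.items()),
        reverse=True,
    )
    n_funded = round(funding_rate * len(ranked))
    threshold = ranked[n_funded - 1][0]  # mean score at the last funded rank
    clear_fund = [app for score, app in ranked if score > threshold + tie_band]
    lottery_pool = [app for score, app in ranked
                    if abs(score - threshold) <= tie_band]
    # Remaining funded slots are drawn at random from the near-threshold pool.
    winners = rng.sample(lottery_pool, n_funded - len(clear_fund))
    return set(clear_fund) | set(winners)
```

For example, with two panel-member scores per application (the two-reviewer variant above), `evaluate({"A": [9, 9], "B": [8, 8], "C": [5, 5], "D": [4, 4], "E": [2, 2]}, funding_rate=0.4)` funds `A` outright and decides the last slot by a lottery whose pool contains only the near-threshold application `B`, mirroring how only decisions close to the threshold are randomised.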
Affiliation(s)
- Marco Bieri: Careers Division, Swiss National Science Foundation, Bern, Switzerland
- Katharina Roser: Careers Division, Swiss National Science Foundation, Bern, Switzerland; Department of Health Sciences and Medicine, University of Lucerne, Lucerne, Switzerland
- Rachel Heyard: Data Team, Swiss National Science Foundation, Bern, Switzerland
- Matthias Egger: Institute of Social & Preventive Medicine, University of Bern, Bern, Switzerland; Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, UK; Research Council, Swiss National Science Foundation, Bern, Switzerland
7. Pina DG, Buljan I, Hren D, Marušić A. A retrospective analysis of the peer review of more than 75,000 Marie Curie proposals between 2007 and 2018. eLife 2021; 10:59338. [PMID: 33439120] [PMCID: PMC7806263] [DOI: 10.7554/elife.59338]
Abstract
Most funding agencies rely on peer review to evaluate grant applications and proposals, but research into the use of this process by funding agencies has been limited. Here we explore whether two changes to the organization of peer review for proposals submitted to various funding actions by the European Union had an influence on the outcome of the peer review process. Based on an analysis of more than 75,000 applications to three actions of the Marie Curie programme over a period of 12 years, we find that the changes (a reduction in the number of evaluation criteria used by reviewers, and a move from in-person to virtual meetings) had little impact on the outcome of the peer review process. Our results indicate that other factors, such as the type of grant or area of research, have a larger impact on the outcome.
Affiliation(s)
- David G Pina: Research Executive Agency, European Commission, Brussels, Belgium
- Ivan Buljan: Department for Research in Biomedicine and Health, University of Split School of Medicine, Split, Croatia
- Darko Hren: Department of Psychology, University of Split School of Humanities and Social Sciences, Split, Croatia
- Ana Marušić: Department for Research in Biomedicine and Health, University of Split School of Medicine, Split, Croatia
8. Gallo SA, Schmaling KB, Thompson LA, Glisson SR. Grant reviewer perceptions of the quality, effectiveness, and influence of panel discussion. Res Integr Peer Rev 2020; 5:7. [PMID: 32467777] [PMCID: PMC7229595] [DOI: 10.1186/s41073-020-00093-0]
Abstract
Background: Funding agencies have long used panel discussion in the peer review of research grant proposals as a way to utilize a set of expertise and perspectives in making funding decisions. Little research has examined the quality of panel discussions and how effectively they are facilitated.
Methods: Here, we present a mixed-method analysis of data from a survey of reviewers focused on their perceptions of the quality, effectiveness, and influence of panel discussion from their last peer review experience.
Results: Reviewers indicated that panel discussions were viewed favorably in terms of participation, clarifying differing opinions, informing unassigned reviewers, and chair facilitation. However, some reviewers mentioned issues with panel discussions, including an uneven focus, limited participation from unassigned reviewers, and short discussion times. Most reviewers felt the discussions affected the review outcome, helped in choosing the best science, and were generally fair and balanced. However, those who felt the discussion did not affect the outcome were also more likely to evaluate panel communication negatively, and several reviewers mentioned potential sources of bias related to the discussion. Respondents strongly acknowledged the importance of the chair in facilitating the discussion appropriately and in limiting the influence of potential sources of bias on scoring, yet nearly a third did not find the chair of their most recent panel to have performed these roles effectively.
Conclusions: It is likely that improving chair training in the management of discussion, as well as creating review procedures informed by the science of leadership and team communication, would improve review processes and proposal review reliability.
Affiliation(s)
- Stephen A Gallo, Lisa A Thompson, and Scott R Glisson: Scientific Peer Advisory and Review Services, American Institute of Biological Sciences, Herndon, VA, USA
9. What makes an effective grants peer reviewer? An exploratory study of the necessary skills. PLoS One 2020; 15:e0232327. [PMID: 32401806] [PMCID: PMC7219739] [DOI: 10.1371/journal.pone.0232327]
Abstract
This exploratory mixed-methods study describes the skills required to be an effective peer reviewer serving on review panels conducted for federal agencies that fund research, and examines how reviewer experience and the use of technology within such panels affect reviewer skill development. Two specific review panel formats are considered: in-person face-to-face and virtual video conference. Data were collected through interviews with seven program officers and five expert peer review panelists, and surveys from 51 respondents. Results include the skills reviewers consider necessary for effective review panel participation, their assessment of the relative importance of these skills, how they are learned, and how review format affects skill development and improvement. Results are discussed relative to the peer review literature and with consideration of the professional skills needed by successful scientists and peer reviewers.
10. Forsythe LP, Frank LB, Tafari AT, Cohen SS, Lauer M, Clauser S, Goertz C, Schrandt S. Unique Review Criteria and Patient and Stakeholder Reviewers: Analysis of PCORI's Approach to Research Funding. Value Health 2018; 21:1152-1160. [PMID: 30314615] [DOI: 10.1016/j.jval.2018.03.017]
Abstract
Objective: The Patient-Centered Outcomes Research Institute (PCORI) uses a unique approach to Merit Review that includes patients and stakeholders as reviewers alongside scientists, and includes unique review criteria (patient-centeredness and active engagement of end users in the research). This study assessed the extent to which different reviewer types influence review scores and funding outcomes, the emphasis placed on technical merit compared with other criteria by a multistakeholder panel, and the impact of the in-person discussion on agreement among different reviewer types.
Methods: Cross-sectional analysis of administrative data from PCORI online and in-person Merit Review (N = 1312 applications from the five funding cycles from November 2013 to August 2015). Linear and logistic regression models were used to analyze the data.
Results: For all reviewer types, final review scores were associated with at least one review criterion score from each of the three reviewer types. The strongest predictor of final overall scores for all reviewer types was scientists' prediscussion ratings of technical merit. All reviewers' prediscussion ratings of the potential to improve health care and outcomes, and scientists' ratings of technical merit and patient-centeredness, were associated with funding success. For each reviewer type, overall impact scores from the online scoring were changed on at least half of the applications at the in-person panel discussion. Score agreement across reviewer types was greater after panel discussion.
Conclusions: Scientist, patient, and stakeholder views all contribute to PCORI Merit Review of applications for research funding. Technical merit is critical to funding success, but patient and stakeholder ratings of other criteria also influence application disposition.
Affiliation(s)
- Laura P Forsythe, Lori B Frank, A Tsahai Tafari, and Steven Clauser: Patient-Centered Outcomes Research Institute (PCORI), Washington, DC, USA
- Suzanne Schrandt: Patient-Centered Outcomes Research Institute (PCORI), Washington, DC, USA; Arthritis Foundation, Washington, DC, USA
11. Gallo SA, Glisson SR. External Tests of Peer Review Validity Via Impact Measures. Front Res Metr Anal 2018. [DOI: 10.3389/frma.2018.00022]
12. Shepherd J, Frampton GK, Pickett K, Wyatt JC. Peer review of health research funding proposals: A systematic map and systematic review of innovations for effectiveness and efficiency. PLoS One 2018; 13:e0196914. [PMID: 29750807] [PMCID: PMC5947897] [DOI: 10.1371/journal.pone.0196914]
Abstract
Objective: To investigate methods and processes for timely, efficient, and good quality peer review of research funding proposals in health.
Methods: A two-stage evidence synthesis: (1) a systematic map to describe the key characteristics of the evidence base, followed by (2) a systematic review of the studies stakeholders prioritised as relevant from the map, focusing on the effectiveness and efficiency of peer review 'innovations'. Standard processes included literature searching, duplicate inclusion criteria screening, study keyword coding, data extraction, critical appraisal, and study synthesis.
Results: A total of 83 studies from 15 countries were included in the systematic map. The evidence base is diverse, investigating many aspects of the systems for, and processes of, peer review. The systematic review included eight studies from Australia, Canada, and the USA, evaluating a broad range of peer review innovations. These studies showed that simplifying the process by shortening proposal forms, using smaller reviewer panels, or expediting processes can speed up the review process and reduce costs, but this might come at the expense of peer review quality, a key aspect that has not been assessed. Virtual peer review using videoconferencing or teleconferencing appears promising for reducing costs by avoiding the need for reviewers to travel, but again any consequences for quality have not been adequately assessed.
Conclusions: There is increasing international research activity into the peer review of health research funding. The studies reviewed had methodological limitations and variable generalisability to research funders. Given these limitations it is not currently possible to recommend immediate implementation of these innovations. However, many appear promising based on existing evidence, and could be adapted as necessary by funders and evaluated. Where feasible, experimental evaluation, including randomised controlled trials, should be conducted, evaluating impact on effectiveness, efficiency, and quality.
Affiliation(s)
- Jonathan Shepherd, Geoff K. Frampton, Karen Pickett, and Jeremy C. Wyatt: Wessex Institute, Faculty of Medicine, University of Southampton, Southampton, United Kingdom
13. Liaw L, Freedman JE, Becker LB, Mehta NN, Liscum L. Peer Review Practices for Evaluating Biomedical Research Grants: A Scientific Statement From the American Heart Association. Circ Res 2017; 121:e9-e19. [PMID: 28684631] [DOI: 10.1161/res.0000000000000158]
Abstract
The biomedical research enterprise depends on the fair and objective peer review of research grants, leading to the distribution of resources through efficient and robust competitive methods. In the United States, federal funding agencies and foundations collectively distribute billions of dollars annually to support biomedical research. For the American Heart Association, a Peer Review Subcommittee is charged with establishing the highest standards for peer review. This scientific statement reviews the current literature on peer review practices, describes the current American Heart Association peer review process and those of other agencies, analyzes the strengths and weaknesses of American Heart Association peer review practices, and recommends best practices for the future.