1. Wang Y, Li N, Chen L, Wu M, Meng S, Dai Z, Zhang Y, Clarke M. Guidelines, Consensus Statements, and Standards for the Use of Artificial Intelligence in Medicine: Systematic Review. J Med Internet Res 2023;25:e46089. [PMID: 37991819] [PMCID: PMC10701655] [DOI: 10.2196/46089]
Abstract
BACKGROUND: The application of artificial intelligence (AI) in the delivery of health care is a promising area, and guidelines, consensus statements, and standards on AI covering various topics have been developed.
OBJECTIVE: To assess the quality of guidelines, consensus statements, and standards in the field of AI for medicine and to provide a foundation for recommendations about their future development.
METHODS: We searched 7 electronic databases from database establishment to April 6, 2022, and screened articles involving AI guidelines, consensus statements, and standards for eligibility. The AGREE II (Appraisal of Guidelines for Research & Evaluation II) and RIGHT (Reporting Items for Practice Guidelines in Healthcare) tools were used to assess the methodological and reporting quality of the included articles.
RESULTS: This systematic review included 19 guideline articles, 14 consensus statement articles, and 3 standard articles published between 2019 and 2022. Their content covered disease screening, diagnosis, and treatment; AI intervention trial reporting; AI imaging development and collaboration; AI data application; and AI ethics governance and applications. The mean overall AGREE II score was 4.0 (range 2.2-5.5 on a 7-point Likert scale), and the mean overall reporting rate on the RIGHT tool was 49.4% (range 25.7%-77.1%).
CONCLUSIONS: The results indicate important differences in the quality of different AI guidelines, consensus statements, and standards. We make recommendations for improving their methodological and reporting quality.
TRIAL REGISTRATION: PROSPERO International Prospective Register of Systematic Reviews (CRD42022321360); https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=321360
Affiliation(s)
- Ying Wang: Department of Medical Administration, West China Hospital, Sichuan University, Chengdu, China
- Nian Li: Department of Medical Administration, West China Hospital, Sichuan University, Chengdu, China
- Lingmin Chen: Department of Anesthesiology, National Clinical Research Center for Geriatrics, West China Hospital, Sichuan University, Chengdu, China
- Miaomiao Wu: Department of General Practice, National Clinical Research Center for Geriatrics, International Medical Center, West China Hospital, Sichuan University, Chengdu, China
- Sha Meng: Department of Operation Management, West China Hospital, Sichuan University, Chengdu, China
- Zelei Dai: Department of Radiation Oncology, Cancer Center and State Key Laboratory of Biotherapy, West China Hospital, Sichuan University, Chengdu, China
- Yonggang Zhang: Department of Periodical Press, National Clinical Research Center for Geriatrics, Chinese Evidence-based Medicine Center, Nursing Key Laboratory of Sichuan Province, West China Hospital, Sichuan University, Chengdu, China
- Mike Clarke: Northern Ireland Methodology Hub, Queen's University Belfast, Belfast, United Kingdom
2. Love M, Staggs J, Walters C, Wayant C, Torgerson T, Hartwell M, Anderson JM, Lillie A, Myers K, Brachtenbach T, Derby M, Vassar M. An analysis of the evidence underpinning the National Comprehensive Cancer Network practice guidelines. Crit Rev Oncol Hematol 2021;169:103549. [PMID: 34838981] [DOI: 10.1016/j.critrevonc.2021.103549]
Abstract
OBJECTIVE: This study assesses the quality and completeness of systematic reviews (SRs) cited by the National Comprehensive Cancer Network (NCCN) cancer screening clinical practice guidelines (CPGs).
METHODS: We evaluated SRs according to PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) and AMSTAR-2 (A MeaSurement Tool to Assess Systematic Reviews).
RESULTS: Seven NCCN CPGs citing 109 SRs were included. The mean PRISMA percent completeness of the included SRs was 71% (range 0.1-1.0). The mean AMSTAR-2 percent completeness was 56% (range 0.05-0.99). Of the 70 SRs assessed via AMSTAR-2, 42 (60%) received a "critically low" rating, 11 (15.7%) received a "low" rating, and 17 (24.3%) received a "moderate" rating. None of the SRs received a "high" rating.
CONCLUSION: Poor adherence to AMSTAR-2 and PRISMA reporting standards was prevalent among the included SRs. We suggest improved reporting of SR inclusion criteria and evaluation to bolster the reporting quality of SRs underpinning CPG recommendations.
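The AMSTAR-2 percentages above follow directly from the reported counts. As a minimal sketch (the counts come from the abstract; the tally structure and variable names are our own, not the authors'), they can be recomputed like so:

```python
# Illustrative check of the AMSTAR-2 rating percentages reported above.
# Counts are taken from the abstract; everything else is a hypothetical sketch.
from collections import Counter

ratings = Counter({"critically low": 42, "low": 11, "moderate": 17, "high": 0})
total = sum(ratings.values())  # 70 SRs assessed via AMSTAR-2

for rating, count in ratings.items():
    print(f"{rating}: {count}/{total} = {count / total:.1%}")
# critically low: 42/70 = 60.0%
# low: 11/70 = 15.7%
# moderate: 17/70 = 24.3%
# high: 0/70 = 0.0%
```

The printed proportions match the 60%, 15.7%, and 24.3% figures in the abstract.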
Affiliation(s)
- Mitchell Love: Office of Medical Student Research, Oklahoma State University Center for Health Sciences, Tulsa, OK, United States; Department of School of Educational Foundations, Leadership And Aviation, Oklahoma State University, Tulsa, OK, United States
- Jordan Staggs: Office of Medical Student Research, Oklahoma State University Center for Health Sciences, Tulsa, OK, United States; Department of School of Educational Foundations, Leadership And Aviation, Oklahoma State University, Tulsa, OK, United States
- Corbin Walters: Office of Medical Student Research, Oklahoma State University Center for Health Sciences, Tulsa, OK, United States; Department of School of Educational Foundations, Leadership And Aviation, Oklahoma State University, Tulsa, OK, United States
- Cole Wayant: Department of Internal Medicine, Baylor College of Medicine, Houston, TX, United States; Department of School of Educational Foundations, Leadership And Aviation, Oklahoma State University, Tulsa, OK, United States
- Trevor Torgerson: Office of Medical Student Research, Oklahoma State University Center for Health Sciences, Tulsa, OK, United States; Department of School of Educational Foundations, Leadership And Aviation, Oklahoma State University, Tulsa, OK, United States
- Micah Hartwell: Office of Medical Student Research, Oklahoma State University Center for Health Sciences, Tulsa, OK, United States; Department of School of Educational Foundations, Leadership And Aviation, Oklahoma State University, Tulsa, OK, United States
- J Michael Anderson: Office of Medical Student Research, Oklahoma State University Center for Health Sciences, Tulsa, OK, United States; Department of School of Educational Foundations, Leadership And Aviation, Oklahoma State University, Tulsa, OK, United States
- Anna Lillie: Office of Medical Student Research, Oklahoma State University Center for Health Sciences, Tulsa, OK, United States; Department of School of Educational Foundations, Leadership And Aviation, Oklahoma State University, Tulsa, OK, United States
- Kate Myers: Office of Medical Student Research, Oklahoma State University Center for Health Sciences, Tulsa, OK, United States; Department of School of Educational Foundations, Leadership And Aviation, Oklahoma State University, Tulsa, OK, United States
- Travis Brachtenbach: Department of Internal Medicine, Oklahoma State University Medical Center, Tulsa, OK, United States; Department of School of Educational Foundations, Leadership And Aviation, Oklahoma State University, Tulsa, OK, United States
- Micah Derby: Department of Internal Medicine, Oklahoma State University Medical Center, Tulsa, OK, United States; Department of School of Educational Foundations, Leadership And Aviation, Oklahoma State University, Tulsa, OK, United States
- Matt Vassar: Office of Medical Student Research, Oklahoma State University Center for Health Sciences, Tulsa, OK, United States; Department of School of Educational Foundations, Leadership And Aviation, Oklahoma State University, Tulsa, OK, United States
4. Wayant C, Page MJ, Vassar M. Evaluation of Reproducible Research Practices in Oncology Systematic Reviews With Meta-analyses Referenced by National Comprehensive Cancer Network Guidelines. JAMA Oncol 2019;5:1550-1555. [PMID: 31486837] [PMCID: PMC6735674] [DOI: 10.1001/jamaoncol.2019.2564]
Abstract
IMPORTANCE: Reproducible research practices are essential to biomedical research because they promote trustworthy evidence. In systematic reviews and meta-analyses, reproducible research practices ensure that the summary effects used to guide patient care are stable and trustworthy.
OBJECTIVE: To evaluate the reproducibility, in theory, of meta-analyses in oncology systematic reviews cited by the 49 National Comprehensive Cancer Network (NCCN) guidelines for the treatment of cancer by site, and to evaluate whether Cochrane reviews or systematic reviews that report adherence to Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines use more reproducible research practices.
DESIGN, SETTING, AND PARTICIPANTS: A cross-sectional investigation of all systematic reviews with at least 1 meta-analysis and at least 1 included randomized clinical trial (RCT) that are cited by NCCN guidelines for the treatment of cancer by site. We scanned the reference lists of all NCCN guidelines (n=49) for potential systematic reviews and meta-analyses. All retrieved studies were screened, and data were extracted, independently and in duplicate. The analysis was carried out between May 6, 2018, and January 28, 2019.
MAIN OUTCOMES AND MEASURES: The frequency of reproducible research practices, defined as (1) an effect estimate with a measure of precision (eg, hazard ratio with 95% confidence interval); (2) a clear list of the studies included in each analysis; and (3) for subgroup and sensitivity analyses, a clear indication of which studies were included in each group or level.
RESULTS: We identified 1124 potential systematic reviews; 154 meta-analyses comprising 3696 meta-analytic effect size estimates were included. Only 2375 of the 3696 meta-analytic estimates (64.3%), including subgroup and sensitivity analyses, were reproducible in theory. Forest plots appear to improve the reproducibility of meta-analyses. All meta-analytic estimates were reproducible in theory in 100 systematic reviews (64.9%), whereas in 15 systematic reviews (9.7%), no meta-analytic estimates could potentially be reproduced. Data were said to be imputed in 29 meta-analyses, but none specified which data. Only 1 meta-analysis included a link to an online data set.
CONCLUSIONS AND RELEVANCE: More reproducible research practices are needed in oncology meta-analyses, as suggested by those cited by the NCCN. Reporting meta-analyses in forest plots and requiring full data sharing are recommended.
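The reproducibility proportions in this abstract are simple ratios of the reported counts. As a minimal sketch (counts from the abstract; variable names are our own), they can be verified as follows:

```python
# Illustrative arithmetic check of the proportions reported in the abstract.
# The counts come from the text; the variable names are our own.
reproducible_estimates, total_estimates = 2375, 3696
fully_reproducible_srs, no_reproducible_srs, total_srs = 100, 15, 154

print(f"{reproducible_estimates / total_estimates:.1%}")  # 64.3% of estimates
print(f"{fully_reproducible_srs / total_srs:.1%}")        # 64.9% of SRs
print(f"{no_reproducible_srs / total_srs:.1%}")           # 9.7% of SRs
```

Each printed value matches the corresponding percentage reported in the results.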
Affiliation(s)
- Cole Wayant: Department of Biomedical Sciences, Oklahoma State University Center for Health Sciences, Tulsa
- Matthew J. Page: School of Public Health and Preventive Medicine, Monash University, Melbourne, Victoria, Australia
- Matt Vassar: Department of Psychiatry and Behavioral Sciences, Oklahoma State University Center for Health Sciences, Tulsa