201
Bajgain B, Lorenzetti D, Lee J, Sauro K. Determinants of implementing artificial intelligence-based clinical decision support tools in healthcare: a scoping review protocol. BMJ Open 2023; 13:e068373. PMID: 36822813; PMCID: PMC9950925; DOI: 10.1136/bmjopen-2022-068373.
Abstract
INTRODUCTION Artificial intelligence (AI), the simulation of human intelligence processes by machines, is being increasingly leveraged to facilitate clinical decision-making. AI-based clinical decision support (CDS) tools can improve the quality of care and appropriate use of healthcare resources, and decrease healthcare provider burnout. Understanding the determinants of implementing AI-based CDS tools in healthcare delivery is vital to reap the benefits of these tools. The objective of this scoping review is to map and synthesise determinants (barriers and facilitators) to implementing AI-based CDS tools in healthcare. METHODS AND ANALYSIS This scoping review will follow the Joanna Briggs Institute methodology and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews checklist. Search terms will be tailored to each database; the databases searched will include MEDLINE, Embase, CINAHL, APA PsycINFO and the Cochrane Library. Grey literature and references of included studies will also be searched. The search will include studies published from database inception until 10 May 2022. We will not limit searches by study design or language. Studies that either report determinants or describe the implementation of AI-based CDS tools in clinical practice and/or healthcare settings will be included. The identified determinants (barriers and facilitators) will be described by synthesising the themes using the Theoretical Domains Framework. The outcome variables measured will be mapped and the measures of effectiveness will be summarised using descriptive statistics. ETHICS AND DISSEMINATION Ethics approval is not required because all data for this study have been previously published. The findings of this review will be published in a peer-reviewed journal and presented at academic conferences. Importantly, the findings of this scoping review will be widely presented to decision-makers, health system administrators, healthcare providers, and patients and family/caregivers as part of an implementation study of an AI-based CDS for the treatment of coronary artery disease.
Affiliation(s)
- Bishnu Bajgain: Department of Community Health Sciences, University of Calgary, Calgary, Alberta, Canada
- Diane Lorenzetti: Department of Community Health Sciences, University of Calgary, Calgary, Alberta, Canada
- Joon Lee: Department of Community Health Sciences, University of Calgary, Calgary, Alberta, Canada; Department of Cardiac Sciences, University of Calgary, Calgary, Alberta, Canada
- Khara Sauro: Departments of Community Health Sciences, Surgery & Oncology, University of Calgary, Calgary, Alberta, Canada
202
Liu L, Wu X, Lin D, Zhao L, Li M, Yun D, Lin Z, Pang J, Li L, Wu Y, Lai W, Xiao W, Shang Y, Feng W, Tan X, Li Q, Liu S, Lin X, Sun J, Zhao Y, Yang X, Ye Q, Zhong Y, Huang X, He Y, Fu Z, Xiang Y, Zhang L, Zhao M, Qu J, Xu F, Lu P, Li J, Xu F, Wei W, Dong L, Dai G, He X, Yan W, Zhu Q, Lu L, Zhang J, Zhou W, Meng X, Li S, Shen M, Jiang Q, Chen N, Zhou X, Li M, Wang Y, Zou H, Zhong H, Yang W, Shou W, Zhong X, Yang Z, Ding L, Hu Y, Tan G, He W, Zhao X, Chen Y, Liu Y, Lin H. DeepFundus: A flow-cytometry-like image quality classifier for boosting the whole life cycle of medical artificial intelligence. Cell Rep Med 2023; 4:100912. PMID: 36669488; PMCID: PMC9975093; DOI: 10.1016/j.xcrm.2022.100912.
Abstract
Medical artificial intelligence (AI) has been moving from the research phase to clinical implementation. However, most AI-based models are mainly built using high-quality images preprocessed in the laboratory, which is not representative of real-world settings. This dataset bias has proven to be a major driver of AI system dysfunction. Inspired by the design of flow cytometry, DeepFundus, a deep-learning-based fundus image classifier, is developed to provide automated and multidimensional image sorting to address this data quality gap. DeepFundus achieves areas under the receiver operating characteristic curves (AUCs) over 0.9 in image classification concerning overall quality, clinical quality factors, and structural quality analysis on both the internal test and national validation datasets. Additionally, DeepFundus can be integrated into both model development and clinical application of AI diagnostics to significantly enhance model performance for detecting multiple retinopathies. DeepFundus can be used to construct a data-driven paradigm for improving the entire life cycle of medical AI practice.
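The DeepFundus code itself is not reproduced in this entry; the sketch below only illustrates the general recipe the abstract describes, namely a CNN backbone with one output per image-quality dimension, each scored with its own AUC. The label names, the ResNet-50 backbone, and the binary-label setup are assumptions made for the sake of a runnable example.

```python
# Minimal sketch (not the DeepFundus code): a multi-label fundus image-quality
# classifier built on a standard CNN backbone, evaluated with per-label AUC.
import torch
import torch.nn as nn
from torchvision.models import resnet50
from sklearn.metrics import roc_auc_score

QUALITY_LABELS = ["overall_quality", "illumination", "clarity", "field_position"]  # assumed names

class FundusQualityNet(nn.Module):
    def __init__(self, n_labels: int = len(QUALITY_LABELS)):
        super().__init__()
        self.backbone = resnet50(weights=None)            # pretrained weights optional
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, n_labels)

    def forward(self, x):                                 # x: (N, 3, H, W) fundus photographs
        return self.backbone(x)                           # raw logits, one per quality dimension

model = FundusQualityNet()
criterion = nn.BCEWithLogitsLoss()                        # independent binary decision per dimension

# toy batch standing in for preprocessed fundus images and binary quality labels
images = torch.randn(8, 3, 224, 224)
targets = torch.randint(0, 2, (8, len(QUALITY_LABELS))).float()
loss = criterion(model(images), targets)
loss.backward()

# per-dimension AUC, the metric reported for the internal test and national validation sets
with torch.no_grad():
    probs = torch.sigmoid(model(images)).numpy()
for i, name in enumerate(QUALITY_LABELS):
    if len(set(targets[:, i].tolist())) > 1:              # AUC needs both classes present
        print(name, roc_auc_score(targets[:, i].numpy(), probs[:, i]))
```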
Affiliation(s)
- Lixue Liu: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Xiaohang Wu: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Duoru Lin: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Lanqin Zhao: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Mingyuan Li: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Dongyuan Yun: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Zhenzhe Lin: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Jianyu Pang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Longhui Li: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Yuxuan Wu: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Weiyi Lai: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Wei Xiao: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Yuanjun Shang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Weibo Feng: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Xiao Tan: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Qiang Li: Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
- Shenzhen Liu: Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
- Xinxin Lin: Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
- Jiaxin Sun: Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
- Yiqi Zhao: Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
- Ximei Yang: Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
- Qinying Ye: Department of Ophthalmology, Second Affiliated Hospital, Guangdong Medical University, Zhanjiang, Guangdong, China
- Yuesi Zhong: Department of Ophthalmology, Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
- Xi Huang: Department of Ophthalmology, Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
- Yuan He: Department of Ophthalmology, The Second Affiliated Hospital of Xi'an Medical University, Xi'an, Shaanxi, China
- Ziwei Fu: Department of Ophthalmology, The Second Affiliated Hospital of Xi'an Medical University, Xi'an, Shaanxi, China
- Yi Xiang: Department of Ophthalmology, Central Hospital of Wuhan, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Li Zhang: Department of Ophthalmology, Central Hospital of Wuhan, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Mingwei Zhao: Department of Ophthalmology, People's Hospital of Peking University, Beijing, China
- Jinfeng Qu: Department of Ophthalmology, People's Hospital of Peking University, Beijing, China
- Fan Xu: Department of Ophthalmology, People's Hospital of Guangxi Zhuang Autonomous Region, Nanning, Guangxi, China
- Peng Lu: Department of Ophthalmology, People's Hospital of Guangxi Zhuang Autonomous Region, Nanning, Guangxi, China
- Jianqiao Li: Department of Ophthalmology, Qilu Hospital, Shandong University, Jinan, Shandong, China
- Fabao Xu: Department of Ophthalmology, Qilu Hospital, Shandong University, Jinan, Shandong, China
- Wenbin Wei: Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Li Dong: Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Xingru He: School of Public Health, He University, Shenyang, Liaoning, China
- Wentao Yan: The Eye Hospital of Wenzhou Medical University, Wenzhou, Zhejiang, China
- Qiaolin Zhu: The Eye Hospital of Wenzhou Medical University, Wenzhou, Zhejiang, China
- Linna Lu: Department of Ophthalmology, Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Jiaying Zhang: Department of Ophthalmology, Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Wei Zhou: Department of Ophthalmology, Tianjin Medical University General Hospital, Tianjin, China
- Xiangda Meng: Department of Ophthalmology, Tianjin Medical University General Hospital, Tianjin, China
- Shiying Li: Department of Ophthalmology, Xiang'an Hospital of Xiamen University, Xiamen, Fujian, China
- Mei Shen: Department of Ophthalmology, Xiang'an Hospital of Xiamen University, Xiamen, Fujian, China
- Qin Jiang: The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, Jiangsu, China
- Nan Chen: The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, Jiangsu, China
- Xingtao Zhou: Department of Ophthalmology, Eye and ENT Hospital, Fudan University, Shanghai, China
- Meiyan Li: Department of Ophthalmology, Eye and ENT Hospital, Fudan University, Shanghai, China
- Yan Wang: Tianjin Eye Hospital, Tianjin Key Lab of Ophthalmology and Visual Science, Tianjin Eye Institute, Nankai University, Tianjin, China
- Haohan Zou: Tianjin Eye Hospital, Tianjin Key Lab of Ophthalmology and Visual Science, Tianjin Eye Institute, Nankai University, Tianjin, China
- Hua Zhong: Department of Ophthalmology, The First Affiliated Hospital of Kunming Medical University, Kunming, Yunnan, China
- Wenyan Yang: Department of Ophthalmology, The First Affiliated Hospital of Kunming Medical University, Kunming, Yunnan, China
- Wulin Shou: Jiaxing Chaoju Eye Hospital, Jiaxing, Zhejiang, China
- Xingwu Zhong: Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou, Hainan, China
- Zhenduo Yang: Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou, Hainan, China
- Lin Ding: Department of Ophthalmology, People's Hospital of Xinjiang Uygur Autonomous Region, Urumqi, Xinjiang, China
- Yongcheng Hu: Bayannur Xudong Eye Hospital, Bayannur, Inner Mongolia, China
- Gang Tan: Department of Ophthalmology, The First Affiliated Hospital, Hengyang Medical School, University of South China, Hengyang, Hunan, China
- Wanji He: Beijing Airdoc Technology Co., Ltd., Beijing, China
- Xin Zhao: Beijing Airdoc Technology Co., Ltd., Beijing, China
- Yuzhong Chen: Beijing Airdoc Technology Co., Ltd., Beijing, China
- Yizhi Liu: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Haotian Lin: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China; Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou, Hainan, China; Center for Precision Medicine and Department of Genetics and Biomedical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
203
Joseph N, Benetz BA, Chirra P, Menegay H, Oellerich S, Baydoun L, Melles GRJ, Lass JH, Wilson DL. Machine Learning Analysis of Postkeratoplasty Endothelial Cell Images for the Prediction of Future Graft Rejection. Transl Vis Sci Technol 2023; 12:22. PMID: 36790821; PMCID: PMC9940770; DOI: 10.1167/tvst.12.2.22.
Abstract
Purpose This study developed machine learning (ML) classifiers of postoperative corneal endothelial cell images to identify postkeratoplasty patients at risk for allograft rejection within 1 to 24 months of treatment. Methods Central corneal endothelium specular microscopic images were obtained from 44 patients after Descemet membrane endothelial keratoplasty (DMEK), half of whom had experienced graft rejection. Images from all patients' last and second-to-last imaging time points prior to rejection (175 and 168 images, respectively) were segmented with deep learning, and 432 quantitative features were extracted assessing cellular spatial arrangements and cell intensity values. Random forest (RF) and logistic regression (LR) models were trained on novel-to-this-application features from single time points, delta-radiomics, and traditional morphometrics (endothelial cell density, coefficient of variation, hexagonality) via 10 iterations of threefold cross-validation. Final assessments were evaluated on a held-out test set. Results ML classifiers trained on novel-to-this-application features outperformed those trained on traditional morphometrics for predicting future graft rejection. RF and LR models predicted post-DMEK patients' allograft rejection in the held-out test set with >0.80 accuracy. RF models trained on novel features from second-to-last time points and delta-radiomics predicted post-DMEK patients' rejection with >0.70 accuracy. Cell-graph spatial arrangement, intensity, and shape features were most indicative of graft rejection. Conclusions ML classifiers successfully predicted future graft rejections 1 to 24 months prior to clinically apparent rejection. This technology could aid clinicians in identifying patients at risk for graft rejection and guide treatment plans accordingly. Translational Relevance Our software applies ML techniques to clinical images and enhances patient care by detecting preclinical keratoplasty rejection.
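As an illustration of the evaluation protocol summarized above (random forest and logistic regression trained with 10 iterations of threefold cross-validation, then assessed on a held-out test set), a minimal scikit-learn sketch follows. The synthetic feature matrix merely stands in for the study's 432 cell-graph and intensity features and is not the authors' pipeline.

```python
# Illustrative sketch of the described protocol with synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=175, n_features=432, n_informative=20, random_state=0)
X_dev, X_test, y_dev, y_test = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

cv = RepeatedStratifiedKFold(n_splits=3, n_repeats=10, random_state=0)   # 10 x threefold CV
models = {
    "random_forest": RandomForestClassifier(n_estimators=500, random_state=0),
    "logistic_regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000)),
}
for name, model in models.items():
    cv_auc = cross_val_score(model, X_dev, y_dev, cv=cv, scoring="roc_auc")
    model.fit(X_dev, y_dev)
    test_acc = model.score(X_test, y_test)                # final assessment on held-out data
    print(f"{name}: CV AUROC {cv_auc.mean():.2f} +/- {cv_auc.std():.2f}, held-out accuracy {test_acc:.2f}")
```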
Affiliation(s)
- Naomi Joseph: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Beth Ann Benetz: Department of Ophthalmology and Visual Sciences, Case Western Reserve University, Cleveland, OH, USA; Cornea Image Analysis Reading Center, Cleveland, OH, USA
- Prathyush Chirra: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Harry Menegay: Department of Ophthalmology and Visual Sciences, Case Western Reserve University, Cleveland, OH, USA; Cornea Image Analysis Reading Center, Cleveland, OH, USA
- Silke Oellerich: Netherlands Institute for Innovative Ocular Surgery (NIIOS), Rotterdam, The Netherlands
- Lamis Baydoun: Netherlands Institute for Innovative Ocular Surgery (NIIOS), Rotterdam, The Netherlands; University Eye Hospital Münster, Münster, Germany; ELZA Institute Dietikon/Zurich, Zurich, Switzerland
- Gerrit R. J. Melles: Netherlands Institute for Innovative Ocular Surgery (NIIOS), Rotterdam, The Netherlands; NIIOS-USA, San Diego, CA, USA
- Jonathan H. Lass: Department of Ophthalmology and Visual Sciences, Case Western Reserve University, Cleveland, OH, USA; Cornea Image Analysis Reading Center, Cleveland, OH, USA
- David L. Wilson: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
204
Peeters F, Rommes S, Elen B, Gerrits N, Stalmans I, Jacob J, De Boever P. Artificial Intelligence Software for Diabetic Eye Screening: Diagnostic Performance and Impact of Stratification. J Clin Med 2023; 12:1408. PMID: 36835942; PMCID: PMC9967595; DOI: 10.3390/jcm12041408.
Abstract
AIM To evaluate the MONA.health artificial intelligence screening software for detecting referable diabetic retinopathy (DR) and diabetic macular edema (DME), including subgroup analysis. METHODS The algorithm's threshold value was fixed at the 90% sensitivity operating point on the receiver operating characteristic curve to perform the disease classification. Diagnostic performance was appraised on a private test set and publicly available datasets. Stratification analysis was performed on the private test set considering age, ethnicity, sex, insulin dependency, year of examination, camera type, image quality, and dilatation status. RESULTS The software displayed an area under the curve (AUC) of 97.28% for DR and 98.08% for DME on the private test set. The specificity and sensitivity for combined DR and DME predictions were 94.24% and 90.91%, respectively. The AUC ranged from 96.91% to 97.99% on the publicly available datasets for DR. AUC values were above 95% in all subgroups, with lower predictive values found for individuals above the age of 65 (82.51% sensitivity) and Caucasians (84.03% sensitivity). CONCLUSION We report good overall performance of the MONA.health screening software for DR and DME. The software performance remains stable with no significant deterioration of the deep learning models in any studied strata.
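The MONA.health model and dataset are not public, so the sketch below only shows one common way to fix a classifier's operating threshold at the 90% sensitivity point of the ROC curve, as the abstract describes; synthetic scores stand in for the software's DR/DME outputs.

```python
# Hedged sketch: choosing the 90%-sensitivity operating point on an ROC curve.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 2000)                          # 1 = referable DR/DME (toy labels)
scores = rng.normal(loc=y_true * 1.5, scale=1.0)           # synthetic model scores

fpr, tpr, thresholds = roc_curve(y_true, scores)
idx = np.argmax(tpr >= 0.90)                               # first threshold reaching >= 90% sensitivity
print("AUC:", roc_auc_score(y_true, scores))
print("chosen threshold:", thresholds[idx])
print("sensitivity:", tpr[idx], "specificity:", 1 - fpr[idx])
```

Once the threshold is fixed on development data, the same cutoff is simply applied to new test sets, which is why the reported specificity varies across cohorts while sensitivity stays near the chosen operating point.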
Affiliation(s)
- Freya Peeters: Department of Ophthalmology, University Hospitals Leuven, 3000 Leuven, Belgium; Biomedical Sciences Group, Research Group Ophthalmology, Department of Neurosciences, KU Leuven, 3000 Leuven, Belgium
- Stef Rommes: MONA.health, 3060 Bertem, Belgium; Flemish Institute for Technological Research (VITO), 2400 Mol, Belgium
- Bart Elen: Flemish Institute for Technological Research (VITO), 2400 Mol, Belgium
- Nele Gerrits: Flemish Institute for Technological Research (VITO), 2400 Mol, Belgium
- Ingeborg Stalmans: Department of Ophthalmology, University Hospitals Leuven, 3000 Leuven, Belgium; Biomedical Sciences Group, Research Group Ophthalmology, Department of Neurosciences, KU Leuven, 3000 Leuven, Belgium
- Julie Jacob: Department of Ophthalmology, University Hospitals Leuven, 3000 Leuven, Belgium; Biomedical Sciences Group, Research Group Ophthalmology, Department of Neurosciences, KU Leuven, 3000 Leuven, Belgium
- Patrick De Boever: Flemish Institute for Technological Research (VITO), 2400 Mol, Belgium; Centre for Environmental Sciences, Hasselt University, Diepenbeek, 3500 Hasselt, Belgium
205
Taribagil P, Hogg HDJ, Balaskas K, Keane PA. Integrating artificial intelligence into an ophthalmologist’s workflow: obstacles and opportunities. Expert Rev Ophthalmol 2023. DOI: 10.1080/17469899.2023.2175672.
Affiliation(s)
- Priyal Taribagil: Medical Retina Department, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- HD Jeffry Hogg: Medical Retina Department, Moorfields Eye Hospital NHS Foundation Trust, London, UK; Department of Population Health Science, Population Health Science Institute, Newcastle University, Newcastle upon Tyne, UK; Department of Ophthalmology, Newcastle upon Tyne Hospitals NHS Foundation Trust, Freeman Road, Newcastle upon Tyne, UK
- Konstantinos Balaskas: NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK; Medical Retina, Institute of Ophthalmology, University College of London Institute of Ophthalmology, London, UK
- Pearse A Keane: NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK; Medical Retina, Institute of Ophthalmology, University College of London Institute of Ophthalmology, London, UK
206
Soh ZD, Jiang Y, S/O Ganesan SS, Zhou M, Nongiur M, Majithia S, Tham YC, Rim TH, Qian C, Koh V, Aung T, Wong TY, Xu X, Liu Y, Cheng CY. From 2 dimensions to 3rd dimension: Quantitative prediction of anterior chamber depth from anterior segment photographs via deep-learning. PLOS Digit Health 2023; 2:e0000193. PMID: 36812642; PMCID: PMC9931242; DOI: 10.1371/journal.pdig.0000193.
Abstract
Anterior chamber depth (ACD) is a major risk factor of angle closure disease, and has been used in angle closure screening in various populations. However, ACD is measured with an ocular biometer or anterior segment optical coherence tomography (AS-OCT), both of which are costly and may not be readily available in primary care and community settings. Thus, this proof-of-concept study aims to predict ACD from low-cost anterior segment photographs (ASPs) using deep learning (DL). We included 2,311 pairs of ASPs and ACD measurements for algorithm development and validation, and 380 pairs for algorithm testing. We captured ASPs with a digital camera mounted on a slit-lamp biomicroscope. Anterior chamber depth was measured with an ocular biometer (IOLMaster 700 or Lenstar LS9000) in data used for algorithm development and validation, and with AS-OCT (Visante) in data used for testing. The DL algorithm was modified from the ResNet-50 architecture, and assessed using mean absolute error (MAE), coefficient of determination (R2), Bland-Altman plots and intraclass correlation coefficients (ICC). In validation, our algorithm predicted ACD with a MAE (standard deviation) of 0.18 (0.14) mm; R2 = 0.63. The MAE of predicted ACD was 0.18 (0.14) mm in eyes with open angles and 0.19 (0.14) mm in eyes with angle closure. The ICC between actual and predicted ACD measurements was 0.81 (95% CI 0.77, 0.84). In testing, our algorithm predicted ACD with a MAE of 0.23 (0.18) mm; R2 = 0.37. Saliency maps highlighted the pupil and its margin as the main structures used in ACD prediction. This study demonstrates the possibility of predicting ACD from ASPs via DL. This algorithm mimics an ocular biometer in making its prediction, and provides a foundation to predict other quantitative measurements that are relevant to angle closure screening.
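A minimal sketch of the kind of model the abstract describes, that is, a ResNet-50 backbone modified to output a single continuous ACD value and summarized with MAE and R2. The loss choice, input size, and toy data are assumptions; the study's training details and weights are not reproduced here.

```python
# Sketch: ResNet-50 with a one-output regression head for anterior chamber depth (mm).
import torch
import torch.nn as nn
from torchvision.models import resnet50
from sklearn.metrics import mean_absolute_error, r2_score

class ACDRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = resnet50(weights=None)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)  # single continuous output

    def forward(self, x):
        return self.backbone(x).squeeze(1)

model = ACDRegressor()
criterion = nn.L1Loss()                                    # directly optimizes mean absolute error

photos = torch.randn(4, 3, 224, 224)                       # stand-ins for anterior segment photographs
acd_mm = torch.tensor([2.6, 3.1, 2.2, 3.4])                # stand-ins for biometer ACD measurements
loss = criterion(model(photos), acd_mm)
loss.backward()

with torch.no_grad():
    pred = model(photos).numpy()
print("MAE (mm):", mean_absolute_error(acd_mm.numpy(), pred))
print("R2:", r2_score(acd_mm.numpy(), pred))
```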
Affiliation(s)
- Zhi Da Soh: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Yixing Jiang: Institute of High Performance Computing, Agency for Science, Technology and Research, Singapore
- Menghan Zhou: Institute of High Performance Computing, Agency for Science, Technology and Research, Singapore
- Monisha Nongiur: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology & Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Shivani Majithia: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Yih Chung Tham: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Ophthalmology & Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Tyler Hyungtaek Rim: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology & Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Chaoxu Qian: Department of Ophthalmology, The First Affiliated Hospital of Kunming Medical University, Kunming, China
- Victor Koh: Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Department of Ophthalmology, National University Hospital, Singapore
- Tin Aung: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Ophthalmology & Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Tien Yin Wong: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology & Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore; Tsinghua Medicine, Tsinghua University, China
- Xinxing Xu: Institute of High Performance Computing, Agency for Science, Technology and Research, Singapore
- Yong Liu: Institute of High Performance Computing, Agency for Science, Technology and Research, Singapore
- Ching-Yu Cheng: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Ophthalmology & Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
207
Kamalipour A, Moghimi S, Khosravi P, Mohammadzadeh V, Nishida T, Micheletti E, Wu JH, Mahmoudinezhad G, Li EHF, Christopher M, Zangwill L, Javidi T, Weinreb RN. Combining Optical Coherence Tomography and Optical Coherence Tomography Angiography Longitudinal Data for the Detection of Visual Field Progression in Glaucoma. Am J Ophthalmol 2023; 246:141-154. PMID: 36328200; DOI: 10.1016/j.ajo.2022.10.016.
Abstract
PURPOSE To use longitudinal optical coherence tomography (OCT) and OCT angiography (OCTA) data to detect glaucomatous visual field (VF) progression with a supervised machine learning approach. DESIGN Prospective cohort study. METHODS One hundred ten eyes of patients with suspected glaucoma (33.6%) and patients with glaucoma (66.4%), with a minimum of five 24-2 VF tests and three optic nerve head and macula images over an average follow-up duration of 4.1 years, were included. VF progression was defined using a composite measure including either a "likely progression event" on Guided Progression Analysis, a statistically significant negative slope of VF mean deviation or VF index, or a positive pointwise linear regression event. Feature-based gradient boosting classifiers were developed using different subsets of baseline and longitudinal OCT and OCTA summary parameters. The area under the receiver operating characteristic curve (AUROC) was used to compare the classification performance of different models. RESULTS VF progression was detected in 28 eyes (25.5%). The model with combined baseline and longitudinal OCT and OCTA parameters at the global and hemifield levels had the best classification accuracy for detecting VF progression (AUROC = 0.89). Models including combined OCT and OCTA parameters had higher classification accuracy compared with those with individual subsets of OCT or OCTA features alone. Including hemifield measurements significantly improved the models' classification accuracy compared with using global measurements alone. Including longitudinal rates of change of OCT and OCTA parameters (AUROCs = 0.80-0.89) considerably increased the classification accuracy of the models with baseline measurements alone (AUROCs = 0.60-0.63). CONCLUSIONS Longitudinal OCTA measurements complement OCT-derived structural metrics for the evaluation of functional VF loss in patients with glaucoma.
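To make the modelling setup concrete, the sketch below pairs baseline OCT/OCTA summary features with per-parameter longitudinal rates of change and compares gradient boosting classifiers by AUROC, mirroring the comparison described above. The feature count, the cross-validation scheme, and the data are illustrative assumptions, not the study's actual pipeline.

```python
# Sketch: gradient boosting on baseline features vs. baseline + longitudinal slopes, compared by AUROC.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict, StratifiedKFold

rng = np.random.default_rng(1)
n_eyes = 110
baseline = rng.normal(size=(n_eyes, 6))                    # e.g., global/hemifield OCT + OCTA summaries
slopes = rng.normal(size=(n_eyes, 6))                      # per-parameter rates of change over follow-up
progressed = rng.integers(0, 2, n_eyes)                    # composite VF progression label (toy)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
for name, X in {"baseline only": baseline,
                "baseline + longitudinal rates": np.hstack([baseline, slopes])}.items():
    probs = cross_val_predict(GradientBoostingClassifier(random_state=1), X, progressed,
                              cv=cv, method="predict_proba")[:, 1]
    print(name, "AUROC:", round(roc_auc_score(progressed, probs), 3))
```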
Affiliation(s)
- Alireza Kamalipour: Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology
- Sasan Moghimi: Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology
- Pooya Khosravi: School of Medicine, University of California, Irvine, Irvine, California, USA
- Vahid Mohammadzadeh: Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology
- Takashi Nishida: Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology
- Eleonora Micheletti: Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology
- Jo-Hsuan Wu: Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology
- Golnoush Mahmoudinezhad: Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology
- Elizabeth H F Li: Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology
- Mark Christopher: Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology
- Linda Zangwill: Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology
- Tara Javidi: Department of Electrical and Computer Engineering, University of California San Diego, La Jolla
- Robert N Weinreb: Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology
208
Medhin LB, Beasley AB, Warburton L, Amanuel B, Gray ES. Extracellular vesicles as a liquid biopsy for melanoma: Are we there yet? Semin Cancer Biol 2023; 89:92-98. PMID: 36706847; DOI: 10.1016/j.semcancer.2023.01.008.
Abstract
Melanoma is the most aggressive form of skin cancer owing to its high propensity to metastasise in distant organs and develop resistance to treatment. The scarce treatment options available for melanoma underscore the need for biomarkers to guide treatment decisions. In this context, an attractive alternative to overcome the limitations of repeated tissue sampling is the analysis of peripheral blood samples, referred to as 'liquid biopsy'. In particular, the analysis of extracellular vesicles (EVs) has emerged as a promising candidate due to their role in orchestrating cancer dissemination, immune modulation, and drug resistance. As we gain insights into the role of EVs in cancer and melanoma, their potential for clinical use is becoming apparent. Herein, we critically summarise the current evidence supporting EVs as biomarkers for melanoma diagnosis, prognostication, therapy response prediction, and drug resistance. EVs are proposed as a candidate biomarker for predicting therapeutic response to immune checkpoint inhibition. However, to realise the potential of EV analysis for clinical decision-making, strong clinical validation is required, underscoring the need for further research in this area.
Affiliation(s)
- Lidia B Medhin: Centre for Precision Health, Edith Cowan University, Joondalup WA 6027, Australia; School of Medical and Health Sciences, Edith Cowan University, Joondalup WA 6027, Australia
- Aaron B Beasley: Centre for Precision Health, Edith Cowan University, Joondalup WA 6027, Australia; School of Medical and Health Sciences, Edith Cowan University, Joondalup WA 6027, Australia
- Lydia Warburton: Centre for Precision Health, Edith Cowan University, Joondalup WA 6027, Australia; School of Medical and Health Sciences, Edith Cowan University, Joondalup WA 6027, Australia; Department of Medical Oncology, Fiona Stanley Hospital, Murdoch, Australia
- Benhur Amanuel: School of Medical and Health Sciences, Edith Cowan University, Joondalup WA 6027, Australia; Department of Anatomical Pathology PathWest, QEII Medical Centre, Nedlands WA 6009, Australia
- Elin S Gray: Centre for Precision Health, Edith Cowan University, Joondalup WA 6027, Australia; School of Medical and Health Sciences, Edith Cowan University, Joondalup WA 6027, Australia
209
Fluorescence Angiography with Dual Fluorescence for the Early Detection and Longitudinal Quantitation of Vascular Leakage in Retinopathy. Biomedicines 2023; 11:293. PMID: 36830829; PMCID: PMC9953145; DOI: 10.3390/biomedicines11020293.
Abstract
BACKGROUND Diabetic retinopathy (DR) afflicts more than 93 million people worldwide and is a leading cause of vision loss in working adults. While DR therapies are available, early DR development may go undetected without treatment due to the lack of sufficiently sensitive tools. Therefore, early detection is critically important to enable efficient treatment before progression to vision-threatening complications. A major clinical manifestation of early DR is retinal vascular leakage that may progress from diffuse to more localized focal leakage, leading to increased retinal thickness and diabetic macular edema (DME). In preclinical research, a hallmark of DR in mouse models is diffuse retinal leakage without increased thickness or DME, which limits the utility of optical coherence tomography and fluorescein angiography (FA) for early detection. The Evans blue assay detects diffuse leakage but requires euthanasia, which precludes longitudinal studies in the same animals. METHODS We developed a new modality of ratiometric fluorescence angiography with dual fluorescence (FA-DF) to reliably detect and longitudinally quantify diffuse retinal vascular leakage in mouse models of induced and spontaneous DR. RESULTS These studies demonstrated the feasibility and sensitivity of FA-DF in detecting and quantifying retinal vascular leakage in the same mice over time during DR progression in association with chronic hyperglycemia and age. CONCLUSIONS These proof-of-concept studies demonstrated the promise of FA-DF as a minimally invasive method to quantify DR leakage in preclinical mouse models longitudinally.
210
Jacoba CMP, Celi LA, Lorch AC, Fickweiler W, Sobrin L, Gichoya JW, Aiello LP, Silva PS. Bias and Non-Diversity of Big Data in Artificial Intelligence: Focus on Retinal Diseases. Semin Ophthalmol 2023:1-9. PMID: 36651834; DOI: 10.1080/08820538.2023.2168486.
Abstract
Artificial intelligence (AI) applications in healthcare will have a potentially far-reaching impact on patient care; however, issues regarding algorithmic bias and fairness have recently surfaced. There is a recognized lack of diversity in the available ophthalmic datasets, with 45% of the global population having no readily accessible representative images, leading to potential misrepresentations of their unique anatomic features and ocular pathology. AI applications in retinal disease may show less accuracy with underrepresented populations, which may further widen the gap in health inequality if left unaddressed. Beyond disease symptomatology, social determinants of health must be integrated into our current paradigms of disease understanding, with the goal of more personalized care. AI has the potential to decrease global healthcare inequality, but it will need to be based on a more diverse, transparent and responsible use of healthcare data.
Affiliation(s)
- Cris Martin P Jacoba: Ophthalmology Department, Beetham Eye Institute, Joslin Diabetes Centre, Boston, MA, USA; Massachusetts Eye and Ear Infirmary Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
- Leo Anthony Celi: Division of Pulmonary, Critical Care and Pain Medicine, Beth Israel Deaconess Medical Center, Boston, MA, USA; Harvard-MIT Health Sciences and Technology Division, Laboratory for Computational Physiology, Cambridge, MA, USA; Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, USA
- Alice C Lorch: Massachusetts Eye and Ear Infirmary Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
- Ward Fickweiler: Ophthalmology Department, Beetham Eye Institute, Joslin Diabetes Centre, Boston, MA, USA; Massachusetts Eye and Ear Infirmary Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
- Lucia Sobrin: Massachusetts Eye and Ear Infirmary Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
- Judy Wawira Gichoya: Department of Radiology & Imaging Sciences, Emory University, Atlanta, GA, USA
- Lloyd P Aiello: Ophthalmology Department, Beetham Eye Institute, Joslin Diabetes Centre, Boston, MA, USA; Massachusetts Eye and Ear Infirmary Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
- Paolo S Silva: Ophthalmology Department, Beetham Eye Institute, Joslin Diabetes Centre, Boston, MA, USA; Massachusetts Eye and Ear Infirmary Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
211
Baldi PF, Abdelkarim S, Liu J, To JK, Ibarra MD, Browne AW. Vitreoretinal Surgical Instrument Tracking in Three Dimensions Using Deep Learning. Transl Vis Sci Technol 2023; 12:20. PMID: 36648414; PMCID: PMC9851279; DOI: 10.1167/tvst.12.1.20.
Abstract
Purpose To evaluate the potential for artificial intelligence-based video analysis to determine surgical instrument characteristics when moving in the three-dimensional vitreous space. Methods We designed and manufactured a model eye in which we recorded choreographed videos of many surgical instruments moving throughout the eye. We labeled each frame of the videos to describe the surgical tool characteristics: tool type, location, depth, and insertional laterality. We trained two different deep learning models to predict each of the tool characteristics and evaluated model performances on a subset of images. Results The accuracy of the classification model on the training set is 84% for the x-y region, 97% for depth, 100% for instrument type, and 100% for laterality of insertion. The accuracy of the classification model on the validation dataset is 83% for the x-y region, 96% for depth, 100% for instrument type, and 100% for laterality of insertion. The close-up detection model performs at 67 frames per second, with precision for most instruments higher than 75%, achieving a mean average precision of 79.3%. Conclusions We demonstrated that trained models can track surgical instrument movement in three-dimensional space and determine instrument depth, tip location, instrument insertional laterality, and instrument type. Model performance is nearly instantaneous and justifies further investigation into application to real-world surgical videos. Translational Relevance Deep learning offers the potential for software-based safety feedback mechanisms during surgery or the ability to extract metrics of surgical technique that can direct research to optimize surgical outcomes.
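A hedged sketch of a multi-head frame classifier in the spirit of the model described above: a shared CNN backbone with separate heads for x-y region, depth, instrument type, and insertional laterality. The backbone choice and the number of classes per head are assumptions; the authors' architecture, detection model, and training code are not reproduced here.

```python
# Sketch: one shared backbone, one classification head per instrument characteristic.
import torch
import torch.nn as nn
from torchvision.models import resnet18

HEADS = {"xy_region": 9, "depth": 3, "tool_type": 4, "laterality": 2}   # assumed label spaces

class InstrumentFrameClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet18(weights=None)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                        # shared features from each video frame
        self.backbone = backbone
        self.heads = nn.ModuleDict({k: nn.Linear(feat_dim, n) for k, n in HEADS.items()})

    def forward(self, frames):
        feats = self.backbone(frames)
        return {k: head(feats) for k, head in self.heads.items()}

model = InstrumentFrameClassifier()
frames = torch.randn(2, 3, 224, 224)                       # stand-ins for surgical video frames
targets = {k: torch.randint(0, n, (2,)) for k, n in HEADS.items()}
losses = {k: nn.functional.cross_entropy(out, targets[k]) for k, out in model(frames).items()}
total_loss = sum(losses.values())                          # joint training over all characteristics
total_loss.backward()
print({k: float(v) for k, v in losses.items()})
```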
Affiliation(s)
- Pierre F. Baldi: Department of Computer Science, University of California, Irvine, CA, USA; Institute for Genomics and Bioinformatics, University of California, Irvine, CA, USA; Department of Biomedical Engineering, University of California, Irvine, CA, USA; Center for Translational Vision Research, Department of Ophthalmology, University of California, Irvine, CA, USA
- Sherif Abdelkarim: Department of Computer Science, University of California, Irvine, CA, USA; Institute for Genomics and Bioinformatics, University of California, Irvine, CA, USA
- Junze Liu: Department of Computer Science, University of California, Irvine, CA, USA; Institute for Genomics and Bioinformatics, University of California, Irvine, CA, USA
- Josiah K. To: Center for Translational Vision Research, Department of Ophthalmology, University of California, Irvine, CA, USA
- Andrew W. Browne: Department of Biomedical Engineering, University of California, Irvine, CA, USA; Center for Translational Vision Research, Department of Ophthalmology, University of California, Irvine, CA, USA; Gavin Herbert Eye Institute, Department of Ophthalmology, University of California, Irvine, CA, USA
212
Wang X, He X, Wei J, Liu J, Li Y, Liu X. Application of artificial intelligence to the public health education. Front Public Health 2023; 10:1087174. PMID: 36703852; PMCID: PMC9872201; DOI: 10.3389/fpubh.2022.1087174.
Abstract
With the global outbreak of coronavirus disease 2019 (COVID-19), public health has received unprecedented attention. The cultivation of emergency-capable and interdisciplinary professionals is the general trend in public health education. However, current public health education is limited to traditional teaching models that struggle to balance theory and practice. Fortunately, the development of artificial intelligence (AI) has entered the stage of intelligent cognition. The introduction of AI in education has opened a new era of computer-assisted education, bringing new possibilities for teaching and learning in public health education. AI based on big data not only provides abundant resources for public health research and management but also brings convenience for students to obtain public health data and information, which is conducive to the construction of introductory professional courses for students. In this review, we elaborate on the current status and limitations of public health education, summarize the application of AI in public health practice, and further propose a framework for how to integrate AI into the public health education curriculum. With rapid technological advancements, we believe that AI will revolutionize the education paradigm of public health and help respond to public health emergencies.
Affiliation(s)
- Xueyan Wang: Laboratory of Integrative Medicine, Clinical Research Center for Breast, State Key Laboratory of Biotherapy, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Xiujing He: Laboratory of Integrative Medicine, Clinical Research Center for Breast, State Key Laboratory of Biotherapy, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Jiawei Wei: Research Center for Nano-Biomaterials, Analytical and Testing Center, Sichuan University, Chengdu, Sichuan, China
- Jianping Liu: The First People's Hospital of Yibin, Yibin, Sichuan, China
- Yuanxi Li: Laboratory of Integrative Medicine, Clinical Research Center for Breast, State Key Laboratory of Biotherapy, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Xiaowei Liu: Laboratory of Integrative Medicine, Clinical Research Center for Breast, State Key Laboratory of Biotherapy, West China Hospital, Sichuan University, Chengdu, Sichuan, China
213
ElSayed NA, Aleppo G, Aroda VR, Bannuru RR, Brown FM, Bruemmer D, Collins BS, Gibbons CH, Giurini JM, Hilliard ME, Isaacs D, Johnson EL, Kahan S, Khunti K, Leon J, Lyons SK, Perry ML, Prahalad P, Pratley RE, Seley JJ, Stanton RC, Sun JK, Gabbay RA, on behalf of the American Diabetes Association. 12. Retinopathy, Neuropathy, and Foot Care: Standards of Care in Diabetes-2023. Diabetes Care 2023; 46:S203-S215. PMID: 36507636; PMCID: PMC9810462; DOI: 10.2337/dc23-s012.
Abstract
The American Diabetes Association (ADA) "Standards of Care in Diabetes" includes the ADA's current clinical practice recommendations and is intended to provide the components of diabetes care, general treatment goals and guidelines, and tools to evaluate quality of care. Members of the ADA Professional Practice Committee, a multidisciplinary expert committee, are responsible for updating the Standards of Care annually, or more frequently as warranted. For a detailed description of ADA standards, statements, and reports, as well as the evidence-grading system for ADA's clinical practice recommendations and a full list of Professional Practice Committee members, please refer to Introduction and Methodology. Readers who wish to comment on the Standards of Care are invited to do so at professional.diabetes.org/SOC.
214
Ong JX, Konopek N, Fukuyama H, Fawzi AA. Deep Capillary Nonperfusion on OCT Angiography Predicts Complications in Eyes with Referable Nonproliferative Diabetic Retinopathy. Ophthalmol Retina 2023; 7:14-23. PMID: 35803524; PMCID: PMC9813273; DOI: 10.1016/j.oret.2022.06.018.
Abstract
OBJECTIVE To evaluate the ability of capillary nonperfusion parameters on OCT angiography (OCTA) to predict the development of clinically significant outcomes in eyes with referable nonproliferative diabetic retinopathy (NPDR). DESIGN Prospective longitudinal observational study. SUBJECTS In total, 59 patients (74 eyes) with treatment-naive moderate and severe (referable) NPDR. METHODS Patients were imaged with OCTA at baseline and then followed up for 1 year. We evaluated 2 OCTA capillary nonperfusion metrics, vessel density (VD) and geometric perfusion deficits (GPDs), in the superficial capillary plexus, middle capillary plexus (MCP), and deep capillary plexus (DCP). We compared the predictive accuracy of baseline OCTA metrics for clinically significant diabetic retinopathy (DR) outcomes at 1 year. MAIN OUTCOME MEASURES Significant clinical outcomes at 1 year, defined as 1 or more of the following: vitreous hemorrhage, center-involving diabetic macular edema, or initiation of treatment with pan-retinal photocoagulation or anti-VEGF injections. RESULTS Overall, 49 patients (61 eyes) returned for the 1-year follow-up. Geometric perfusion deficits and VD in the MCP and DCP correlated with clinically significant outcomes at 1 year (P < 0.001). Eyes with these outcomes had lower VD and higher GPD, indicating worse nonperfusion of the deeper retinal layers than in those that remained free from complication. These differences remained significant (P = 0.046 to < 0.001) when OCTA parameters were incorporated into models that also considered sex, baseline corrected visual acuity, and baseline DR severity. The adjusted receiver operating characteristic curve for DCP GPD achieved an area under the curve (AUC) of 0.929, with a sensitivity of 89% and a specificity of 98%. In a separate analysis focusing on high-risk proliferative diabetic retinopathy outcomes, MCP and DCP GPD and VD remained significantly predictive, with AUC and sensitivities comparable to the pooled analysis. CONCLUSIONS Evidence of deep capillary nonperfusion at baseline in eyes with clinically referable NPDR can predict short-term DR complications with high accuracy, suggesting that deep retinal ischemia has an important pathophysiologic role in DR progression. Our results suggest that OCTA may provide additional prognostic benefit to clinical DR staging in high-risk eyes.
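For readers who want to see the shape of an "adjusted" ROC analysis like the one summarized above, the sketch below fits a logistic model that combines a deep-capillary nonperfusion value with sex, baseline visual acuity, and DR severity, then reports AUC plus sensitivity and specificity at a Youden-index cutoff. All values are simulated and the covariate coding is assumed; this is not the study's analysis.

```python
# Hedged sketch: covariate-adjusted prediction of a 1-year outcome from a nonperfusion metric.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(2)
n = 61
outcome = rng.integers(0, 2, n)                            # clinically significant event at 1 year (toy)
dcp_gpd = rng.normal(loc=outcome * 1.2, scale=1.0)         # worse nonperfusion in eyes with events
sex = rng.integers(0, 2, n)
baseline_va = rng.normal(0.1, 0.2, n)                      # logMAR, illustrative
dr_severity = rng.integers(0, 2, n)                        # moderate vs. severe NPDR

X = np.column_stack([dcp_gpd, sex, baseline_va, dr_severity])
probs = LogisticRegression(max_iter=1000).fit(X, outcome).predict_proba(X)[:, 1]

fpr, tpr, thr = roc_curve(outcome, probs)
best = np.argmax(tpr - fpr)                                # Youden's index operating point
print("adjusted AUC:", round(roc_auc_score(outcome, probs), 3))
print("sensitivity:", round(tpr[best], 2), "specificity:", round(1 - fpr[best], 2))
```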
Affiliation(s)
- Janice X Ong: Department of Ophthalmology, Feinberg School of Medicine, Northwestern University, Chicago, Illinois
- Nicholas Konopek: Department of Ophthalmology, Feinberg School of Medicine, Northwestern University, Chicago, Illinois
- Hisashi Fukuyama: Department of Ophthalmology, Feinberg School of Medicine, Northwestern University, Chicago, Illinois; Department of Ophthalmology, Hyogo College of Medicine, Nishinomiya, Japan
- Amani A Fawzi: Department of Ophthalmology, Feinberg School of Medicine, Northwestern University, Chicago, Illinois
215
Karthik K, Mahadevappa M. Convolution neural networks for optical coherence tomography (OCT) image classification. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104176.
216
Arrigo A, Aragona E, Battaglia Parodi M, Bandello F. Quantitative approaches in multimodal fundus imaging: State of the art and future perspectives. Prog Retin Eye Res 2023; 92:101111. PMID: 35933313; DOI: 10.1016/j.preteyeres.2022.101111.
Abstract
When it first appeared, multimodal fundus imaging revolutionized the diagnostic workup and provided extremely useful new insights into the pathogenesis of fundus diseases. The recent addition of quantitative approaches has further expanded the amount of information that can be obtained. In spite of the growing interest in advanced quantitative metrics, the scientific community has not reached a stable consensus on repeatable, standardized quantitative techniques to process and analyze the images. Furthermore, imaging artifacts may considerably affect the processing and interpretation of quantitative data, potentially affecting their reliability. The aim of this survey is to provide a comprehensive summary of the main multimodal imaging techniques, covering their limitations as well as their strengths. We also offer a thorough analysis of current quantitative imaging metrics, looking into their technical features, limitations, and interpretation. In addition, we describe the main imaging artifacts and their potential impact on imaging quality and reliability. The prospect of increasing reliance on artificial intelligence-based analyses suggests there is a need to develop more sophisticated quantitative metrics and to improve imaging technologies, incorporating clear, standardized, post-processing procedures. These measures are becoming urgent if these analyses are to cross the threshold from a research context to real-life clinical practice.
Affiliation(s)
- Alessandro Arrigo: Department of Ophthalmology, IRCCS San Raffaele Scientific Institute, via Olgettina 60, 20132, Milan, Italy
- Emanuela Aragona: Department of Ophthalmology, IRCCS San Raffaele Scientific Institute, via Olgettina 60, 20132, Milan, Italy
- Maurizio Battaglia Parodi: Department of Ophthalmology, IRCCS San Raffaele Scientific Institute, via Olgettina 60, 20132, Milan, Italy
- Francesco Bandello: Department of Ophthalmology, IRCCS San Raffaele Scientific Institute, via Olgettina 60, 20132, Milan, Italy
217
|
Yousefi S. Clinical Applications of Artificial Intelligence in Glaucoma. J Ophthalmic Vis Res 2023; 18:97-112. [PMID: 36937202 PMCID: PMC10020779 DOI: 10.18502/jovr.v18i1.12730] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2022] [Accepted: 11/05/2022] [Indexed: 02/25/2023] Open
Abstract
Ophthalmology is one of the major imaging-intensive fields of medicine and thus has potential for extensive applications of artificial intelligence (AI) to advance diagnosis, drug efficacy, and other treatment-related aspects of ocular disease. AI has made impressive progress in ophthalmology within the past few years and two autonomous AI-enabled systems have received US regulatory approvals for autonomously screening for mid-level or advanced diabetic retinopathy and macular edema. While no autonomous AI-enabled system for glaucoma screening has yet received US regulatory approval, numerous assistive AI-enabled software tools are already employed in commercialized instruments for quantifying retinal images and visual fields to augment glaucoma research and clinical practice. In this literature review (non-systematic), we provide an overview of AI applications in glaucoma, and highlight some limitations and considerations for AI integration and adoption into clinical practice.
Collapse
Affiliation(s)
- Siamak Yousefi
- Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, TN, USA
- Department of Genetics, Genomics, and Informatics, University of Tennessee Health Science Center, Memphis, TN, USA
| |
Collapse
|
218
|
Yin H, Yang X, Sun L, Pan P, Peng L, Li K, Zhang D, Cui F, Xia C, Huang H, Li Z. The value of artificial intelligence techniques in predicting pancreatic ductal adenocarcinoma with EUS images: A meta-analysis and systematic review. Endosc Ultrasound 2023; 12:50-58. [PMID: 35313419 PMCID: PMC10134944 DOI: 10.4103/eus-d-21-00131] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/04/2022] Open
Abstract
Conventional EUS plays an important role in identifying pancreatic cancer. However, the accuracy of EUS is strongly influenced by the operator's experience in performing EUS. Artificial intelligence (AI) is increasingly being used in various clinical diagnoses, especially in terms of image classification. This study aimed to evaluate the diagnostic test accuracy of AI for the prediction of pancreatic cancer using EUS images. We searched the Embase, PubMed, and Cochrane Library databases to identify studies that used endoscopic ultrasound images of pancreatic cancer and AI to predict the diagnostic accuracy of pancreatic cancer. Two reviewers extracted the data independently. The risk of bias of eligible studies was assessed using a Deeks' funnel plot. The quality of the included studies was measured by the QUADAS-2 tool. Seven studies involving 1110 participants were included: 634 participants with pancreatic cancer and 476 participants with nonpancreatic cancer. The accuracy of the AI for the prediction of pancreatic cancer (area under the curve) was 0.95 (95% confidence interval [CI], 0.93-0.97), with a corresponding pooled sensitivity of 93% (95% CI, 0.90-0.95), specificity of 90% (95% CI, 0.8-0.95), positive likelihood ratio of 9.1 (95% CI 4.4-18.6), negative likelihood ratio of 0.08 (95% CI 0.06-0.11), and diagnostic odds ratio of 114 (95% CI 56-236). The methodological quality of each study was found to be the source of heterogeneity in the combined meta-regression model, which was statistically significant (P = 0.01). There was no evidence of publication bias. The accuracy of AI in diagnosing pancreatic cancer appears to be reliable. Further research and investment in AI could lead to substantial improvements in screening and early diagnosis.
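As a quick consistency check (not taken from the study itself), the summary likelihood ratios and diagnostic odds ratio follow arithmetically from the pooled sensitivity and specificity; the published values come from a bivariate meta-analysis, so the point estimates below differ slightly from the reported ones.

```python
# Sketch: likelihood ratios and diagnostic odds ratio implied by the pooled
# sensitivity and specificity reported above (point estimates only).
sens, spec = 0.93, 0.90

lr_pos = sens / (1 - spec)      # positive likelihood ratio
lr_neg = (1 - sens) / spec      # negative likelihood ratio
dor    = lr_pos / lr_neg        # diagnostic odds ratio

print(f"LR+ ~ {lr_pos:.1f}")    # ~9.3 (reported: 9.1)
print(f"LR- ~ {lr_neg:.2f}")    # ~0.08 (reported: 0.08)
print(f"DOR ~ {dor:.0f}")       # ~120 (reported: 114)
```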
Collapse
Affiliation(s)
- Hua Yin
- Department of Gastroenterology, General Hospital of Ningxia Medical University, Yinchuan; Department of Gastroenterology, Changhai Hospital, Second Military Medical University, Shanghai; Postgraduate Training Base in Shanghai Gongli Hospital, Ningxia Medical University, Shanghai, China
| | - Xiaoli Yang
- Department of Gastroenterology, General Hospital of Ningxia Medical University, Yinchuan; Department of Gastroenterology, Changhai Hospital, Second Military Medical University, Shanghai; Postgraduate Training Base in Shanghai Gongli Hospital, Ningxia Medical University, Shanghai, China
| | - Liqi Sun
- Department of Gastroenterology, Changhai Hospital, Second Military Medical University, Shanghai, China
| | - Peng Pan
- Department of Gastroenterology, Changhai Hospital, Second Military Medical University, Shanghai, China
| | - Lisi Peng
- Department of Gastroenterology, Changhai Hospital, Second Military Medical University, Shanghai, China
| | - Keliang Li
- Department of Gastroenterology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan Province, China
| | - Deyu Zhang
- Department of Gastroenterology, Changhai Hospital, Second Military Medical University, Shanghai, China
| | - Fang Cui
- Department of Gastroenterology, Changhai Hospital, Second Military Medical University, Shanghai, China
| | - Chuanchao Xia
- Department of Gastroenterology, Changhai Hospital, Second Military Medical University, Shanghai, China
| | - Haojie Huang
- Department of Gastroenterology, Changhai Hospital, Second Military Medical University, Shanghai, China
| | - Zhaoshen Li
- Department of Gastroenterology, Changhai Hospital, Second Military Medical University, Shanghai, China
| |
Collapse
|
219
|
Technology and Innovation in Global Ophthalmology: The Past, the Potential, and a Path Forward. Int Ophthalmol Clin 2023; 63:25-32. [PMID: 36598831 PMCID: PMC9819211 DOI: 10.1097/iio.0000000000000450] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/05/2023]
|
220
|
Li HY, Wang DX, Dong L, Wei WB. Deep learning algorithms for detection of diabetic macular edema in OCT images: A systematic review and meta-analysis. Eur J Ophthalmol 2023; 33:278-290. [PMID: 35473414 DOI: 10.1177/11206721221094786] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/11/2023]
Abstract
PURPOSE Artificial intelligence (AI) can detect diabetic macular edema (DME) from optical coherence tomography (OCT) images. We aimed to evaluate the performance of deep learning neural networks in DME detection. METHODS Embase, PubMed, the Cochrane Library, and IEEE Xplore were searched up to August 14, 2021. We included studies using deep learning algorithms to detect DME from OCT images. Two reviewers extracted the data independently, and the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool was applied to assess the risk of bias. The study is reported according to the Preferred Reporting Items for a Systematic Review and Meta-analysis of Diagnostic Test Accuracy Studies (PRISMA-DTA). RESULTS Nineteen studies involving 41 005 subjects were included. The pooled sensitivity and specificity were 96.0% (95% confidence interval (CI): 93.9% to 97.3%) and 99.3% (95% CI: 98.2% to 99.7%), respectively. Subgroup analyses found that data set selection, sample size of the training set, and the choice of OCT devices contributed to the heterogeneity (all P < 0.05), while there was no association between diagnostic accuracy and transfer learning adoption or image management (all P > 0.05). CONCLUSIONS Deep learning methods, particularly convolutional neural networks (CNNs), could effectively detect clinically significant DME, which can provide referral suggestions to patients.
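As a hedged illustration of how such pooled estimates translate to practice (not part of the cited meta-analysis), the sketch below converts the reported sensitivity and specificity into predictive values at an assumed, purely hypothetical DME prevalence.

```python
# Sketch: predictive values implied by the pooled sensitivity/specificity above
# at an assumed screening prevalence (the 10% prevalence is hypothetical).
sens, spec, prev = 0.960, 0.993, 0.10

ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
print(f"At {prev:.0%} prevalence: PPV ~ {ppv:.1%}, NPV ~ {npv:.1%}")
```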
Collapse
Affiliation(s)
- He-Yan Li
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Dai-Xi Wang
- Capital Medical University, Beijing, China
| | - Li Dong
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Wen-Bin Wei
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| |
Collapse
|
221
|
Application of Deep Learning to Retinal-Image-Based Oculomics for Evaluation of Systemic Health: A Review. J Clin Med 2022; 12:jcm12010152. [PMID: 36614953 PMCID: PMC9821402 DOI: 10.3390/jcm12010152] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2022] [Revised: 12/17/2022] [Accepted: 12/22/2022] [Indexed: 12/28/2022] Open
Abstract
The retina is a window to the human body. Oculomics is the study of the correlations between ophthalmic biomarkers and systemic health or disease states. Deep learning (DL) is currently the cutting-edge machine learning technique for medical image analysis, and in recent years, DL techniques have been applied to analyze retinal images in oculomics studies. In this review, we summarized oculomics studies that used DL models to analyze retinal images: most of the published studies to date involved color fundus photographs, while others focused on optical coherence tomography images. These studies showed that some systemic variables, such as age, sex and cardiovascular disease events, could be consistently and robustly predicted, while other variables, such as thyroid function and blood cell count, could not be. DL-based oculomics has demonstrated fascinating, "super-human" predictive capabilities in certain contexts, but it remains to be seen how these models will be incorporated into clinical care and whether management decisions influenced by these models will lead to improved clinical outcomes.
Collapse
|
222
|
Shinde RK, Alam MS, Hossain MB, Md Imtiaz S, Kim J, Padwal AA, Kim N. Squeeze-MNet: Precise Skin Cancer Detection Model for Low Computing IoT Devices Using Transfer Learning. Cancers (Basel) 2022; 15:cancers15010012. [PMID: 36612010 PMCID: PMC9817940 DOI: 10.3390/cancers15010012] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2022] [Revised: 12/15/2022] [Accepted: 12/16/2022] [Indexed: 12/24/2022] Open
Abstract
Cancer remains a deadly disease. We developed a lightweight, accurate, general-purpose deep learning algorithm for skin cancer classification. Squeeze-MNet combines a Squeeze algorithm for digital hair removal during preprocessing with a MobileNet deep learning model with predefined weights. The Squeeze algorithm extracts important image features from the image, and the black-hat filter operation removes noise. The MobileNet model (with a dense neural network head) was fine-tuned on the International Skin Imaging Collaboration (ISIC) dataset. The proposed model is lightweight; the prototype was tested on a Raspberry Pi 4 Internet of Things device with a NeoPixel 8-bit LED ring, and a medical doctor validated the device. The average precision (AP) for benign and malignant diagnoses was 99.76% and 98.02%, respectively. Using our approach, the required dataset size decreased by 66%. The hair removal algorithm increased the accuracy of skin cancer detection to 99.36% with the ISIC dataset. The area under the receiver operating characteristic curve was 98.9%.
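The exact Squeeze algorithm is not reproduced here; the sketch below shows a generic black-hat-plus-inpainting recipe for digital hair removal that is commonly used on dermoscopy images, with an illustrative file name and kernel size.

```python
# Sketch of black-hat-based digital hair removal on a dermoscopy image
# (a generic OpenCV recipe, not the authors' Squeeze algorithm).
import cv2

img  = cv2.imread("lesion.jpg")                       # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Black-hat transform highlights thin dark structures (hairs) against the skin
kernel   = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 17))
blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)

# Threshold the hair mask, then paint hair pixels in from their surroundings
_, mask = cv2.threshold(blackhat, 10, 255, cv2.THRESH_BINARY)
clean   = cv2.inpaint(img, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)

cv2.imwrite("lesion_dehaired.jpg", clean)
```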
Collapse
Affiliation(s)
- Rupali Kiran Shinde
- Department of Information and Communication Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea
| | | | - Md. Biddut Hossain
- Department of Information and Communication Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea
| | - Shariar Md Imtiaz
- Department of Information and Communication Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea
| | - JoonHyun Kim
- Department of Information and Communication Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea
| | | | - Nam Kim
- Department of Information and Communication Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea
| |
Collapse
|
223
|
Yang Z, Tan TE, Shao Y, Wong TY, Li X. Classification of diabetic retinopathy: Past, present and future. Front Endocrinol (Lausanne) 2022; 13:1079217. [PMID: 36589807 PMCID: PMC9800497 DOI: 10.3389/fendo.2022.1079217] [Citation(s) in RCA: 30] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/25/2022] [Accepted: 11/28/2022] [Indexed: 12/23/2022] Open
Abstract
Diabetic retinopathy (DR) is a leading cause of visual impairment and blindness worldwide. Since DR was first recognized as an important complication of diabetes, there have been many attempts to accurately classify the severity and stages of disease. These historical classification systems evolved as understanding of disease pathophysiology improved, methods of imaging and assessing DR changed, and effective treatments were developed. Current DR classification systems are effective, and have been the basis of major research trials and clinical management guidelines for decades. However, with further new developments such as recognition of diabetic retinal neurodegeneration, new imaging platforms such as optical coherence tomography and ultra wide-field retinal imaging, artificial intelligence and new treatments, our current classification systems have significant limitations that need to be addressed. In this paper, we provide a historical review of different classification systems for DR, and discuss the limitations of our current classification systems in the context of new developments. We also review the implications of new developments in the field, to see how they might feature in a future, updated classification.
Collapse
Affiliation(s)
- Zhengwei Yang
- Tianjin Key Laboratory of Retinal Functions and Diseases, Tianjin Branch of National Clinical Research Center for Ocular Disease, Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, Tianjin, China
| | - Tien-En Tan
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Duke-National University of Singapore Medical School, Singapore, Singapore
| | - Yan Shao
- Tianjin Key Laboratory of Retinal Functions and Diseases, Tianjin Branch of National Clinical Research Center for Ocular Disease, Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, Tianjin, China
| | - Tien Yin Wong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Duke-National University of Singapore Medical School, Singapore, Singapore
- Tsinghua Medicine, Tsinghua University, Beijing, China
| | - Xiaorong Li
- Tianjin Key Laboratory of Retinal Functions and Diseases, Tianjin Branch of National Clinical Research Center for Ocular Disease, Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, Tianjin, China
| |
Collapse
|
224
|
Hao T, Wissel B, Ni Y, Pajor N, Glauser T, Pestian J, Dexheimer JW. Implementation of Machine Learning Pipelines for Clinical Practice: Development and Validation Study. JMIR Med Inform 2022; 10:e37833. [PMID: 36525289 PMCID: PMC9804095 DOI: 10.2196/37833] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2022] [Revised: 09/01/2022] [Accepted: 09/19/2022] [Indexed: 01/03/2023] Open
Abstract
BACKGROUND Artificial intelligence (AI) technologies, such as machine learning and natural language processing, have the potential to provide new insights into complex health data. Although powerful, these algorithms rarely move from experimental studies to direct clinical care implementation. OBJECTIVE We aimed to describe the key components for successful development and integration of two AI technology-based research pipelines for clinical practice. METHODS We summarized the approach, results, and key learnings from the implementation of the following two systems implemented at a large, tertiary care children's hospital: (1) epilepsy surgical candidate identification (or epilepsy ID) in an ambulatory neurology clinic; and (2) an automated clinical trial eligibility screener (ACTES) for the real-time identification of patients for research studies in a pediatric emergency department. RESULTS The epilepsy ID system performed as well as board-certified neurologists in identifying surgical candidates (with a sensitivity of 71% and positive predictive value of 77%). The ACTES system decreased coordinator screening time by 12.9%. The success of each project was largely dependent upon the collaboration between machine learning experts, research and operational information technology professionals, longitudinal support from clinical providers, and institutional leadership. CONCLUSIONS These projects showcase novel interactions between machine learning recommendations and providers during clinical care. Our deployment provides seamless, real-time integration of AI technology to provide decision support and improve patient care.
Collapse
Affiliation(s)
| | - Benjamin Wissel
- Division of Biomedical Informatics, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States
| | - Yizhao Ni
- Division of Biomedical Informatics, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, United States
| | - Nathan Pajor
- Division of Pulmonary Medicine, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States; Division of Biomedical Informatics, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, United States
| | - Tracy Glauser
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, United States; Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States
| | - John Pestian
- Division of Biomedical Informatics, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, United States
| | - Judith W Dexheimer
- Division of Biomedical Informatics, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, United States; Division of Emergency Medicine, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States
| |
Collapse
|
225
|
Coiera E, Liu S. Evidence synthesis, digital scribes, and translational challenges for artificial intelligence in healthcare. Cell Rep Med 2022; 3:100860. [PMID: 36513071 PMCID: PMC9798027 DOI: 10.1016/j.xcrm.2022.100860] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2022] [Revised: 10/15/2022] [Accepted: 11/18/2022] [Indexed: 12/14/2022]
Abstract
Healthcare has well-known challenges with safety, quality, and effectiveness, and many see artificial intelligence (AI) as essential to any solution. Emerging applications include the automated synthesis of best-practice research evidence including systematic reviews, which would ultimately see all clinical trial data published in a computational form for immediate synthesis. Digital scribes embed themselves in the process of care to detect, record, and summarize events and conversations for the electronic record. However, three persistent translational challenges must be addressed before AI is widely deployed. First, little effort is spent replicating AI trials, exposing patients to risks of methodological error and biases. Next, there is little reporting of patient harms from trials. Finally, AI built using machine learning may perform less effectively in different clinical settings.
Collapse
Affiliation(s)
- Enrico Coiera
- Centre for Health Informatics, Australian Institute of Health Innovation, Macquarie University, Level 6, 75 Talavera Road, North Ryde, Sydney, NSW 2109, Australia.
| | - Sidong Liu
- Centre for Health Informatics, Australian Institute of Health Innovation, Macquarie University, Level 6, 75 Talavera Road, North Ryde, Sydney, NSW 2109, Australia
| |
Collapse
|
226
|
Eyeing severe diabetes upfront. Nat Biomed Eng 2022; 6:1321-1322. [PMID: 35411115 DOI: 10.1038/s41551-022-00879-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/14/2023]
|
227
|
Vujosevic S, Limoli C, Luzi L, Nucci P. Digital innovations for retinal care in diabetic retinopathy. Acta Diabetol 2022; 59:1521-1530. [PMID: 35962258 PMCID: PMC9374293 DOI: 10.1007/s00592-022-01941-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/03/2022] [Accepted: 07/04/2022] [Indexed: 12/02/2022]
Abstract
AIM The purpose of this review is to examine the applications of novel digital technology domains for the screening and management of patients with diabetic retinopathy (DR). METHODS A PubMed engine search was performed, using the terms "Telemedicine", "Digital health", "Telehealth", "Telescreening", "Artificial intelligence", "Deep learning", "Smartphone", "Triage", "Screening", "Home-based", "Monitoring", "Ophthalmology", "Diabetes", "Diabetic Retinopathy", "Retinal imaging". Full-text English language studies from January 1, 2010, to February 1, 2022, and reference lists were considered for the conceptual framework of this review. RESULTS Diabetes mellitus and its eye complications, including DR, are particularly well suited to digital technologies, providing an ideal model for telehealth initiatives and real-world applications. The current development in the adoption of telemedicine, artificial intelligence and remote monitoring as an alternative to or in addition to traditional forms of care will be discussed. CONCLUSIONS Advances in digital health have created an ecosystem ripe for telemedicine in the field of DR to thrive. Stakeholders and policymakers should adopt a participatory approach to ensure sustained implementation of these technologies after the COVID-19 pandemic. This article belongs to the Topical Collection "Diabetic Eye Disease", managed by Giuseppe Querques.
Collapse
Affiliation(s)
- Stela Vujosevic
- Department of Biomedical, Surgical and Dental Sciences, University of Milan, Milan, Italy.
- Eye Clinic, IRCCS MultiMedica, Via San Vittore 12, 20123, Milan, Italy.
| | - Celeste Limoli
- Eye Clinic, IRCCS MultiMedica, Via San Vittore 12, 20123, Milan, Italy
- University of Milan, Milan, Italy
| | - Livio Luzi
- Department of Biomedical Sciences for Health, University of Milan, Milan, Italy
- Department of Endocrinology, Nutrition and Metabolic Diseases, IRCCS MultiMedica, Milan, Italy
| | - Paolo Nucci
- Department of Biomedical, Surgical and Dental Sciences, University of Milan, Milan, Italy
| |
Collapse
|
228
|
Poschkamp B, Stahl A. Application of deep learning algorithms for diabetic retinopathy screening. ANNALS OF TRANSLATIONAL MEDICINE 2022; 10:1298. [PMID: 36660730 PMCID: PMC9843336 DOI: 10.21037/atm-2022-73] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Received: 10/17/2022] [Accepted: 12/07/2022] [Indexed: 12/23/2022]
Affiliation(s)
- Broder Poschkamp
- Department of Ophthalmology, University Medicine Greifswald, Greifswald, Germany
| | - Andreas Stahl
- Department of Ophthalmology, University Medicine Greifswald, Greifswald, Germany
| |
Collapse
|
229
|
Nanegrungsunk O, Ruamviboonsuk P, Grzybowski A. Prospective studies on artificial intelligence (AI)-based diabetic retinopathy screening. ANNALS OF TRANSLATIONAL MEDICINE 2022; 10:1297. [PMID: 36660630 PMCID: PMC9843399 DOI: 10.21037/atm-2022-71] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Received: 10/24/2022] [Accepted: 12/06/2022] [Indexed: 12/23/2022]
Affiliation(s)
- Onnisa Nanegrungsunk
- Retina Division, Department of Ophthalmology, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand
| | - Paisan Ruamviboonsuk
- Department of Ophthalmology, College of Medicine, Rangsit University, Rajavithi Hospital, Bangkok, Thailand
| | - Andrzej Grzybowski
- Department of Ophthalmology, University of Warmia and Mazury, Olsztyn, Poland; Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznan, Poland
| |
Collapse
|
230
|
Mehra AA, Softing A, Guner MK, Hodge DO, Barkmeier AJ. Diabetic Retinopathy Telemedicine Outcomes With Artificial Intelligence-Based Image Analysis, Reflex Dilation, and Image Overread. Am J Ophthalmol 2022; 244:125-132. [PMID: 35970206 DOI: 10.1016/j.ajo.2022.08.008] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2022] [Revised: 08/03/2022] [Accepted: 08/03/2022] [Indexed: 01/30/2023]
Abstract
PURPOSE To examine real-world telemedicine outcomes of diabetic retinopathy (DR) screening with artificial intelligence (AI)-based image analysis, reflex dilation, and secondary image overread in a primary care setting. DESIGN Validity and reliability analysis. METHODS Single institution review of 1052 consecutive adult patients who received diabetic retinopathy photoscreening in the primary care setting over an 18-month period. Nonmydriatic fundus photographs were acquired and analyzed by the IDx-DR AI-based system. When nonmydriatic images were ungradable, reflex dilation (1% tropicamide) and mydriatic photography were performed for repeat AI-based analysis. Manual overread was performed on all images. Patient demographics, clinical characteristics, and screening outcomes were recorded. RESULTS A total of 965 of 1052 patients (91.7%) had AI-gradable fundus photographs: 580 had gradable nonmydriatic imaging (55.1%) and 440 of 472 patients with ungradable nonmydriatic photographs had reflex dilation (93.2%). One hundred thirty-eight of 965 patients (14.3%) were AI-graded as "positive" (greater than mild NPDR) and 827 of 965 were "negative" (85.7%), with 100% sensitivity (95% CI 90.8-100%), 89.2% specificity (95% CI 87.0-91.1%), 27.5% positive predictive value (95% CI 24.0-31.4%), and 100% negative predictive value (95% CI 99.6-100%) compared with manual overread assessment of greater than mild NPDR requiring further evaluation with a comprehensive dilated examination. Image gradeability was inversely related to patient age: 93.5% gradable (61.9% nonmydriatic) for patients aged <70 years vs 85.3% (31.0% nonmydriatic) for patients aged 70+ years (P < .001). CONCLUSION Incorporation of AI-based image analysis into real-world primary care diabetic retinopathy screening yielded no false negative results and offered excellent image gradeability within a protocol combining nonmydriatic fundus photography and pharmacologic dilation, as needed. Image gradeability was lower with increasing patient age.
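The reported screening metrics follow from a standard 2 x 2 table; the counts below are approximate reconstructions from the published percentages (for example, TP is roughly 0.275 x 138) and are shown only to illustrate the arithmetic, not as the study's raw data.

```python
# Sketch: screening metrics from a 2x2 table of AI result vs. manual overread.
# Counts are approximate reconstructions from the reported percentages.
tp, fp, fn, tn = 38, 100, 0, 827

sensitivity = tp / (tp + fn)      # 1.00
specificity = tn / (tn + fp)      # ~0.892
ppv         = tp / (tp + fp)      # ~0.275
npv         = tn / (tn + fn)      # 1.00

print(f"Sens {sensitivity:.1%}, Spec {specificity:.1%}, "
      f"PPV {ppv:.1%}, NPV {npv:.1%}")
```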
Collapse
Affiliation(s)
- Ankur A Mehra
- From Department of Ophthalmology, Mayo Clinic, Rochester, USA (A.A.M, A.S, M.K.G, A.J.B)
| | - Alaina Softing
- From Department of Ophthalmology, Mayo Clinic, Rochester, USA (A.A.M, A.S, M.K.G, A.J.B)
| | | | - David O Hodge
- Department of Quantitative Health Sciences, Mayo Clinic, Jacksonville, USA (D.O.H)
| | - Andrew J Barkmeier
- Department of Ophthalmology, Mayo Clinic, Rochester, USA (A.A.M, A.S, M.K.G, A.J.B).
| |
Collapse
|
231
|
Potapenko I, Thiesson B, Kristensen M, Hajari JN, Ilginis T, Fuchs J, Hamann S, la Cour M. Automated artificial intelligence-based system for clinical follow-up of patients with age-related macular degeneration. Acta Ophthalmol 2022; 100:927-936. [PMID: 35322564 PMCID: PMC9790353 DOI: 10.1111/aos.15133] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2021] [Revised: 02/05/2022] [Accepted: 03/12/2022] [Indexed: 12/30/2022]
Abstract
PURPOSE In this study, we investigate the potential of a novel artificial intelligence-based system for autonomous follow-up of patients treated for neovascular age-related macular degeneration (AMD). METHODS A temporal deep learning model was trained on a data set of 84 489 optical coherence tomography scans from AMD patients to recognize disease activity, and its performance was compared with a published non-temporal model trained on the same data (Acta Ophthalmol, 2021). An autonomous follow-up system was created by augmenting the AI model with deterministic logic to suggest treatment according to the observe-and-plan regimen. To validate the AI-based system, a data set comprising clinical decisions and imaging data from 200 follow-up consultations was collected prospectively. In each case, both the autonomous AI decision and original clinical decision were compared with an expert panel consensus. RESULTS The temporal AI model proved superior at detecting disease activity compared with the model without temporal input (area under the curve 0.900 (95% CI 0.894-0.906) and 0.857 (95% CI 0.846-0.867) respectively). The AI-based follow-up system could make an autonomous decision in 73% of the cases, 91.8% of which were in agreement with expert consensus. This was on par with the 87.7% agreement rate between decisions made in the clinic and expert consensus (p = 0.33). CONCLUSIONS The proposed autonomous follow-up system was shown to be safe and compliant with expert consensus on par with clinical practice. The system could in the future ease the pressure on public ophthalmology services from an increasing number of AMD patients.
Collapse
Affiliation(s)
- Ivan Potapenko
- Department of Ophthalmology, Rigshospitalet, Copenhagen, Denmark; Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
| | - Bo Thiesson
- Enversion A/S, Aarhus, Denmark; Department of Engineering, Aarhus University, Aarhus, Denmark
| | | | - Tomas Ilginis
- Department of Ophthalmology, Rigshospitalet, Copenhagen, Denmark
| | - Josefine Fuchs
- Department of Ophthalmology, Rigshospitalet, Copenhagen, Denmark
| | - Steffen Hamann
- Department of Ophthalmology, Rigshospitalet, Copenhagen, Denmark; Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
| | - Morten la Cour
- Department of Ophthalmology, Rigshospitalet, Copenhagen, Denmark; Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
| |
Collapse
|
232
|
Rom Y, Aviv R, Ianchulev T, Dvey-Aharon Z. Predicting the future development of diabetic retinopathy using a deep learning algorithm for the analysis of non-invasive retinal imaging. BMJ Open Ophthalmol 2022. [DOI: 10.1136/bmjophth-2022-001140] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022] Open
Abstract
AIMS Diabetic retinopathy (DR) is the most common cause of vision loss in the working-age population. This research aimed to develop an artificial intelligence (AI) machine learning model that can predict the development of referable DR from fundus imagery of otherwise healthy eyes. METHODS Our researchers trained a machine learning algorithm on the EyePACS data set, consisting of 156 363 fundus images. Referable DR was defined as any level above mild on the International Clinical Diabetic Retinopathy scale. RESULTS The algorithm achieved an area under the receiver operating curve (AUC) of 0.81 when averaging scores from multiple images on the task of predicting development of referable DR, and 0.76 AUC when using a single image. CONCLUSION Our results suggest that risk of DR may be predicted from fundus photography alone. Prediction of personalised risk of DR may become key in treatment and contribute to patient compliance across the board, particularly when supported by further prospective research.
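A minimal sketch (hypothetical scores and labels, not the authors' pipeline) of the evaluation choice described above: computing AUC on single-image scores versus on scores averaged within each patient.

```python
# Sketch: single-image AUC vs. AUC after averaging image scores per patient.
# All scores and labels are invented for illustration.
import pandas as pd
from sklearn.metrics import roc_auc_score

df = pd.DataFrame({
    "patient_id": [1, 1, 2, 2, 3, 3, 4, 4],
    "label":      [1, 1, 0, 0, 1, 1, 0, 0],   # develops referable DR in follow-up
    "score":      [0.62, 0.80, 0.35, 0.55, 0.70, 0.30, 0.20, 0.65],
})

# Every image treated independently
auc_single = roc_auc_score(df["label"], df["score"])

# Average image scores within each patient, then compute AUC at patient level
per_patient = df.groupby("patient_id").agg(label=("label", "first"),
                                           score=("score", "mean"))
auc_avg = roc_auc_score(per_patient["label"], per_patient["score"])

print(f"single-image AUC = {auc_single:.2f}, averaged AUC = {auc_avg:.2f}")
```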
Collapse
|
233
|
Zhang A, Xing L, Zou J, Wu JC. Shifting machine learning for healthcare from development to deployment and from models to data. Nat Biomed Eng 2022; 6:1330-1345. [PMID: 35788685 DOI: 10.1038/s41551-022-00898-y] [Citation(s) in RCA: 70] [Impact Index Per Article: 35.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2021] [Accepted: 05/03/2022] [Indexed: 01/14/2023]
Abstract
In the past decade, the application of machine learning (ML) to healthcare has helped drive the automation of physician tasks as well as enhancements in clinical capabilities and access to care. This progress has emphasized that, from model development to model deployment, data play central roles. In this Review, we provide a data-centric view of the innovations and challenges that are defining ML for healthcare. We discuss deep generative models and federated learning as strategies to augment datasets for improved model performance, as well as the use of the more recent transformer models for handling larger datasets and enhancing the modelling of clinical text. We also discuss data-focused problems in the deployment of ML, emphasizing the need to efficiently deliver data to ML models for timely clinical predictions and to account for natural data shifts that can deteriorate model performance.
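As a toy illustration of the federated learning strategy mentioned in this review (not any specific framework's API), the sketch below runs federated averaging over three simulated sites so that only model weights, never patient-level data, are shared.

```python
# Toy federated averaging (FedAvg) sketch with a simple linear model.
import numpy as np

def local_step(w, X, y, lr=0.1):
    """One least-squares gradient step on a single site's private data."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

rng = np.random.default_rng(0)
w_true = np.array([1.5, -2.0])
sites = []
for _ in range(3):                                   # three hospitals; data stays local
    X = rng.normal(size=(100, 2))
    y = X @ w_true + 0.1 * rng.normal(size=100)
    sites.append((X, y))

w_global = np.zeros(2)
for _ in range(50):                                  # communication rounds
    local_ws = [local_step(w_global.copy(), X, y) for X, y in sites]
    w_global = np.mean(local_ws, axis=0)             # server averages site models

print("recovered coefficients:", np.round(w_global, 2))   # ~[1.5, -2.0]
```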
Collapse
Affiliation(s)
- Angela Zhang
- Stanford Cardiovascular Institute, School of Medicine, Stanford University, Stanford, CA, USA; Department of Genetics, School of Medicine, Stanford University, Stanford, CA, USA; Greenstone Biosciences, Palo Alto, CA, USA; Department of Computer Science, Stanford University, Stanford, CA, USA.
| | - Lei Xing
- Department of Radiation Oncology, School of Medicine, Stanford University, Stanford, CA, USA
| | - James Zou
- Department of Computer Science, Stanford University, Stanford, CA, USA; Department of Biomedical Informatics, School of Medicine, Stanford University, Stanford, CA, USA
| | - Joseph C Wu
- Stanford Cardiovascular Institute, School of Medicine, Stanford University, Stanford, CA, USA; Greenstone Biosciences, Palo Alto, CA, USA; Departments of Medicine, Division of Cardiovascular Medicine, Stanford University, Stanford, CA, USA; Department of Radiology, School of Medicine, Stanford University, Stanford, CA, USA.
| |
Collapse
|
234
|
Javaid A, Zghyer F, Kim C, Spaulding EM, Isakadze N, Ding J, Kargillis D, Gao Y, Rahman F, Brown DE, Saria S, Martin SS, Kramer CM, Blumenthal RS, Marvel FA. Medicine 2032: The future of cardiovascular disease prevention with machine learning and digital health technology. Am J Prev Cardiol 2022; 12:100379. [PMID: 36090536 PMCID: PMC9460561 DOI: 10.1016/j.ajpc.2022.100379] [Citation(s) in RCA: 23] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2022] [Revised: 08/21/2022] [Accepted: 08/28/2022] [Indexed: 11/30/2022] Open
Abstract
Machine learning (ML) refers to computational algorithms that iteratively improve their ability to recognize patterns in data. The digitization of our healthcare infrastructure is generating an abundance of data from electronic health records, imaging, wearables, and sensors that can be analyzed by ML algorithms to generate personalized risk assessments and promote guideline-directed medical management. ML's strength in generating insights from complex medical data to guide clinical decisions must be balanced with the potential to adversely affect patient privacy, safety, health equity, and clinical interpretability. This review provides a primer on key advances in ML for cardiovascular disease prevention and how they may impact clinical practice.
Collapse
Affiliation(s)
- Aamir Javaid
- Johns Hopkins Ciccarone Center for the Prevention of Cardiovascular Disease, Division of Cardiology, Department of Medicine, Johns Hopkins University School of Medicine, 600 N. Wolfe St, Carnegie 591, Baltimore, MD 21287, USA
| | - Fawzi Zghyer
- Johns Hopkins Ciccarone Center for the Prevention of Cardiovascular Disease, Division of Cardiology, Department of Medicine, Johns Hopkins University School of Medicine, 600 N. Wolfe St, Carnegie 591, Baltimore, MD 21287, USA
| | - Chang Kim
- Johns Hopkins Ciccarone Center for the Prevention of Cardiovascular Disease, Division of Cardiology, Department of Medicine, Johns Hopkins University School of Medicine, 600 N. Wolfe St, Carnegie 591, Baltimore, MD 21287, USA
| | - Erin M. Spaulding
- Johns Hopkins Ciccarone Center for the Prevention of Cardiovascular Disease, Division of Cardiology, Department of Medicine, Johns Hopkins University School of Medicine, 600 N. Wolfe St, Carnegie 591, Baltimore, MD 21287, USA
| | - Nino Isakadze
- Johns Hopkins Ciccarone Center for the Prevention of Cardiovascular Disease, Division of Cardiology, Department of Medicine, Johns Hopkins University School of Medicine, 600 N. Wolfe St, Carnegie 591, Baltimore, MD 21287, USA
| | - Jie Ding
- Johns Hopkins Ciccarone Center for the Prevention of Cardiovascular Disease, Division of Cardiology, Department of Medicine, Johns Hopkins University School of Medicine, 600 N. Wolfe St, Carnegie 591, Baltimore, MD 21287, USA
| | - Daniel Kargillis
- Johns Hopkins Ciccarone Center for the Prevention of Cardiovascular Disease, Division of Cardiology, Department of Medicine, Johns Hopkins University School of Medicine, 600 N. Wolfe St, Carnegie 591, Baltimore, MD 21287, USA
| | - Yumin Gao
- Johns Hopkins Ciccarone Center for the Prevention of Cardiovascular Disease, Division of Cardiology, Department of Medicine, Johns Hopkins University School of Medicine, 600 N. Wolfe St, Carnegie 591, Baltimore, MD 21287, USA
| | - Faisal Rahman
- Division of Cardiology, Department of Medicine, Baylor College of Medicine, Houston, TX, USA
| | - Donald E. Brown
- School of Data Science, University of Virginia, Charlottesville, VA, USA
| | - Suchi Saria
- Machine Learning and Healthcare Laboratory, Departments of Computer Science, Statistics, and Health Policy, Malone Center for Engineering in Healthcare, and Armstrong Institute for Patient Safety and Quality, Johns Hopkins University, Baltimore, MD, USA
| | - Seth S. Martin
- Johns Hopkins Ciccarone Center for the Prevention of Cardiovascular Disease, Division of Cardiology, Department of Medicine, Johns Hopkins University School of Medicine, 600 N. Wolfe St, Carnegie 591, Baltimore, MD 21287, USA
| | - Christopher M. Kramer
- Cardiovascular Division, Department of Medicine, University of Virginia Health, Charlottesville, VA, USA
| | - Roger S. Blumenthal
- Johns Hopkins Ciccarone Center for the Prevention of Cardiovascular Disease, Division of Cardiology, Department of Medicine, Johns Hopkins University School of Medicine, 600 N. Wolfe St, Carnegie 591, Baltimore, MD 21287, USA
| | - Francoise A. Marvel
- Johns Hopkins Ciccarone Center for the Prevention of Cardiovascular Disease, Division of Cardiology, Department of Medicine, Johns Hopkins University School of Medicine, 600 N. Wolfe St, Carnegie 591, Baltimore, MD 21287, USA
| |
Collapse
|
235
|
Nesper PL, Ong JX, Fawzi AA. Deep Capillary Geometric Perfusion Deficits on OCT Angiography Detect Clinically Referable Eyes with Diabetic Retinopathy. Ophthalmol Retina 2022; 6:1194-1205. [PMID: 35661804 PMCID: PMC9715815 DOI: 10.1016/j.oret.2022.05.028] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2022] [Revised: 05/25/2022] [Accepted: 05/27/2022] [Indexed: 01/06/2023]
Abstract
PURPOSE To evaluate the sensitivity (SN) and specificity (SP) of OCT angiography (OCTA) parameters for detecting clinically referable eyes with diabetic retinopathy (DR) in a cohort of patients with diabetes mellitus (DM). DESIGN Retrospective, cross-sectional study. SUBJECTS Patients with DM with various levels of DR. METHODS We measured vessel density, vessel length density (VLD), and geometric perfusion deficits (GPDs) in the full retina, superficial capillary plexus (SCP), and deep capillary plexus (DCP) on 3 × 3-mm OCTA images. Geometric perfusion deficit was recently described as retinal tissue located further than 30 μm from blood vessels, excluding the foveal avascular zone (FAZ). We modified the GPD metric by including the FAZ as an additional variable. Clinically referable eyes were defined as moderate nonproliferative DR (NPDR) or worse retinopathy, or diabetic macular edema (DME). One eye from each patient was selected for the analysis based on image quality. We used a binary logistic regression model to adjust for covariates. MAIN OUTCOME MEASURES Sensitivity, SP, and area under the curve (AUC). RESULTS Seventy-one of 150 included eyes from 150 patients (52 with DM without DR, 27 with mild NPDR, 16 with moderate NPDR, 10 with severe NPDR, 30 with proliferative DR, and 15 with DME) had clinically referable DR. Geometric perfusion deficit metric that included the FAZ performed better than GPD in detecting referable DR in the SCP (P = 0.025) but not the DCP or full retina (P > 0.05 for both). Deep capillary plexus GPD had the largest AUC for detecting clinically referable eyes (AUC = 0.965, SN = 97.2%, SP = 84.8%), which was significantly larger than the AUC for vessel density of any layer (P < 0.05 for all) but not DCP VLD (P = 0.166). The cutoff value of 2.5% for DCP GPD resulted in a highly sensitive test for detecting clinically referable eyes without adjusting for covariates (AUC = 0.955, SN = 97.2%, SP = 79.7%). CONCLUSIONS Vascular parameters in OCTA, especially in the DCP, have the potential to identify eyes that warrant further evaluation. Geometric perfusion deficits may better distinguish these clinically referable eyes with DR than standard vessel density parameters.
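A minimal sketch of the geometric perfusion deficit idea as defined above, flagging tissue more than 30 um from the nearest vessel on a binarized OCTA slab; the vessel mask, pixel spacing, and FAZ handling are illustrative assumptions, not the authors' published implementation.

```python
# Sketch: geometric perfusion deficit (GPD) as area farther than 30 um from a vessel.
import numpy as np
from scipy.ndimage import distance_transform_edt

def gpd_percent(vessel_mask, um_per_px, faz_mask=None, cutoff_um=30.0):
    """vessel_mask: boolean array, True where the OCTA slab shows perfused vessel."""
    # Distance (pixels) from every non-vessel pixel to the nearest vessel pixel
    dist_px = distance_transform_edt(~vessel_mask)
    deficit = dist_px * um_per_px > cutoff_um
    if faz_mask is not None:                 # optionally exclude the FAZ
        deficit &= ~faz_mask
    analysed = np.ones_like(vessel_mask, bool) if faz_mask is None else ~faz_mask
    return 100.0 * deficit.sum() / analysed.sum()

# Toy example: a 3x3 mm scan sampled at 304x304 pixels (~9.9 um/pixel)
rng = np.random.default_rng(1)
toy_vessels = rng.random((304, 304)) > 0.7   # random "vessel" pixels, illustration only
print(f"GPD ~ {gpd_percent(toy_vessels, um_per_px=3000 / 304):.2f}% of analysed area")
```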
Collapse
Affiliation(s)
- Peter L Nesper
- Department of Ophthalmology, Feinberg School of Medicine, Northwestern University, Chicago, Illinois
| | - Janice X Ong
- Department of Ophthalmology, Feinberg School of Medicine, Northwestern University, Chicago, Illinois
| | - Amani A Fawzi
- Department of Ophthalmology, Feinberg School of Medicine, Northwestern University, Chicago, Illinois.
| |
Collapse
|
236
|
Social Determinants of Health and Impact on Screening, Prevalence, and Management of Diabetic Retinopathy in Adults: A Narrative Review. J Clin Med 2022; 11:jcm11237120. [PMID: 36498694 PMCID: PMC9739502 DOI: 10.3390/jcm11237120] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2022] [Revised: 11/24/2022] [Accepted: 11/28/2022] [Indexed: 12/05/2022] Open
Abstract
Diabetic retinal disease (DRD) is the leading cause of blindness among working-aged individuals with diabetes. In the United States, underserved and minority populations are disproportionately affected by diabetic retinopathy and other diabetes-related health outcomes. In this narrative review, we describe racial disparities in the prevalence and screening of diabetic retinopathy, as well as the wide-range of disparities associated with social determinants of health (SDOH), which include socioeconomic status, geography, health-care access, and education.
Collapse
|
237
|
Adam H, Balagopalan A, Alsentzer E, Christia F, Ghassemi M. Mitigating the impact of biased artificial intelligence in emergency decision-making. COMMUNICATIONS MEDICINE 2022; 2:149. [PMID: 36414774 PMCID: PMC9681767 DOI: 10.1038/s43856-022-00214-4] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2022] [Accepted: 11/07/2022] [Indexed: 11/24/2022] Open
Abstract
BACKGROUND Prior research has shown that artificial intelligence (AI) systems often encode biases against minority subgroups. However, little work has focused on ways to mitigate the harm discriminatory algorithms can cause in high-stakes settings such as medicine. METHODS In this study, we experimentally evaluated the impact biased AI recommendations have on emergency decisions, where participants respond to mental health crises by calling for either medical or police assistance. We recruited 438 clinicians and 516 non-experts to participate in our web-based experiment. We evaluated participant decision-making with and without advice from biased and unbiased AI systems. We also varied the style of the AI advice, framing it either as prescriptive recommendations or descriptive flags. RESULTS Participant decisions are unbiased without AI advice. However, both clinicians and non-experts are influenced by prescriptive recommendations from a biased algorithm, choosing police help more often in emergencies involving African-American or Muslim men. Crucially, using descriptive flags rather than prescriptive recommendations allows respondents to retain their original, unbiased decision-making. CONCLUSIONS Our work demonstrates the practical danger of using biased models in health contexts, and suggests that appropriately framing decision support can mitigate the effects of AI bias. These findings must be carefully considered in the many real-world clinical scenarios where inaccurate or biased models may be used to inform important decisions.
Collapse
Affiliation(s)
- Hammaad Adam
- Institute for Data Systems and Society, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA.
| | - Aparna Balagopalan
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
| | - Emily Alsentzer
- Harvard-MIT Program in Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA; Institute for Medical Engineering & Science, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA; Division of General Internal Medicine, Brigham and Women's Hospital, Boston, MA, 02115, USA
| | - Fotini Christia
- Institute for Data Systems and Society, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA; Sociotechnical Systems Research Center, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA; Department of Political Science, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
| | - Marzyeh Ghassemi
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA; Institute for Medical Engineering & Science, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA; CIFAR AI Chair, Vector Institute, Toronto, ON, M5G 1M1, Canada
| |
Collapse
|
238
|
Cao J, You K, Zhou J, Xu M, Xu P, Wen L, Wang S, Jin K, Lou L, Wang Y, Ye J. A cascade eye diseases screening system with interpretability and expandability in ultra-wide field fundus images: A multicentre diagnostic accuracy study. EClinicalMedicine 2022; 53:101633. [PMID: 36110868 PMCID: PMC9468501 DOI: 10.1016/j.eclinm.2022.101633] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/01/2022] [Revised: 08/08/2022] [Accepted: 08/08/2022] [Indexed: 12/09/2022] Open
Abstract
BACKGROUND Clinical application of artificial intelligence is limited due to the lack of interpretability and expandability in complex clinical settings. We aimed to develop an eye disease screening system with improved interpretability and expandability based on lesion-level dissection, and tested the clinical expandability and auxiliary ability of the system. METHODS The four-hierarchical interpretable eye diseases screening system (IEDSS), based on a novel structural pattern named lesion atlas, was developed to identify 30 eye diseases and conditions using a total of 32,026 ultra-wide field images collected from the Second Affiliated Hospital of Zhejiang University, School of Medicine (SAHZU), the First Affiliated Hospital of University of Science and Technology of China (FAHUSTC), and the Affiliated People's Hospital of Ningbo University (APHNU) in China between November 1, 2016 and February 28, 2022. The performance of IEDSS was compared with ophthalmologists and classic models trained with image-level labels. We further evaluated IEDSS in two external datasets, and tested it in a real-world scenario and in an extended dataset with new phenotypes beyond the training categories. The accuracy (ACC), F1 score and confusion matrix were calculated to assess the performance of IEDSS. FINDINGS IEDSS reached average ACCs (aACC) of 0·9781 (95% CI 0·9739-0·9824), 0·9660 (95% CI 0·9591-0·9730) and 0·9709 (95% CI 0·9655-0·9763), and frequency-weighted average F1 scores of 0·9042 (95% CI 0·8957-0·9127), 0·8837 (95% CI 0·8714-0·8960) and 0·8874 (95% CI 0·8772-0·8972) in the datasets of SAHZU, APHNU and FAHUSTC, respectively. IEDSS reached a higher aACC (0·9781, 95% CI 0·9739-0·9824) than a multi-class image-level model (0·9398, 95% CI 0·9329-0·9467), a classic multi-label image-level model (0·9278, 95% CI 0·9189-0·9366), a novel multi-label image-level model (0·9241, 95% CI 0·9151-0·9331) and a lesion-level model without AdaBoost (0·9381, 95% CI 0·9299-0·9463). In the real-world scenario, the aACC of IEDSS (0·9872, 95% CI 0·9828-0·9915) was higher than that of the senior ophthalmologist (SO) (0·9413, 95% CI 0·9321-0·9504, p = 0·000) and the junior ophthalmologist (JO) (0·8846, 95% CI 0·8722-0·8971, p = 0·000). IEDSS maintained strong performance (ACC = 0·8560, 95% CI 0·8252-0·8868) compared with the JO (ACC = 0·784, 95% CI 0·7479-0·8201, p = 0·003) and the SO (ACC = 0·8500, 95% CI 0·8187-0·8813, p = 0·789) in the extended dataset. INTERPRETATION IEDSS showed excellent and stable performance in identifying common eye conditions and conditions beyond the training categories. The transparency and expandability of IEDSS could tremendously increase its range of clinical application and its practical clinical value, enhancing the efficiency and reliability of clinical practice, especially in remote areas that lack experienced specialists. FUNDING National Natural Science Foundation Regional Innovation and Development Joint Fund (U20A20386), Key Research and Development Program of Zhejiang Province (2019C03020), Clinical Medical Research Centre for Eye Diseases of Zhejiang Province (2021E50007).
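The frequency-weighted average F1 score reported above is commonly computed as a support-weighted mean of per-class F1 scores; the sketch below shows that calculation with scikit-learn on hypothetical labels (how the authors computed it exactly is not specified here).

```python
# Sketch: accuracy and frequency-weighted (support-weighted) F1 on hypothetical labels.
from sklearn.metrics import accuracy_score, f1_score

y_true = ["normal", "DR", "AMD", "normal", "DR", "normal", "RVO", "AMD"]
y_pred = ["normal", "DR", "normal", "normal", "DR", "normal", "RVO", "AMD"]

print("accuracy   :", accuracy_score(y_true, y_pred))
print("weighted F1:", round(f1_score(y_true, y_pred, average="weighted"), 4))
```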
Collapse
Affiliation(s)
- Jing Cao
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, Zhejiang, China
| | - Kun You
- Zhejiang Feitu Medical Imaging Co.,LTD, Hangzhou, Zhejiang, China
| | - Jingxin Zhou
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, Zhejiang, China
| | - Mingyu Xu
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, Zhejiang, China
| | - Peifang Xu
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, Zhejiang, China
| | - Lei Wen
- The First Affiliated Hospital of University of Science and Technology of China, Hefei, Anhui, China
| | - Shengzhan Wang
- The Affiliated People's Hospital of Ningbo University, Ningbo, Zhejiang, China
| | - Kai Jin
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, Zhejiang, China
| | - Lixia Lou
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, Zhejiang, China
| | - Yao Wang
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, Zhejiang, China
| | - Juan Ye
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, Zhejiang, China
- Corresponding author at: No. 1 West Lake Avenue, Hangzhou, Zhejiang Province, China, 310009.
| |
Collapse
|
239
|
Jin K, Ye J. Artificial intelligence and deep learning in ophthalmology: Current status and future perspectives. ADVANCES IN OPHTHALMOLOGY PRACTICE AND RESEARCH 2022; 2:100078. [PMID: 37846285 PMCID: PMC10577833 DOI: 10.1016/j.aopr.2022.100078] [Citation(s) in RCA: 29] [Impact Index Per Article: 14.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Received: 06/21/2022] [Revised: 08/01/2022] [Accepted: 08/18/2022] [Indexed: 10/18/2023]
Abstract
Background The ophthalmology field was among the first to adopt artificial intelligence (AI) in medicine. The availability of digitized ocular images and substantial data have made deep learning (DL) a popular topic. Main text At the moment, AI in ophthalmology is mostly used to improve disease diagnosis and assist decision-making aiming at ophthalmic diseases like diabetic retinopathy (DR), glaucoma, age-related macular degeneration (AMD), cataract and other anterior segment diseases. However, most of the AI systems developed to date are still in the experimental stages, with only a few having achieved clinical applications. There are a number of reasons for this phenomenon, including security, privacy, poor pervasiveness, trust and explainability concerns. Conclusions This review summarizes AI applications in ophthalmology, highlighting significant clinical considerations for adopting AI techniques and discussing the potential challenges and future directions.
Collapse
Affiliation(s)
- Kai Jin
- Department of Ophthalmology, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
| | - Juan Ye
- Department of Ophthalmology, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
| |
Collapse
|
240
|
Cai CX, Kim M, Lundeen EA, Benoit SR. Differences in receipt of recommended eye examinations by comorbidity status and healthcare utilization among nonelderly adults with diabetes. J Diabetes 2022; 14:749-757. [PMID: 36285845 PMCID: PMC9705799 DOI: 10.1111/1753-0407.13328] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/04/2022] [Revised: 08/27/2022] [Accepted: 09/30/2022] [Indexed: 11/28/2022] Open
Abstract
BACKGROUND To evaluate the effect of diabetes comorbidities by baseline healthcare utilization on receipt of recommended eye examinations. METHODS Retrospective analysis of 310 691 nonelderly adults with type 2 diabetes in the IBM MarketScan Commercial Database from 2016 to 2019. Patients were grouped based on diabetes-concordant (related) or -discordant (unrelated) comorbidities. Logistic regression was used to estimate the prevalence ratio (PR) for eye examinations by comorbidity status, healthcare utilization, and an interaction between comorbidities and utilization, controlling for age, sex, region, and major eye disease. RESULTS Prevalence of biennial eye examinations varied by the four comorbidity groups: 43.5% (diabetes only), 52.7% (concordant + discordant comorbidities), 48.0% (concordant comorbidities only), and 45.3% (discordant comorbidities only). In the lowest healthcare utilization tertile, the concordant-only and concordant + discordant groups had lower prevalence of examinations compared to diabetes only (PR 0.95 [95% CI 0.92-0.98] and PR 0.91 [95% CI 0.88-0.95], respectively). In the medium utilization tertile, the discordant-only and concordant + discordant groups had lower prevalence of examinations (PR 0.89 [0.83-0.95] and PR 0.94 [0.90-0.98], respectively). In the highest utilization tertile, the concordant-only and concordant + discordant groups had higher prevalence of examinations. CONCLUSIONS Among patients with low healthcare utilization, having comorbid conditions is associated with lower prevalence of eye examinations. Among those with medium healthcare utilization, patients with diabetes-discordant comorbidities are particularly vulnerable. This study highlights populations of diabetes patients who would benefit from increased assistance in receiving vision-preserving eye examinations.
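To make the regression described above concrete, here is a minimal sketch of estimating prevalence ratios with a comorbidity-by-utilization interaction on synthetic data. The abstract reports logistic regression; this sketch substitutes a modified Poisson GLM with robust standard errors, a common way to obtain prevalence ratios directly, so the model choice, variable names, and data are illustrative assumptions rather than the authors' actual analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical analytic table: one row per patient (not the MarketScan data).
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "eye_exam": rng.integers(0, 2, n),                       # received biennial eye exam (1/0)
    "comorbidity": rng.choice(["diabetes_only", "concordant", "discordant", "both"], n),
    "utilization": rng.choice(["low", "medium", "high"], n),  # baseline utilization tertile
    "age": rng.integers(18, 65, n),
    "female": rng.integers(0, 2, n),
})

# Modified Poisson regression (Poisson GLM + robust SEs) on a binary outcome yields
# prevalence ratios; the interaction lets the comorbidity effect differ by tertile.
formula = (
    "eye_exam ~ C(comorbidity, Treatment('diabetes_only'))"
    " * C(utilization, Treatment('low')) + age + female"
)
model = smf.glm(formula, data=df, family=sm.families.Poisson()).fit(cov_type="HC1")

prevalence_ratios = np.exp(model.params)   # exponentiated coefficients approximate PRs
print(prevalence_ratios.round(2))
```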
Affiliation(s)
- Cindy X. Cai: Wilmer Eye Institute, Johns Hopkins Hospital, Baltimore, Maryland, USA
- Minchul Kim: Center for Outcomes Research, Department of Internal Medicine, University of Illinois College of Medicine Peoria, Peoria, Illinois, USA
- Elizabeth A. Lundeen: Division of Diabetes Translation, National Center for Chronic Disease Prevention and Health Promotion, Centers for Disease Control and Prevention, Atlanta, Georgia, USA
- Stephen R. Benoit: Division of Diabetes Translation, National Center for Chronic Disease Prevention and Health Promotion, Centers for Disease Control and Prevention, Atlanta, Georgia, USA
241
Sheng B, Chen X, Li T, Ma T, Yang Y, Bi L, Zhang X. An overview of artificial intelligence in diabetic retinopathy and other ocular diseases. Front Public Health 2022; 10:971943. [PMID: 36388304 PMCID: PMC9650481 DOI: 10.3389/fpubh.2022.971943] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2022] [Accepted: 10/04/2022] [Indexed: 01/25/2023] Open
Abstract
Artificial intelligence (AI), also known as machine intelligence, is the branch of science concerned with endowing machines with human-like intelligence through computer programs. From healthcare delivery to the precise prevention, diagnosis, and management of disease, AI is progressing rapidly across interdisciplinary fields, including ophthalmology. Ophthalmology is at the forefront of AI in medicine because the diagnosis of ocular diseases relies heavily on imaging. Recently, deep learning-based AI screening and prediction models have been applied to the most common causes of visual impairment and blindness, including glaucoma, cataract, age-related macular degeneration (ARMD), and diabetic retinopathy (DR). The success of AI in medicine is primarily attributed to the development of deep learning algorithms, computational models composed of multiple layers of simulated neurons that can learn representations of data at multiple levels of abstraction. The Inception-v3 algorithm and the transfer learning concept have been applied in DR and ARMD to reuse fundus image features learned from natural (non-medical) images, allowing an AI system to be trained with a fraction of the commonly used training data (<1%). The trained AI system achieved performance comparable to that of human experts in classifying ARMD and diabetic macular edema on optical coherence tomography images. In this study, we highlight the fundamental concepts of AI and its application in these four major ocular diseases and further discuss the current challenges, as well as the prospects, in ophthalmology.
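As an illustration of the transfer-learning approach described above (reusing features learned from natural images to train on a small fundus dataset), the following is a minimal Keras sketch that freezes an ImageNet-pretrained Inception-v3 base and trains only a small classification head. The directory path, class count, and training settings are assumptions for illustration, not details taken from the cited work.

```python
import tensorflow as tf

IMG_SIZE = (299, 299)          # Inception-v3's native input size
NUM_CLASSES = 2                # e.g. referable DR vs. no referable DR (assumed)

# Hypothetical directory of labelled fundus photographs, one subfolder per class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "fundus_images/train", image_size=IMG_SIZE, batch_size=32)

# Reuse features learned on ImageNet (non-medical images): freeze the convolutional base.
base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.inception_v3.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

# Only the small head is trained, which is why a fraction of the usual data can suffice.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```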
Affiliation(s)
- Bin Sheng: Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China; Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Xiaosi Chen: Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China; Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Tingyao Li: Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China; Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China
- Tianxing Ma: Chongqing University-University of Cincinnati Joint Co-op Institute, Chongqing University, Chongqing, China
- Yang Yang: Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China; Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Lei Bi: School of Computer Science, University of Sydney, Sydney, NSW, Australia
- Xinyuan Zhang: Beijing Retinal and Choroidal Vascular Diseases Study Group, Beijing Tongren Hospital, Beijing, China; Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
242
Alali NM, Albazei A, Alotaibi HM, Almohammadi AM, Alsirhani EK, Alanazi TS, Alshammri BJ, Alqahtani MQ, Magliyah M, Alreshidi S, Albalawi HB. Diabetic Retinopathy and Eye Screening: Diabetic Patients Standpoint, Their Practice, and Barriers; A Cross-Sectional Study. J Clin Med 2022; 11:6351. [PMID: 36362578 PMCID: PMC9654427 DOI: 10.3390/jcm11216351] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2022] [Revised: 10/14/2022] [Accepted: 10/25/2022] [Indexed: 08/31/2023] Open
Abstract
Diabetes mellitus (DM) is one of the most common systemic disorders in Saudi Arabia and worldwide. Diabetic retinopathy (DR) is a potentially blinding ophthalmic consequence of uncontrolled DM, and its early detection allows earlier, potentially sight-saving intervention. The aim of this cross-sectional study was to assess patients' knowledge and practices regarding DR and to identify barriers to eye screening and ophthalmologist check-ups. The study included 386 diabetic patients; 131 (33.9%) had T1DM and 188 (48.7%) had T2DM. Most diabetic patients (73.3%) knew that they must have an eye check-up regardless of their blood sugar level, 80.3% agreed that DM affects the retina, 56% agreed that DM complications are always symptomatic, and 84.5% knew that DM could affect their eyes. Blindness was recognized as a complication of diabetic retinopathy by 65% of the diabetic patients. Good knowledge was significantly more common among patients older than 50 years (54.9%) than among those younger than 35 years (40.9%) (p = 0.030), and among university graduates (61.2%) than among illiterate patients (33.3%) (p = 0.006). Regarding barriers to earlier eye screening, lack of knowledge was reported by 38.3% of patients, followed by lack of access to eye care (24.4%). In conclusion, there is a remarkable increase in the awareness of DR among the Saudi population, which might lead to earlier detection and management of DR.
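Group comparisons of knowledge proportions such as those reported above (by age group or education) are typically made with a chi-squared test; the sketch below shows the calculation on an illustrative 2x2 table whose counts are invented, not taken from the study.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Illustrative 2x2 table: good vs. poor DR knowledge by age group (not the study's raw counts).
#                  good  poor
table = np.array([[ 78,   64],    # age > 50 years
                  [ 36,   52]])   # age < 35 years

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p_value:.3f}")
```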
Affiliation(s)
- Naif Mamdouh Alali: Division of Ophthalmology, Department of Surgery, Faculty of Medicine, University of Tabuk, Tabuk 47512, Saudi Arabia
- Alanuad Albazei: Medical Education Department, King Khaled Eye Specialized Hospital, Riyadh 11462, Saudi Arabia
- Horia Mohammed Alotaibi: Ophthalmology Department, Imam Abdulrahman Bin Faisal University, Dammam 34212, Saudi Arabia
- Turki Saleh Alanazi: Internal Medicine Department, King Salman Armed Forces Hospital, Tabuk 47512, Saudi Arabia
- Badriah Jariad Alshammri: Obstetrics and Gynecology Department, King Salman Armed Forces Hospital, Tabuk 47512, Saudi Arabia
- Moustafa Magliyah: Ophthalmology Department, Prince Mohammed Medical City, Sakakah 42421, Saudi Arabia
- Shaker Alreshidi: Ophthalmology Department, Almajmaah University, Almajmaah 15341, Saudi Arabia
- Hani B. Albalawi: Division of Ophthalmology, Department of Surgery, Faculty of Medicine, University of Tabuk, Tabuk 47512, Saudi Arabia
243
Influence of Different Types of Retinal Cameras on the Performance of Deep Learning Algorithms in Diabetic Retinopathy Screening. Life (Basel) 2022; 12:life12101610. [PMID: 36295045 PMCID: PMC9604597 DOI: 10.3390/life12101610] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2022] [Revised: 10/12/2022] [Accepted: 10/13/2022] [Indexed: 11/17/2022] Open
Abstract
Background: The aim of this study was to assess the performance of regional graders and artificial intelligence algorithms across retinal cameras with different specifications in classifying an image as gradable or ungradable. Methods: Study subjects were included from a community-based nationwide diabetic retinopathy screening program in Thailand. Various non-mydriatic fundus cameras were used for image acquisition, including the Kowa Nonmyd, Kowa Nonmyd α-DⅢ, Kowa Nonmyd 7, Kowa Nonmyd WX, Kowa VX 10 α, Kowa VX 20 and Nidek AFC 210. All retinal photographs were graded by deep learning algorithms and human graders and compared with a standard reference. Results: Images were divided into two categories, gradable and ungradable. A total of 4852 participants with 19,408 fundus images were included, of which 15,351 (79.09%) were gradable and the remaining 4057 (20.90%) were ungradable. Conclusions: The deep learning (DL) algorithm demonstrated better sensitivity, specificity and kappa than the human graders for all eight types of non-mydriatic fundus cameras, and showed more consistent diagnostic performance than the human graders across images of varying quality and camera types.
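For reference, the sensitivity, specificity, and kappa of a grader (human or algorithm) against the reference standard for gradable/ungradable classification can be computed as in the minimal sketch below; the label arrays are invented for illustration.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

# Hypothetical labels: 1 = gradable, 0 = ungradable (reference standard vs. one grader).
reference = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])
grader    = np.array([1, 0, 0, 1, 0, 1, 1, 1, 1, 0])   # DL algorithm or human grader

tn, fp, fn, tp = confusion_matrix(reference, grader, labels=[0, 1]).ravel()
sensitivity = tp / (tp + fn)      # gradable images correctly identified
specificity = tn / (tn + fp)      # ungradable images correctly identified
kappa = cohen_kappa_score(reference, grader)   # chance-corrected agreement

print(f"sensitivity={sensitivity:.2f}  specificity={specificity:.2f}  kappa={kappa:.2f}")
```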
244
Ferro Desideri L, Rutigliani C, Corazza P, Nastasi A, Roda M, Nicolo M, Traverso CE, Vagge A. The upcoming role of Artificial Intelligence (AI) for retinal and glaucomatous diseases. JOURNAL OF OPTOMETRY 2022; 15 Suppl 1:S50-S57. [PMID: 36216736 PMCID: PMC9732476 DOI: 10.1016/j.optom.2022.08.001] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Received: 04/04/2022] [Revised: 08/14/2022] [Accepted: 08/16/2022] [Indexed: 06/16/2023]
Abstract
In recent years, artificial intelligence (AI) and deep learning (DL) models have attracted increasing global interest in the field of ophthalmology. DL models are considered the current state of the art among AI technologies, as they can recognize, quantify and describe pathological clinical features. Their role is currently being investigated for the early diagnosis and management of several retinal diseases and glaucoma. The application of DL models to fundus photographs, visual fields and optical coherence tomography (OCT) imaging has provided promising results in the early detection of diabetic retinopathy (DR), wet age-related macular degeneration (w-AMD), retinopathy of prematurity (ROP) and glaucoma. In this review we analyze the current evidence on AI applied to these ocular diseases and discuss possible future developments and potential clinical implications, without neglecting the present limitations and challenges that must be addressed before AI and DL models can be adopted as powerful tools in routine clinical practice.
Affiliation(s)
- Lorenzo Ferro Desideri: University Eye Clinic of Genoa, IRCCS Ospedale Policlinico San Martino, Genoa, Italy; Department of Neurosciences, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health (DiNOGMI), University of Genoa, Italy
- Paolo Corazza: University Eye Clinic of Genoa, IRCCS Ospedale Policlinico San Martino, Genoa, Italy; Department of Neurosciences, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health (DiNOGMI), University of Genoa, Italy
- Matilde Roda: Ophthalmology Unit, Department of Experimental, Diagnostic and Specialty Medicine (DIMES), Alma Mater Studiorum University of Bologna and S.Orsola-Malpighi Teaching Hospital, Bologna, Italy
- Massimo Nicolo: University Eye Clinic of Genoa, IRCCS Ospedale Policlinico San Martino, Genoa, Italy; Department of Neurosciences, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health (DiNOGMI), University of Genoa, Italy
- Carlo Enrico Traverso: University Eye Clinic of Genoa, IRCCS Ospedale Policlinico San Martino, Genoa, Italy; Department of Neurosciences, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health (DiNOGMI), University of Genoa, Italy
- Aldo Vagge: University Eye Clinic of Genoa, IRCCS Ospedale Policlinico San Martino, Genoa, Italy; Department of Neurosciences, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health (DiNOGMI), University of Genoa, Italy
245
Systematic analysis of the test design and performance of AI/ML-based medical devices approved for triage/detection/diagnosis in the USA and Japan. Sci Rep 2022; 12:16874. [PMID: 36207474 PMCID: PMC9542463 DOI: 10.1038/s41598-022-21426-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2021] [Accepted: 09/27/2022] [Indexed: 11/08/2022] Open
Abstract
The development of computer-aided detection (CAD) using artificial intelligence (AI) and machine learning (ML) is rapidly evolving. Submission of AI/ML-based CAD devices for regulatory approval requires information about clinical trial design and performance criteria, but the requirements vary between countries. This study compares the requirements for AI/ML-based CAD devices approved by the US Food and Drug Administration (FDA) and the Pharmaceuticals and Medical Devices Agency (PMDA) in Japan. A list of 45 FDA-approved and 12 PMDA-approved AI/ML-based CAD devices was compiled. In the USA, devices classified as computer-aided simple triage were approved based on standalone software testing, whereas devices classified as computer-aided detection/diagnosis were approved based on reader study testing. In Japan, however, there was no clear distinction in evaluation methods between these categories. In the USA, a prospective randomized controlled trial was conducted for AI/ML-based CAD devices used for the detection of colorectal polyps, whereas in Japan such devices were approved based on standalone software testing. These findings indicate that the two countries' different viewpoints on AI/ML-based CAD influenced the selection of different evaluation methods, and may be useful for defining a unified global development and approval standard for AI/ML-based CAD.
246
Font O, Torrents-Barrena J, Royo D, García SB, Zarranz-Ventura J, Bures A, Salinas C, Zapata MÁ. Validation of an autonomous artificial intelligence-based diagnostic system for holistic maculopathy screening in a routine occupational health checkup context. Graefes Arch Clin Exp Ophthalmol 2022; 260:3255-3265. [PMID: 35567610 PMCID: PMC9477940 DOI: 10.1007/s00417-022-05653-2] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2021] [Revised: 03/15/2022] [Accepted: 03/31/2022] [Indexed: 02/08/2023] Open
Abstract
PURPOSE This study aims to evaluate the ability of an autonomous artificial intelligence (AI) system to detect the most common central retinal pathologies in fundus photography. METHODS Retrospective diagnostic test evaluation on a raw dataset of 5918 images (2839 individuals) acquired with non-mydriatic cameras during routine occupational health checkups. Three camera models were employed: Optomed Aurora (field of view (FOV) 50°, 88% of the dataset), ZEISS VISUSCOUT 100 (FOV 40°, 9%), and Optomed SmartScope M5 (FOV 40°, 3%). Image acquisition took 2 min per patient. Ground truth for each image was determined by 2 masked retina specialists, and disagreements were resolved by a 3rd retina specialist. The specific pathologies considered for evaluation were "diabetic retinopathy" (DR), "age-related macular degeneration" (AMD), "glaucomatous optic neuropathy" (GON), and "nevus." Images with maculopathy signs that did not match the described taxonomy were classified as "other." RESULTS The combination of algorithms to detect any abnormality had an area under the curve (AUC) of 0.963, with a sensitivity of 92.9% and a specificity of 86.8%. The individual algorithms performed as follows: AMD, AUC 0.980 (sensitivity 93.8%; specificity 95.7%); DR, AUC 0.950 (sensitivity 81.1%; specificity 94.8%); GON, AUC 0.889 (sensitivity 53.6%; specificity 95.7%); nevus, AUC 0.931 (sensitivity 86.7%; specificity 90.7%). CONCLUSION Our holistic AI approach reaches high diagnostic accuracy in the simultaneous detection of DR, AMD, and nevus. The integration of pathology-specific algorithms permits higher sensitivities with minimal impact on specificity, and reduces the risk of missing incidental findings. Deep learning may facilitate wider screening for eye diseases.
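A minimal sketch of how pathology-specific outputs might be combined into an "any abnormality" score and summarized by AUC, sensitivity, and specificity is shown below. The maximum-probability combination rule, the Youden-index operating point, and the random data are illustrative assumptions and are not taken from the cited study.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(42)
n = 1000

# Hypothetical per-image probabilities from four pathology-specific models.
p_dr, p_amd, p_gon, p_nevus = (rng.random(n) for _ in range(4))
y_any_abnormality = rng.integers(0, 2, n)   # ground truth from retina specialists (simulated)

# One simple way to combine pathology-specific outputs into an
# "any abnormality" score is to take the maximum probability (assumed rule).
p_any = np.max(np.stack([p_dr, p_amd, p_gon, p_nevus]), axis=0)

auc = roc_auc_score(y_any_abnormality, p_any)
fpr, tpr, thresholds = roc_curve(y_any_abnormality, p_any)

# Pick the operating point that maximises Youden's J = sensitivity + specificity - 1.
best = np.argmax(tpr - fpr)
print(f"AUC={auc:.3f}  sensitivity={tpr[best]:.3f}  specificity={1 - fpr[best]:.3f}")
```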
Affiliation(s)
- Octavi Font: Optretina Image Reading Team, Barcelona, Spain
- Jordina Torrents-Barrena: BCN MedTech, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Dídac Royo: Optretina Image Reading Team, Barcelona, Spain
- Sandra Banderas García: Facultat de Cirurgia i Ciències Morfològiques, Universitat Autònoma de Barcelona (UAB), Barcelona, Spain; Ophthalmology Department, Hospital Vall d'Hebron, Barcelona, Spain
- Javier Zarranz-Ventura: Institut Clinic of Ophthalmology (ICOF), Hospital Clinic, Barcelona, Spain; Institut d'Investigacions Biomediques August Pi I Sunyer (IDIBAPS), Barcelona, Spain
- Anniken Bures: Optretina Image Reading Team, Barcelona, Spain; Instituto de Microcirugía Ocular (IMO), Barcelona, Spain
- Cecilia Salinas: Optretina Image Reading Team, Barcelona, Spain; Instituto de Microcirugía Ocular (IMO), Barcelona, Spain
- Miguel Ángel Zapata: Optretina Image Reading Team, Barcelona, Spain; Ophthalmology Department, Hospital Vall d'Hebron, Barcelona, Spain
247
Yang Y, Pan J, Yuan M, Lai K, Xie H, Ma L, Xu S, Deng R, Zhao M, Luo Y, Lin X. Performance of the AIDRScreening system in detecting diabetic retinopathy in the fundus photographs of Chinese patients: a prospective, multicenter, clinical study. ANNALS OF TRANSLATIONAL MEDICINE 2022; 10:1088. [PMID: 36388839 PMCID: PMC9652560 DOI: 10.21037/atm-22-350] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/20/2022] [Accepted: 07/15/2022] [Indexed: 01/21/2023]
Abstract
Background Diabetic retinopathy (DR) is the leading cause of blindness in the working-age population worldwide, and there is a large unmet need for DR screening in China. This observational, prospective, multicenter, gold standard-controlled study sought to evaluate the effectiveness and safety of the AIDRScreening system (v. 1.0), an artificial intelligence (AI)-enabled system that detects DR in the Chinese population from fundus photographs. Methods Participants with diabetes mellitus (DM) were recruited. Fundus photographs (field 1 and field 2) of 1 eye of each participant were graded by the AIDRScreening system (v. 1.0) to detect referable DR (RDR), and the results were compared with those of masked manual grading (the gold standard) by the Zhongshan Image Reading Center. The primary outcomes were the sensitivity and specificity of the AIDRScreening system in detecting RDR. Other outcomes included the system's diagnostic accuracy, positive predictive value, negative predictive value, diagnostic accuracy gain rate, and average diagnostic time gain rate. Results Among the 1,001 enrolled participants with DM, 962 (96.1%) were included in the final analyses. The participants had a median age of 60.61 years (range: 20.18-85.78 years), and 48.2% were men. The manual grading system detected RDR in 399 (41.48%) participants. The AIDRScreening system had a sensitivity of 86.72% (95% CI: 83.39-90.05%), a specificity of 96.09% (95% CI: 94.14-97.54%), and a false-positive rate of 3.91% in the detection of RDR. The diagnostic accuracy gain rate of the AIDRScreening system was 16.57% higher than that of the investigator, and the average diagnostic time gain rate was -37.32%, indicating a shorter diagnostic time. Conclusions The automated AIDRScreening system can detect RDR with high accuracy, although it cannot detect maculopathy. Implementation of the AIDRScreening system may increase the efficiency of DR screening.
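Sensitivity and specificity with 95% CIs of the kind reported above can be computed from a 2x2 table as in the sketch below; the counts are reconstructed to roughly match the reported point estimates and are illustrative only, not the trial's raw data.

```python
from statsmodels.stats.proportion import proportion_confint

# Illustrative 2x2 counts against the reading-centre gold standard (reconstructed, not the trial's data).
tp, fn = 346, 53     # referable DR correctly / incorrectly classified
tn, fp = 541, 22     # non-referable DR correctly / incorrectly classified

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

# Wilson score intervals for the two proportions.
sens_lo, sens_hi = proportion_confint(tp, tp + fn, alpha=0.05, method="wilson")
spec_lo, spec_hi = proportion_confint(tn, tn + fp, alpha=0.05, method="wilson")

print(f"sensitivity {sensitivity:.3f} (95% CI {sens_lo:.3f}-{sens_hi:.3f})")
print(f"specificity {specificity:.3f} (95% CI {spec_lo:.3f}-{spec_hi:.3f})")
```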
Affiliation(s)
- Yao Yang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
- Jianying Pan: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
- Miner Yuan: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
- Kunbei Lai: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
- Huirui Xie: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
- Li Ma: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
- Suzhong Xu: Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Ruzhi Deng: Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Mingwei Zhao: Department of Ophthalmology, Peking University People's Hospital, Beijing, China
- Yan Luo: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
- Xiaofeng Lin: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
248
Warman R, Warman A, Warman P, Degnan A, Blickman J, Chowdhary V, Dash D, Sangal R, Vadhan J, Bueso T, Windisch T, Neves G. Deep Learning System Boosts Radiologist Detection of Intracranial Hemorrhage. Cureus 2022; 14:e30264. [PMID: 36381767 PMCID: PMC9653089 DOI: 10.7759/cureus.30264] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 10/13/2022] [Indexed: 01/25/2023] Open
Abstract
BACKGROUND Intracranial hemorrhage (ICH) requires emergent medical treatment for positive outcomes. While previous artificial intelligence (AI) solutions achieved rapid diagnostics, none were shown to improve the performance of radiologists in detecting ICHs. Here, we show that the Caire ICH artificial intelligence system enhances radiologists' ICH diagnosis performance. METHODS A dataset of non-contrast-enhanced axial cranial computed tomography (CT) scans (n=532) was labeled for the presence or absence of an ICH; if an ICH was detected, its subtype was identified. After a washout period, the three radiologists reviewed the same dataset with the assistance of the Caire ICH system. Performance was measured with respect to reader agreement, accuracy, sensitivity, and specificity relative to the ground truth, defined as reader consensus. RESULTS Caire ICH improved inter-reader agreement by an average of 5.76% in a dataset with an ICH prevalence of 74.3%. Further, radiologists using Caire ICH detected an average of 18 more ICHs and significantly increased their accuracy by 6.15%, their sensitivity by 4.6%, and their specificity by 10.62%. The Caire ICH system also improved the radiologists' ability to accurately identify the ICH subtypes present. CONCLUSION The Caire ICH device significantly improves the performance of a cohort of radiologists. Such a device has the potential to improve patient outcomes and reduce misdiagnosis of ICH.
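Multi-reader agreement of the kind reported above is often summarized with a chance-corrected statistic such as Fleiss' kappa; the sketch below computes it for three simulated readers before and after assistance. All data are synthetic and the error rates arbitrary; the cited study's own agreement metric may differ.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(3)
n_scans = 200

# Hypothetical ICH reads (1 = hemorrhage, 0 = none) from three radiologists,
# before and after AI assistance; the real study's data are not reproduced here.
truth = rng.integers(0, 2, n_scans)

def simulate_reads(error_rate):
    # Each reader independently flips the true label with the given error rate.
    flips = rng.random((n_scans, 3)) < error_rate
    return np.where(flips, 1 - truth[:, None], truth[:, None])

reads_unassisted = simulate_reads(error_rate=0.15)
reads_assisted = simulate_reads(error_rate=0.08)

for label, reads in [("unassisted", reads_unassisted), ("AI-assisted", reads_assisted)]:
    table, _ = aggregate_raters(reads)          # subjects x categories count table
    print(label, "Fleiss' kappa =", round(fleiss_kappa(table), 3))
```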
Affiliation(s)
- Andrew Degnan: Radiology, University of Pittsburgh Medical Center (UPMC) Children's Hospital of Pittsburgh, Pittsburgh, USA
- Dev Dash: Emergency Medicine, Stanford University, Stanford, USA
- Rohit Sangal: Emergency Medicine, Yale School of Medicine, New Haven, USA
- Jason Vadhan: Emergency Medicine, The University of Texas Southwestern (UTSW), Dallas, USA
- Tulio Bueso: Neurology, The Texas Tech University Health Sciences Center (TTUHSC), Lubbock, USA
- Gabriel Neves: Neurology, The Texas Tech University Health Sciences Center (TTUHSC), Lubbock, USA
249
Pareek A, Lungren MP, Halabi SS. The requirements for performing artificial-intelligence-related research and model development. Pediatr Radiol 2022; 52:2094-2100. [PMID: 35996023 DOI: 10.1007/s00247-022-05483-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/25/2022] [Revised: 07/06/2022] [Accepted: 08/09/2022] [Indexed: 11/25/2022]
Abstract
Artificial intelligence research in health care has undergone tremendous growth in the last several years thanks to the explosion of digital health care data and of systems that can leverage large amounts of data to learn patterns applicable to clinical tasks. In addition, given the broad acceleration of machine learning across industries such as transportation, media and commerce, there has been significant growth in demand for machine-learning practitioners such as engineers and data scientists, who have skill sets applicable to health care use cases but who simultaneously lack important health care domain expertise. The purpose of this paper is to discuss the requirements for building an artificial-intelligence research enterprise, including the research team, technical software and hardware, and the procurement and curation of health care data.
Affiliation(s)
- Anuj Pareek: Stanford AIMI Center, Stanford University, 1701 Page Mill Road, Palo Alto, CA 94304, USA
- Matthew P Lungren: Stanford AIMI Center, Stanford University, 1701 Page Mill Road, Palo Alto, CA 94304, USA
- Safwan S Halabi: Department of Medical Imaging, Ann & Robert H. Lurie Children's Hospital of Chicago, Chicago, IL, USA
250
Lin S, Li L, Zou H, Xu Y, Lu L. Medical Staff and Resident Preferences for Using Deep Learning in Eye Disease Screening: Discrete Choice Experiment. J Med Internet Res 2022; 24:e40249. [PMID: 36125854 PMCID: PMC9533207 DOI: 10.2196/40249] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2022] [Revised: 08/08/2022] [Accepted: 09/02/2022] [Indexed: 11/17/2022] Open
Abstract
Background Deep learning–assisted eye disease diagnosis technology is increasingly applied in eye disease screening. However, no research has examined the prerequisites under which health care service providers and residents are willing to use it. Objective The aim of this paper is to reveal the preferences of health care service providers and residents for using artificial intelligence (AI) in community-based eye disease screening, particularly their preferences regarding accuracy. Methods Discrete choice experiments for health care providers and residents were conducted in Shanghai, China. In total, 34 medical institutions with adequate AI-assisted screening experience participated. A total of 39 medical staff and 318 residents answered the questionnaire and made trade-offs among alternative screening strategies with different attributes, including missed diagnosis rate, overdiagnosis rate, screening result feedback efficiency, level of ophthalmologist involvement, organizational form, cost, and screening result feedback form. Conditional logit models with the stepwise selection method were used to estimate the preferences. Results Medical staff preferred high accuracy: the specificity of deep learning models should be more than 90% (odds ratio [OR]=0.61 for 10% overdiagnosis; P<.001), a requirement much stricter than the Food and Drug Administration standards. Accuracy, however, was not the residents' priority; rather, they preferred to have doctors involved in the screening process. In addition, compared with a fully manual diagnosis, AI technology was favored by the medical staff (OR=2.08 for the semiautomated AI model and OR=2.39 for the fully automated AI model; P<.001), whereas residents disfavored AI technology without doctors' supervision (OR=0.24; P<.001). Conclusions A deep learning model under doctors' supervision is strongly recommended, and the specificity of the model should be more than 90%. In addition, digital transformation should help medical staff move away from heavy and repetitive work and spend more time communicating with residents.
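The conditional logit estimation behind the odds ratios above can be sketched by maximizing the McFadden choice likelihood directly; in the example below the choice sets, attribute coding, and coefficients are synthetic assumptions, and the exponentiated coefficients correspond to the reported ORs.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

rng = np.random.default_rng(7)
n_sets, n_alts, n_attrs = 500, 3, 4   # choice sets, alternatives per set, attributes

# Hypothetical attribute matrix (e.g. overdiagnosis rate, missed-diagnosis rate,
# doctor involvement, cost, all coded/standardised); true_beta generates the choices.
X = rng.normal(size=(n_sets, n_alts, n_attrs))
true_beta = np.array([-0.5, -0.8, 0.6, -0.3])
util = X @ true_beta + rng.gumbel(size=(n_sets, n_alts))
choice = util.argmax(axis=1)          # index of the chosen alternative in each set

def neg_log_likelihood(beta):
    v = X @ beta                                        # deterministic utilities
    log_probs = v - logsumexp(v, axis=1, keepdims=True) # conditional logit choice probabilities
    return -log_probs[np.arange(n_sets), choice].sum()

fit = minimize(neg_log_likelihood, x0=np.zeros(n_attrs), method="BFGS")
odds_ratios = np.exp(fit.x)           # OR per one-unit change in each attribute
print(np.round(odds_ratios, 2))
```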
Affiliation(s)
- Senlin Lin: Shanghai Eye Disease Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai, China; Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, China
- Liping Li: Shanghai Hongkou Center for Disease Control and Prevention, Shanghai, China
- Haidong Zou: Shanghai Eye Disease Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai, China; Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, China
- Yi Xu: Shanghai Eye Disease Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai, China; Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, China
- Lina Lu: Shanghai Eye Disease Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai, China; Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, China