1. Niestrata M, Radia M, Jackson J, Allan B. Global review of publicly available image datasets for the anterior segment of the eye. J Cataract Refract Surg 2024; 50:1184-1190. [PMID: 39150312] [DOI: 10.1097/j.jcrs.0000000000001538]
Abstract
This study comprehensively reviewed publicly available image datasets for the anterior segment, with a focus on cataract, refractive, and corneal surgeries. The goal was to assess the characteristics of existing datasets and identify areas for improvement. PubMed and Google searches were performed using the search terms "refractive surgery," "anterior segment," "cornea," "corneal," and "cataract" AND "database," each combined with the related term "imaging." Results of these searches were collated, identifying 26 publicly available anterior segment image datasets. Imaging modalities included optical coherence tomography, photography, and confocal microscopy. Most datasets were small, and 80% originated in the U.S., China, or Europe. Over 50% of images were from normal eyes. Disease states represented included keratoconus, corneal ulcers, and Fuchs dystrophy. Most of the datasets were incompletely described. To promote accessibility going forward to 2030, the ESCRS Digital Health Special Interest Group will annually update a list of available anterior segment image datasets at www.escrs.org.
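For readers who want to reproduce this kind of boolean literature query programmatically, a minimal sketch using Biopython's Entrez wrapper for the NCBI E-utilities follows. The exact query string, contact address, and retmax value are illustrative assumptions, not the authors' protocol.

```python
# A minimal sketch of reproducing the abstract's boolean search via the
# NCBI E-utilities (Biopython's Entrez wrapper). The query string is an
# assumption modeled on the reported search terms.
from Bio import Entrez

Entrez.email = "you@example.org"  # NCBI requires a contact address

anatomy_terms = ["refractive surgery", "anterior segment",
                 "cornea", "corneal", "cataract"]
# Each anatomy term is ANDed with "database" and the related term "imaging"
query = " OR ".join(
    f'("{t}" AND "database" AND "imaging")' for t in anatomy_terms
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=100)
record = Entrez.read(handle)
handle.close()
print(f"{record['Count']} matching records; first IDs: {record['IdList'][:5]}")
```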
Affiliation(s)
- Magdalena Niestrata
- From the NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom (Niestrata, Allan); Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom (Radia, Allan); Data and Statistics Department, University of East London, London, United Kingdom (Jackson)
2. Kalaw FGP, Cavichini M, Zhang J, Wen B, Lin AC, Heinke A, Nguyen T, An C, Bartsch DUG, Cheng L, Freeman WR. Ultra-wide field and new wide field composite retinal image registration with AI-enabled pipeline and 3D distortion correction algorithm. Eye (Lond) 2024; 38:1189-1195. [PMID: 38114568] [PMCID: PMC11009222] [DOI: 10.1038/s41433-023-02868-3]
Abstract
PURPOSE This study aimed to compare a new artificial intelligence (AI) method with conventional mathematical warping in accurately overlaying peripheral retinal vessels from two different imaging devices: confocal scanning laser ophthalmoscope (cSLO) wide-field images and SLO ultra-wide field images. METHODS Images were captured using the Heidelberg Spectralis 55-degree field-of-view and Optos ultra-wide field devices. Conventional mathematical warping was performed using Random Sample Consensus-Sample and Consensus sets (RANSAC-SC). This was compared with an AI alignment algorithm based on a one-way forward registration procedure consisting of full Convolutional Neural Networks (CNNs) with Outlier Rejection (OR CNN), as well as an iterative 3D camera pose optimization process (OR CNN + Distortion Correction [DC]). Images were presented in a checkerboard pattern, and peripheral vessels were graded in four quadrants based on alignment to the adjacent box. RESULTS A total of 660 boxes were analysed from 55 eyes. Dice scores were compared between the three methods (RANSAC-SC/OR CNN/OR CNN + DC): 0.3341/0.4665/0.4784 for fold 1-2 and 0.3315/0.4494/0.4596 for fold 2-1 in composite images. The images composed using OR CNN + DC had a median rating of 4 (out of 5) versus 2 using RANSAC-SC. The odds of a higher grading level are 4.8 times higher using OR CNN + DC than RANSAC-SC (p < 0.0001). CONCLUSION Peripheral retinal vessel alignment performed better using the AI algorithm than RANSAC-SC. This may help improve co-localization of retinal anatomy and pathology.
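As a rough illustration of the kind of RANSAC feature-matching baseline the study compares against, and of the Dice overlap metric it reports, here is a sketch using OpenCV. This is not the authors' pipeline: the ORB detector, homography model, and mask names are stand-ins chosen for illustration.

```python
# A generic sketch of a RANSAC-style registration baseline: feature-based
# homography estimation with OpenCV, plus a Dice overlap score for
# binarized vessel masks. Not the authors' implementation.
import cv2
import numpy as np

def register_ransac(moving_gray: np.ndarray, fixed_gray: np.ndarray) -> np.ndarray:
    """Warp `moving_gray` onto `fixed_gray` using ORB features + RANSAC."""
    orb = cv2.ORB_create(5000)
    kp1, des1 = orb.detectAndCompute(moving_gray, None)
    kp2, des2 = orb.detectAndCompute(fixed_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # RANSAC rejects outliers
    h, w = fixed_gray.shape
    return cv2.warpPerspective(moving_gray, H, (w, h))

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice coefficient between two boolean vessel masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())
```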
Affiliation(s)
- Fritz Gerald P Kalaw
- Jacobs Retina Center, University of California, San Diego, CA, USA
- Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, CA, USA
- Division of Ophthalmology Informatics and Data Science, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, CA, USA
- Melina Cavichini
- Jacobs Retina Center, University of California, San Diego, CA, USA
- Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, CA, USA
- Junkang Zhang
- Department of Electrical and Computer Engineering, University of California, San Diego, CA, USA
- Bo Wen
- Department of Electrical and Computer Engineering, University of California, San Diego, CA, USA
- Andrew C Lin
- Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, CA, USA
- Anna Heinke
- Jacobs Retina Center, University of California, San Diego, CA, USA
- Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, CA, USA
- Truong Nguyen
- Department of Electrical and Computer Engineering, University of California, San Diego, CA, USA
- Cheolhong An
- Department of Electrical and Computer Engineering, University of California, San Diego, CA, USA
- Lingyun Cheng
- Jacobs Retina Center, University of California, San Diego, CA, USA
- William R Freeman
- Jacobs Retina Center, University of California, San Diego, CA, USA
- Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, CA, USA
- Division of Ophthalmology Informatics and Data Science, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, CA, USA
- Department of Electrical and Computer Engineering, University of California, San Diego, CA, USA
3. Tsai MC, Yen HH, Tsai HY, Huang YK, Luo YS, Kornelius E, Sung WW, Lin CC, Tseng MH, Wang CC. Artificial intelligence system for the detection of Barrett's esophagus. World J Gastroenterol 2023; 29:6198-6207. [PMID: 38186865] [PMCID: PMC10768395] [DOI: 10.3748/wjg.v29.i48.6198]
Abstract
BACKGROUND Barrett's esophagus (BE), which has increased in prevalence worldwide, is a precursor of esophageal adenocarcinoma. Although current research shows a gap between the detection rates of endoscopic BE and histological BE, we trained our artificial intelligence (AI) system on images of endoscopic BE and tested it on images of histological BE. AIM To assess whether an AI system can aid the detection of BE in our setting. METHODS Endoscopic narrow-band imaging (NBI) was collected from Chung Shan Medical University Hospital and Changhua Christian Hospital, yielding 724 cases, of which 86 patients had pathological results. Three senior endoscopists, who were instructing physicians of the Digestive Endoscopy Society of Taiwan, independently annotated the images in the development set to determine whether each image should be classified as endoscopic BE. The test set consisted of 160 endoscopic images from the 86 cases with histological results. RESULTS Six pre-trained models were compared, and EfficientNetV2B2 (accuracy [ACC]: 0.8) was selected as the backbone architecture for further evaluation owing to its better ACC. In the final test, the AI system correctly identified 66 of 70 cases of BE and 85 of 90 cases without BE, for an ACC of 94.37%. CONCLUSION Our AI system, trained on NBI of endoscopic BE, can adequately predict endoscopic images of histological BE. The ACC, sensitivity, and specificity were 94.37%, 94.29%, and 94.44%, respectively.
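The backbone selection described here can be sketched in Keras. The input resolution, frozen-backbone warm-up, and head layers below are assumptions for illustration, not the authors' training recipe.

```python
# A minimal sketch of a transfer-learning setup like the one described:
# an ImageNet-pretrained EfficientNetV2B2 backbone with a binary head for
# BE vs. non-BE. Hyperparameters are illustrative assumptions.
import tensorflow as tf

backbone = tf.keras.applications.EfficientNetV2B2(
    include_top=False, weights="imagenet",
    input_shape=(260, 260, 3), pooling="avg",
)
backbone.trainable = False  # warm-up phase: train only the new head

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(endoscopic BE)
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-3),
    loss="binary_crossentropy",
    metrics=[tf.keras.metrics.BinaryAccuracy(name="acc"),
             tf.keras.metrics.AUC(name="auc")],
)
model.summary()
```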
Affiliation(s)
- Ming-Chang Tsai
- Division of Gastroenterology and Hepatology, Department of Internal Medicine, Chung Shan Medical University Hospital, Taichung 402, Taiwan
- School of Medicine, Chung Shan Medical University, Taichung 402, Taiwan
- Hsu-Heng Yen
- Division of Gastroenterology, Changhua Christian Hospital, Changhua 500, Taiwan
- Artificial Intelligence Development Center, Changhua Christian Hospital, Changhua 500, Taiwan
- Department of Post-Baccalaureate Medicine, College of Medicine, National Chung Hsing University, Taichung 400, Taiwan
- Hui-Yu Tsai
- Department of Medical Informatics, Chung Shan Medical University, Taichung 402, Taiwan
- Yu-Kai Huang
- Department of Internal Medicine, Chung Shan Medical University Hospital, Taichung 402, Taiwan
- Yu-Sin Luo
- Department of Internal Medicine, Chung Shan Medical University Hospital, Taichung 402, Taiwan
- Edy Kornelius
- School of Medicine, Chung Shan Medical University, Taichung 402, Taiwan
- Department of Endocrinology and Metabolism, Chung-Shan Medical University Hospital, Taichung 402, Taiwan
- Wen-Wei Sung
- School of Medicine, Chung Shan Medical University, Taichung 402, Taiwan
- Department of Urology, Chung Shan Medical University Hospital, Taichung 402, Taiwan
- Chun-Che Lin
- Division of Gastroenterology and Hepatology, Department of Internal Medicine, Chung Shan Medical University Hospital, Taichung 402, Taiwan
- School of Medicine, Chung Shan Medical University, Taichung 402, Taiwan
- Ming-Hseng Tseng
- Department of Medical Informatics, Chung Shan Medical University, Taichung 402, Taiwan
- Information Technology Office, Chung Shan Medical University Hospital, Taichung 402, Taiwan
- Chi-Chih Wang
- Division of Gastroenterology and Hepatology, Department of Internal Medicine, Chung Shan Medical University Hospital, Taichung 402, Taiwan
- School of Medicine, Chung Shan Medical University, Taichung 402, Taiwan
4. Delavari P, Ozturan G, Yuan L, Yilmaz Ö, Oruc I. Artificial intelligence, explainability, and the scientific method: A proof-of-concept study on novel retinal biomarker discovery. PNAS Nexus 2023; 2:pgad290. [PMID: 37746328] [PMCID: PMC10517742] [DOI: 10.1093/pnasnexus/pgad290]
Abstract
We present a structured approach that combines explainability of artificial intelligence (AI) with the scientific method for scientific discovery. We demonstrate the utility of this approach in a proof-of-concept study in which we uncover biomarkers from a convolutional neural network (CNN) model trained to classify patient sex in retinal images, a trait that is not currently recognized by diagnosticians in retinal images yet is successfully classified by CNNs. Our methodology consists of four phases. In Phase 1, CNN development, we train a visual geometry group (VGG) model to recognize patient sex in retinal images. In Phase 2, Inspiration, we review visualizations obtained from post hoc interpretability tools to make observations and articulate exploratory hypotheses; here, we listed 14 hypotheses about retinal sex differences. In Phase 3, Exploration, we test all exploratory hypotheses on an independent dataset; nine of the 14 revealed significant differences. In Phase 4, Verification, we re-tested the nine flagged hypotheses on a new dataset. Five were verified, revealing (i) significantly greater length, (ii) more nodes, and (iii) more branches of retinal vasculature, (iv) greater retinal area covered by the vessels in the superior temporal quadrant, and (v) a darker peripapillary region in male eyes. Finally, we trained a group of ophthalmologists (N = 26) to recognize the novel retinal features for sex classification. While their pretraining performance was not different from chance level or from the performance of a nonexpert group (N = 31), after training their performance increased significantly (p < 0.001, d = 2.63). These findings showcase the potential for retinal biomarker discovery through CNN applications, with the added utility of empowering medical practitioners with new diagnostic capabilities to enhance their clinical toolkit.
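Phase 2 relies on post hoc interpretability visualizations. A minimal sketch of one such tool, a plain input-gradient saliency map for a Keras classifier, is shown below; the model file name and single-logit output are hypothetical, and this is not the authors' exact visualization tooling.

```python
# A sketch of an input-gradient saliency map for a VGG-style classifier,
# the kind of post hoc interpretability visualization reviewed in Phase 2.
import numpy as np
import tensorflow as tf

def saliency_map(model: tf.keras.Model, image: np.ndarray) -> np.ndarray:
    """Gradient magnitude of the predicted score w.r.t. input pixels."""
    x = tf.convert_to_tensor(image[None, ...], dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        score = model(x, training=False)[:, 0]  # e.g. P(male) for a sex model
    grads = tape.gradient(score, x)[0]
    sal = tf.reduce_max(tf.abs(grads), axis=-1)  # collapse color channels
    lo, hi = tf.reduce_min(sal), tf.reduce_max(sal)
    return ((sal - lo) / (hi - lo + 1e-8)).numpy()  # normalized heat map

# Usage (hypothetical): model = tf.keras.models.load_model("vgg_sex.keras")
# heat = saliency_map(model, fundus_image); overlay `heat` on the fundus image.
```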
Affiliation(s)
- Parsa Delavari
- Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, V5Z 0A6 BC, Canada
- Neuroscience, University of British Columbia, Djavad Mowafaghian Centre for Brain Health, Vancouver, V6T 1Z3 BC, Canada
- Gulcenur Ozturan
- Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, V5Z 0A6 BC, Canada
- Lei Yuan
- Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, V5Z 0A6 BC, Canada
- Özgür Yilmaz
- Mathematics, University of British Columbia, Vancouver, V6T 1Z2 BC, Canada
- Ipek Oruc
- Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, V5Z 0A6 BC, Canada
- Neuroscience, University of British Columbia, Djavad Mowafaghian Centre for Brain Health, Vancouver, V6T 1Z3 BC, Canada
5. Gu B, Sidhu S, Weinreb RN, Christopher M, Zangwill LM, Baxter SL. Review of Visualization Approaches in Deep Learning Models of Glaucoma. Asia Pac J Ophthalmol (Phila) 2023; 12:392-401. [PMID: 37523431] [DOI: 10.1097/apo.0000000000000619]
Abstract
Glaucoma is a major cause of irreversible blindness worldwide. As glaucoma often presents without symptoms, early detection and intervention are important in delaying progression. Deep learning (DL) has emerged as a rapidly advancing tool to help achieve these objectives. In this narrative review, data types and visualization approaches for presenting model predictions, including models based on tabular data, functional data, and/or structural data, are summarized, and the importance of data source diversity for improving the utility and generalizability of DL models is explored. Examples of innovative approaches to understanding predictions of artificial intelligence (AI) models and alignment with clinicians are provided. In addition, methods to enhance the interpretability of clinical features from tabular data used to train AI models are investigated. Examples of published DL models that include interfaces to facilitate end-user engagement and minimize cognitive and time burdens are highlighted. The stages of integrating AI models into existing clinical workflows are reviewed, and challenges are discussed. Reviewing these approaches may help inform the generation of user-friendly interfaces that are successfully integrated into clinical information systems. This review details key principles regarding visualization approaches in DL models of glaucoma. The articles reviewed here focused on usability, explainability, and promotion of clinician trust to encourage wider adoption for clinical use. These studies demonstrate important progress in addressing visualization and explainability issues required for successful real-world implementation of DL models in glaucoma.
Affiliation(s)
- Byoungyoung Gu
- Division of Ophthalmology Informatics and Data Science and Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, US
- Division of Biomedical Informatics, Department of Medicine, University of California San Diego, La Jolla, CA, US
- Sophia Sidhu
- Division of Ophthalmology Informatics and Data Science and Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, US
- Division of Biomedical Informatics, Department of Medicine, University of California San Diego, La Jolla, CA, US
- Robert N Weinreb
- Division of Ophthalmology Informatics and Data Science and Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, US
- Mark Christopher
- Division of Ophthalmology Informatics and Data Science and Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, US
- Linda M Zangwill
- Division of Ophthalmology Informatics and Data Science and Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, US
- Sally L Baxter
- Division of Ophthalmology Informatics and Data Science and Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California San Diego, La Jolla, CA, US
- Division of Biomedical Informatics, Department of Medicine, University of California San Diego, La Jolla, CA, US
6. Ren X, Feng W, Ran R, Gao Y, Lin Y, Fu X, Tao Y, Wang T, Wang B, Ju L, Chen Y, He L, Xi W, Liu X, Ge Z, Zhang M. Artificial intelligence to distinguish retinal vein occlusion patients using color fundus photographs. Eye (Lond) 2023; 37:2026-2032. [PMID: 36302974] [PMCID: PMC10333217] [DOI: 10.1038/s41433-022-02239-4]
Abstract
PURPOSE Our aim was to establish an AI model for distinguishing color fundus photographs (CFP) of retinal vein occlusion (RVO) patients from those of normal individuals. METHODS The training dataset included 2013 CFP from fellow eyes of RVO patients and 8536 age- and gender-matched normal CFP. Model performance was assessed in two independent testing datasets. We evaluated the performance of the AI model using the area under the receiver operating characteristic curve (AUC), accuracy, precision, specificity, sensitivity, and confusion matrices. We further explored the probable clinical relevance of the AI by extracting and comparing features of the retinal images. RESULTS In the training dataset, our model identified fundus images of RVO patients with an average AUC of 0.9866 (95% CI: 0.9805-0.9918), accuracy of 0.9534 (95% CI: 0.9421-0.9639), precision of 0.9123 (95% CI: 0.8784-0.9453), specificity of 0.9810 (95% CI: 0.9729-0.9884), and sensitivity of 0.8367 (95% CI: 0.7953-0.8756). In independent external dataset 1, the model achieved an AUC of 0.8102 (95% CI: 0.7979-0.8226), accuracy of 0.7752 (95% CI: 0.7633-0.7875), precision of 0.7041 (95% CI: 0.6873-0.7211), specificity of 0.6499 (95% CI: 0.6305-0.6679), and sensitivity of 0.9124 (95% CI: 0.9004-0.9241) for the RVO group. There were significant differences in retinal arteriovenous ratio, optic cup to optic disc ratio, and optic disc tilt angle (p = 0.001, p = 0.0001, and p = 0.0001, respectively) between the two groups in the training dataset. CONCLUSION We trained an AI model to classify color fundus photographs of RVO patients with stable performance in both internal and external datasets. This may be of great importance for risk prediction in patients with retinal vein occlusion.
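A nonparametric bootstrap over the test set is one standard way to obtain confidence intervals like those quoted above. A sketch with scikit-learn's roc_auc_score follows, run on toy data rather than the study's; the function and variable names are illustrative.

```python
# A sketch of percentile-bootstrap confidence intervals for AUC.
# `y_true`/`y_score` stand in for test labels and model probabilities.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def auc_with_ci(y_true, y_score, n_boot=2000, alpha=0.05):
    """Point AUC plus a percentile bootstrap (1 - alpha) CI."""
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    point = roc_auc_score(y_true, y_score)
    stats, n = [], len(y_true)
    while len(stats) < n_boot:
        idx = rng.integers(0, n, n)           # resample with replacement
        if len(np.unique(y_true[idx])) < 2:   # AUC needs both classes present
            continue
        stats.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return point, (lo, hi)

# Usage with toy data:
y = rng.integers(0, 2, 500)
p = np.clip(y * 0.6 + rng.normal(0.2, 0.25, 500), 0, 1)
print(auc_with_ci(y, p))
```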
Affiliation(s)
- Xiang Ren
- Department of Ophthalmology, Ophthalmic Laboratory, West China Hospital, Sichuan University, Chengdu, Sichuan, 610041, P. R. China
- Research Laboratory of Ophthalmology and Vision Sciences, State Key Laboratory of Biotherapy, West China Hospital, Sichuan University, Chengdu, Sichuan, 610041, P. R. China
- Wei Feng
- Beijing Airdoc Technology Co Ltd, Beijing, China
- Ruijin Ran
- Department of Ophthalmology, Ophthalmic Laboratory, West China Hospital, Sichuan University, Chengdu, Sichuan, 610041, P. R. China
- Minda Hospital of Hubei Minzu University, Enshi, China
- Yunxia Gao
- Department of Ophthalmology, Ophthalmic Laboratory, West China Hospital, Sichuan University, Chengdu, Sichuan, 610041, P. R. China
- Yu Lin
- Department of Ophthalmology, Ophthalmic Laboratory, West China Hospital, Sichuan University, Chengdu, Sichuan, 610041, P. R. China
- Research Laboratory of Ophthalmology and Vision Sciences, State Key Laboratory of Biotherapy, West China Hospital, Sichuan University, Chengdu, Sichuan, 610041, P. R. China
- Xiangyu Fu
- Department of Ophthalmology, Ophthalmic Laboratory, West China Hospital, Sichuan University, Chengdu, Sichuan, 610041, P. R. China
- Research Laboratory of Ophthalmology and Vision Sciences, State Key Laboratory of Biotherapy, West China Hospital, Sichuan University, Chengdu, Sichuan, 610041, P. R. China
- Yunhan Tao
- Department of Ophthalmology, Ophthalmic Laboratory, West China Hospital, Sichuan University, Chengdu, Sichuan, 610041, P. R. China
- Ting Wang
- Department of Ophthalmology, Ophthalmic Laboratory, West China Hospital, Sichuan University, Chengdu, Sichuan, 610041, P. R. China
- Research Laboratory of Ophthalmology and Vision Sciences, State Key Laboratory of Biotherapy, West China Hospital, Sichuan University, Chengdu, Sichuan, 610041, P. R. China
- Bin Wang
- Beijing Airdoc Technology Co Ltd, Beijing, China
- Lie Ju
- Beijing Airdoc Technology Co Ltd, Beijing, China
- ECSE, Faculty of Engineering, Monash University, Melbourne, VIC, Australia
- Yuzhong Chen
- Beijing Airdoc Technology Co Ltd, Beijing, China
- Lanqing He
- Beijing Airdoc Technology Co Ltd, Beijing, China
- Wu Xi
- Chengdu Ikangguobin Health Examination Center Ltd, Chengdu, China
- Xiaorong Liu
- Chengdu Ikangguobin Health Examination Center Ltd, Chengdu, China
- Zongyuan Ge
- ECSE, Faculty of Engineering, Monash University, Melbourne, VIC, Australia
- eResearch Centre, Monash University, Melbourne, VIC, Australia
- Ming Zhang
- Department of Ophthalmology, Ophthalmic Laboratory, West China Hospital, Sichuan University, Chengdu, Sichuan, 610041, P. R. China
7. Cui C, Yang H, Wang Y, Zhao S, Asad Z, Coburn LA, Wilson KT, Landman BA, Huo Y. Deep multimodal fusion of image and non-image data in disease diagnosis and prognosis: a review. Prog Biomed Eng (Bristol) 2023; 5:10.1088/2516-1091/acc2fe. [PMID: 37360402] [PMCID: PMC10288577] [DOI: 10.1088/2516-1091/acc2fe]
Abstract
The rapid development of diagnostic technologies in healthcare is placing higher demands on physicians to handle and integrate the heterogeneous, yet complementary, data produced during routine practice. For instance, personalized diagnosis and treatment planning for a single cancer patient relies on various images (e.g. radiology, pathology, and camera images) and non-image data (e.g. clinical and genomic data). However, such decision-making procedures can be subjective, qualitative, and subject to large inter-subject variability. With recent advances in multimodal deep learning technologies, an increasingly large number of efforts have been devoted to a key question: how do we extract and aggregate multimodal information to ultimately provide more objective, quantitative computer-aided clinical decision making? This paper reviews recent studies addressing this question. Briefly, the review includes (a) an overview of current multimodal learning workflows, (b) a summary of multimodal fusion methods, (c) a discussion of performance, (d) applications in disease diagnosis and prognosis, and (e) challenges and future directions.
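As a concrete instance of the fusion workflows such reviews survey, here is a sketch of feature-level (intermediate) fusion of an image branch and a tabular branch in Keras. Input shapes, layer sizes, and the prognosis output are illustrative assumptions, not a specific model from the review.

```python
# A sketch of feature-level (intermediate) multimodal fusion: a CNN image
# branch concatenated with an MLP branch for tabular clinical data.
import tensorflow as tf
from tensorflow.keras import layers

image_in = tf.keras.Input(shape=(224, 224, 3), name="image")
x = tf.keras.applications.ResNet50(include_top=False, weights=None,
                                   pooling="avg")(image_in)

tabular_in = tf.keras.Input(shape=(16,), name="clinical")  # e.g. labs, age
t = layers.Dense(64, activation="relu")(tabular_in)

fused = layers.Concatenate()([x, t])          # feature-level fusion point
fused = layers.Dense(128, activation="relu")(fused)
out = layers.Dense(1, activation="sigmoid", name="prognosis")(fused)

model = tf.keras.Model([image_in, tabular_in], out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.summary()
```

Early fusion (concatenating raw inputs) and late fusion (averaging per-modality predictions) follow the same pattern with the concatenation moved earlier or later.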
Affiliation(s)
- Can Cui
- Department of Computer Science, Vanderbilt University, Nashville, TN 37235, United States of America
- Haichun Yang
- Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, TN 37215, United States of America
- Yaohong Wang
- Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, TN 37215, United States of America
- Shilin Zhao
- Department of Biostatistics, Vanderbilt University Medical Center, Nashville, TN 37215, United States of America
- Zuhayr Asad
- Department of Computer Science, Vanderbilt University, Nashville, TN 37235, United States of America
- Lori A Coburn
- Division of Gastroenterology, Hepatology, and Nutrition, Department of Medicine, Vanderbilt University Medical Center, Nashville, TN 37232, United States of America
- Veterans Affairs Tennessee Valley Healthcare System, Nashville, TN 37212, United States of America
- Keith T Wilson
- Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, TN 37215, United States of America
- Division of Gastroenterology, Hepatology, and Nutrition, Department of Medicine, Vanderbilt University Medical Center, Nashville, TN 37232, United States of America
- Veterans Affairs Tennessee Valley Healthcare System, Nashville, TN 37212, United States of America
- Bennett A Landman
- Department of Computer Science, Vanderbilt University, Nashville, TN 37235, United States of America
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN 37235, United States of America
- Yuankai Huo
- Department of Computer Science, Vanderbilt University, Nashville, TN 37235, United States of America
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN 37235, United States of America
8. Dorali P, Shahmoradi Z, Weng CY, Lee T. Cost-effectiveness Analysis of a Personalized, Teleretinal-Inclusive Screening Policy for Diabetic Retinopathy via Markov Modeling. Ophthalmol Retina 2023:S2468-6530(23)00001-5. [PMID: 36621610] [DOI: 10.1016/j.oret.2023.01.001]
Abstract
PURPOSE Although teleretinal imaging has proved effective in increasing population-level screening for diabetic retinopathy (DR), there is a lack of quantitative understanding of how to incorporate teleretinal imaging into existing screening guidelines. We developed a mathematical model to determine personalized DR screening recommendations that incorporate teleretinal imaging and evaluated the cost-effectiveness of the resulting personalized screening policy. DESIGN A partially observable Markov decision process was employed to determine personalized screening recommendations based on patient compliance, willingness to pay, and A1C level. Deterministic sensitivity analysis was conducted to evaluate the impact of patient-specific factors on the personalized screening policy. The cost-effectiveness of the identified screening policies was evaluated via hidden-Markov chain Monte Carlo simulation on a data-based hypothetical cohort. PARTICIPANTS Screening policies were simulated for a hypothetical cohort of 500 000 patients, with parameters based on the literature and on electronic medical records of 2457 patients who received teleretinal imaging from 2013 to 2020 in the Harris Health System. METHODS Population-based mathematical modeling study. Interventions included dilated fundus examinations (referred to as clinical screening), teleretinal imaging, and wait-and-watch recommendations. MAIN OUTCOME MEASURES Personalized screening recommendations based on patient-specific factors; accumulated quality-adjusted life-years (QALYs) and cost (USD) per patient under different screening policies; and the incremental cost-effectiveness ratio (ICER) used to compare policies. RESULTS For the base cohort, teleretinal imaging was recommended 86.7% of the time, on average, over each patient's lifetime. The model-based personalized policy dominated the other standardized policies, generating more QALY gains and cost savings for at least 57% of the base cohort. Similar outcomes were observed in sensitivity analyses of the base cohort and in the Harris Health-specific cohort and rural-population scenario analyses. CONCLUSIONS A mathematical model was developed as a decision support tool to identify a personalized screening policy that incorporates both teleretinal imaging and clinical screening and adapts to patient characteristics. Compared with current standardized policies, the model-based policy significantly reduces costs while performing comparably, if not better, in terms of QALY gain. A personalized approach to DR screening has significant potential benefits that warrant further exploration. FINANCIAL DISCLOSURE(S) Proprietary or commercial disclosure may be found after the references.
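The dominance and ICER logic used to compare policies reduces to simple arithmetic. A toy sketch follows; all cost and QALY figures are invented for illustration and bear no relation to the study's results.

```python
# A toy sketch of cost-effectiveness comparison between two screening
# policies: ICER = extra dollars per extra QALY, with a dominance check.
# All numbers below are made up for illustration only.
from dataclasses import dataclass

@dataclass
class PolicyOutcome:
    name: str
    cost_usd: float    # mean discounted lifetime cost per patient
    qalys: float       # mean discounted quality-adjusted life-years

def icer(new: PolicyOutcome, ref: PolicyOutcome) -> float:
    """Extra dollars paid per extra QALY gained by `new` over `ref`."""
    return (new.cost_usd - ref.cost_usd) / (new.qalys - ref.qalys)

standardized = PolicyOutcome("annual clinical screening", 9800.0, 14.20)
personalized = PolicyOutcome("model-based personalized", 9100.0, 14.26)

delta_cost = personalized.cost_usd - standardized.cost_usd
delta_qaly = personalized.qalys - standardized.qalys
if delta_cost <= 0 and delta_qaly >= 0:
    # Cheaper AND at least as effective: "dominates", no ICER needed.
    print(f"{personalized.name} dominates (saves ${-delta_cost:.0f}, "
          f"gains {delta_qaly:.3f} QALYs)")
else:
    print(f"ICER = ${icer(personalized, standardized):,.0f} per QALY")
```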
Affiliation(s)
- Poria Dorali
- Department of Industrial Engineering, University of Houston, Houston, Texas
- Zahed Shahmoradi
- Center for Health Services Research, Department of Management, Policy, and Community Health, UTHealth School of Public Health, Houston, Texas
- Christina Y Weng
- Department of Ophthalmology, Ben Taub Hospital, Houston, Texas; Department of Ophthalmology, Baylor College of Medicine, Houston, Texas
- Taewoo Lee
- Department of Industrial Engineering, University of Pittsburgh, Pittsburgh, Pennsylvania
9. Wongchaisuwat P, Thamphithak R, Jitpukdee P, Wongchaisuwat N. Application of Deep Learning for Automated Detection of Polypoidal Choroidal Vasculopathy in Spectral Domain Optical Coherence Tomography. Transl Vis Sci Technol 2022; 11:16. [PMID: 36219163] [PMCID: PMC9580222] [DOI: 10.1167/tvst.11.10.16]
Abstract
Objective To develop an automated polypoidal choroidal vasculopathy (PCV) screening model to distinguish PCV from wet age-related macular degeneration (wet AMD). Methods A retrospective review of spectral domain optical coherence tomography (SD-OCT) images was undertaken. The included SD-OCT images were classified into two distinct categories (PCV or wet AMD) prior to development of the PCV screening model. Automated detection of PCV using the developed model was compared with the results of gold-standard fundus fluorescein angiography plus indocyanine green (FFA + ICG) angiography. The SHapley Additive exPlanations (SHAP) framework was used to interpret the model's results. Results A total of 2334 SD-OCT images were used for training, and an additional 1171 SD-OCT images were used for external validation. The ResNet attention model yielded superior performance, with average area under the curve values of 0.80 and 0.81 for the training and external validation datasets, respectively. The sensitivity/specificity calculated at the patient level was 100%/60% for the training dataset and 85%/71% for the external validation dataset. Conclusions A conventional FFA + ICG investigation to differentiate PCV from wet AMD requires intensive healthcare resources and adversely affects patients. A deep learning algorithm is proposed to automatically distinguish PCV from wet AMD. The developed algorithm exhibited promising performance for further development into an alternative PCV screening tool. Enhancement of the model's performance with additional data is needed prior to implementation of this diagnostic tool in real-world clinical practice. The invisibility of disease signs within SD-OCT images is the main limitation of the proposed model. Translational Relevance Deep learning algorithms were applied to differentiate PCV from wet AMD on OCT images, supporting the diagnostic process and minimizing the risks of ICG angiography.
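Patient-level sensitivity and specificity, as reported here, require aggregating image-level predictions per patient. The sketch below assumes a max-probability aggregation rule, which is one common choice but not necessarily the authors'; all names and the toy data are illustrative.

```python
# A sketch of patient-level evaluation: image-level probabilities are
# pooled per patient (max rule assumed) before computing sensitivity
# and specificity.
from collections import defaultdict

def patient_level_metrics(records, threshold=0.5):
    """records: iterable of (patient_id, true_label, image_probability)."""
    probs, truth = defaultdict(list), {}
    for pid, label, p in records:
        probs[pid].append(p)
        truth[pid] = label
    tp = fp = tn = fn = 0
    for pid, ps in probs.items():
        pred = max(ps) >= threshold   # patient positive if any image crosses
        if truth[pid]:
            tp, fn = (tp + 1, fn) if pred else (tp, fn + 1)
        else:
            fp, tn = (fp + 1, tn) if pred else (fp, tn + 1)
    return tp / (tp + fn), tn / (tn + fp)   # (sensitivity, specificity)

# Toy usage: two PCV patients and one wet-AMD patient, several images each.
data = [("a", 1, 0.9), ("a", 1, 0.3), ("b", 1, 0.6), ("c", 0, 0.2), ("c", 0, 0.4)]
print(patient_level_metrics(data))  # -> (1.0, 1.0) on this toy set
```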
Affiliation(s)
- Papis Wongchaisuwat
- Department of Industrial Engineering, Faculty of Engineering, Kasetsart University, Bangkok, Thailand
- Ranida Thamphithak
- Department of Ophthalmology, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, Thailand
- Peerakarn Jitpukdee
- Department of Industrial Engineering, Faculty of Engineering, Kasetsart University, Bangkok, Thailand
- Nida Wongchaisuwat
- Department of Ophthalmology, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, Thailand
10. Ittoop SM, Jaccard N, Lanouette G, Kahook MY. The Role of Artificial Intelligence in the Diagnosis and Management of Glaucoma. J Glaucoma 2022; 31:137-146. [PMID: 34930873] [DOI: 10.1097/ijg.0000000000001972]
Abstract
Glaucomatous optic neuropathy is the leading cause of irreversible blindness worldwide. Diagnosis and monitoring of the disease involve integrating information from the clinical examination with subjective data from visual field testing and objective biometric data, including pachymetry, corneal hysteresis, and optic nerve and retinal imaging. This intricate process is further complicated by the lack of clear definitions for the presence and progression of glaucomatous optic neuropathy, which makes it vulnerable to clinician interpretation error. Artificial intelligence (AI) and AI-enabled workflows have been proposed as a plausible solution. Applications derived from this field of computer science can improve the quality and robustness of insights obtained from clinical data and enhance the clinician's approach to patient care. This review clarifies key terms and concepts used in the AI literature, discusses the current advances of AI in glaucoma, elucidates the clinical advantages of and challenges to implementing this technology, and highlights potential future applications.
Affiliation(s)
- Sabita M Ittoop
- The George Washington University Medical Faculty Associates, Washington, DC
- Malik Y Kahook
- Sue Anschutz-Rodgers Eye Center, The University of Colorado School of Medicine, Aurora, CO
11. Wang C, Calle P, Tran Ton NB, Zhang Z, Yan F, Donaldson AM, Bradley NA, Yu Z, Fung KM, Pan C, Tang Q. Deep-learning-aided forward optical coherence tomography endoscope for percutaneous nephrostomy guidance. Biomed Opt Express 2021; 12:2404-2418. [PMID: 33996237] [PMCID: PMC8086467] [DOI: 10.1364/boe.421299]
Abstract
Percutaneous renal access is the critical initial step in many medical settings. To obtain the best surgical outcome with minimal patient morbidity, an improved method for access to the renal calyx is needed. In our study, we built a forward-view optical coherence tomography (OCT) endoscopic system for percutaneous nephrostomy (PCN) guidance. Porcine kidneys were imaged to demonstrate the feasibility of the imaging system. Three tissue types of porcine kidneys (renal cortex, medulla, and calyx) could be clearly distinguished in the OCT endoscopic images owing to their morphological and tissue differences. To further improve guidance efficacy and reduce the learning burden on clinicians, a deep-learning-based computer-aided diagnosis platform was developed to automatically classify the OCT images by renal tissue type. Convolutional neural networks (CNN) were developed with labeled OCT images based on the ResNet34, MobileNetv2, and ResNet50 architectures. Nested cross-validation and testing were used to benchmark the classification performance with uncertainty quantification over 10 kidneys, demonstrating robust performance over substantial biological variability among kidneys. ResNet50-based CNN models achieved an average classification accuracy of 82.6%±3.0%. The classification precisions were 79%±4% for cortex, 85%±6% for medulla, and 91%±5% for calyx, and the classification recalls were 68%±11% for cortex, 91%±4% for medulla, and 89%±3% for calyx. Interpretation of the CNN predictions showed the discriminative characteristics of the three renal tissue types in the OCT images. The results validated the technical feasibility of using this novel imaging platform to automatically recognize the renal tissue structures ahead of the PCN needle during PCN surgery.
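Benchmarking "over 10 kidneys" implies keeping all images from one kidney in the same fold during cross-validation, so performance reflects generalization to unseen kidneys. A sketch with scikit-learn's GroupKFold follows; the synthetic features and the logistic regression (standing in for the CNN) are illustrative assumptions, and the nested inner loop is only indicated.

```python
# A sketch of kidney-grouped cross-validation: GroupKFold keeps each
# kidney's images together, mimicking evaluation on unseen kidneys.
# A logistic regression stands in for the CNN for brevity.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)
n_images = 1000
X = rng.random((n_images, 64))                 # stand-in OCT image features
y = rng.integers(0, 3, n_images)               # 0=cortex, 1=medulla, 2=calyx
kidney_id = rng.integers(0, 10, n_images)      # 10 kidneys

outer = GroupKFold(n_splits=5)
accs = []
for train_idx, test_idx in outer.split(X, y, groups=kidney_id):
    # Inner loop (omitted): tune hyperparameters with another GroupKFold
    # restricted to the training kidneys, then refit the chosen model.
    model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    accs.append((model.predict(X[test_idx]) == y[test_idx]).mean())
print(f"accuracy: {np.mean(accs):.3f} ± {np.std(accs):.3f}")
```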
Affiliation(s)
- Chen Wang
- Stephenson School of Biomedical Engineering, University of Oklahoma, Norman, OK 73072, USA
- These authors contributed equally to this work
- Paul Calle
- Stephenson School of Biomedical Engineering, University of Oklahoma, Norman, OK 73072, USA
- These authors contributed equally to this work
- Nu Bao Tran Ton
- Stephenson School of Biomedical Engineering, University of Oklahoma, Norman, OK 73072, USA
- Zuyuan Zhang
- School of Computer Science, University of Oklahoma, Norman, OK 73072, USA
- Feng Yan
- Stephenson School of Biomedical Engineering, University of Oklahoma, Norman, OK 73072, USA
- Anthony M Donaldson
- Stephenson School of Biomedical Engineering, University of Oklahoma, Norman, OK 73072, USA
- Nathan A Bradley
- Department of Urology, University of Oklahoma Health Sciences Center, Oklahoma City, OK 73104, USA
- Zhongxin Yu
- Children's Hospital, University of Oklahoma Health Sciences Center, Oklahoma City, OK 73104, USA
- Kar-Ming Fung
- Department of Pathology, University of Oklahoma Health Sciences Center, Oklahoma City, OK 73104, USA
- Stephenson Cancer Center, University of Oklahoma Health Sciences Center, Oklahoma City, OK 73104, USA
- Chongle Pan
- School of Computer Science, University of Oklahoma, Norman, OK 73072, USA
- Qinggong Tang
- Stephenson School of Biomedical Engineering, University of Oklahoma, Norman, OK 73072, USA
12. Campbell CG, Ting DSW, Keane PA, Foster PJ. The potential application of artificial intelligence for diagnosis and management of glaucoma in adults. Br Med Bull 2020; 134:21-33. [PMID: 32518944] [DOI: 10.1093/bmb/ldaa012]
Abstract
BACKGROUND Glaucoma is the most frequent cause of irreversible blindness worldwide. There is no cure, but early detection and treatment can slow the progression and prevent loss of vision. It has been suggested that artificial intelligence (AI) has potential application for detection and management of glaucoma. SOURCES OF DATA This literature review is based on articles published in peer-reviewed journals. AREAS OF AGREEMENT There have been significant advances in both AI and imaging techniques that are able to identify the early signs of glaucomatous damage. Machine and deep learning algorithms show capabilities equivalent to human experts, if not superior. AREAS OF CONTROVERSY Concerns that the increased reliance on AI may lead to deskilling of clinicians. GROWING POINTS AI has potential to be used in virtual review clinics, telemedicine and as a training tool for junior doctors. Unsupervised AI techniques offer the potential of uncovering currently unrecognized patterns of disease. If this promise is fulfilled, AI may then be of use in challenging cases or where a second opinion is desirable. AREAS TIMELY FOR DEVELOPING RESEARCH There is a need to determine the external validity of deep learning algorithms and to better understand how the 'black box' paradigm reaches results.
Collapse
Affiliation(s)
- Cara G Campbell
- UCL Institute of Ophthalmology, Faculty of Brain Science, University College London, 11-43 Bath Street, London EC1V 9EL, UK
- Daniel S W Ting
- Medical Retina Service, Moorfields Eye Hospital NHS Foundation Trust, 162 City Road, London EC1V 2PD, UK
- Pearse A Keane
- UCL Institute of Ophthalmology, Faculty of Brain Science, University College London, 11-43 Bath Street, London EC1V 9EL, UK
- Medical Retina Service, Moorfields Eye Hospital NHS Foundation Trust, 162 City Road, London EC1V 2PD, UK
- National Institute for Health Research Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, 2/12 Wolfson Building and UCL Institute of Ophthalmology, 11-43 Bath Street, London EC1V 9EL, UK
- Paul J Foster
- UCL Institute of Ophthalmology, Faculty of Brain Science, University College London, 11-43 Bath Street, London EC1V 9EL, UK
- Medical Retina Service, Moorfields Eye Hospital NHS Foundation Trust, 162 City Road, London EC1V 2PD, UK
- National Institute for Health Research Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, 2/12 Wolfson Building and UCL Institute of Ophthalmology, 11-43 Bath Street, London EC1V 9EL, UK