1
Zhang Q, Zhang P, Chen N, Zhu Z, Li W, Wang Q. Trends and hotspots in the field of diabetic retinopathy imaging research from 2000-2023. Front Med (Lausanne) 2024; 11:1481088. PMID: 39444814; PMCID: PMC11496202; DOI: 10.3389/fmed.2024.1481088.
Abstract
Background Diabetic retinopathy (DR) poses a major threat to diabetic patients' vision and is a critical public health issue. Imaging applications for DR have grown since the start of the 21st century, aiding diagnosis, grading, and screening. This study uses bibliometric analysis to assess the field's advancements and key areas of interest. Methods This study performed a bibliometric analysis of DR imaging articles collected from the Web of Science Core Collection database between January 1st, 2000, and December 31st, 2023. The literature information was then analyzed through CiteSpace. Results The United States and China led in the number of publications, with 719 and 609, respectively. The University of London topped the institution list with 139 papers. Tien Yin Wong was the most prolific researcher. Invest. Ophthalmol. Vis. Sci. published the most articles (105). Notable burst keywords included "deep learning" and "artificial intelligence." Conclusion The United States is at the forefront of DR research, with the University of London as the top institution and Invest. Ophthalmol. Vis. Sci. as the most published journal. Tien Yin Wong is the most influential researcher. Hotspots such as "deep learning" and "artificial intelligence" have risen significantly, indicating artificial intelligence's growing role in DR imaging.
Affiliation(s)
- Qing Zhang
- The Third Affiliated Hospital of Xinxiang Medical University, Xinxiang Medical University, Xinxiang, China
- Ping Zhang
- Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Naimei Chen
- Department of Ophthalmology, Huaian Hospital of Huaian City, Huaian, China
- Zhentao Zhu
- Department of Ophthalmology, Huaian Hospital of Huaian City, Huaian, China
- Wangting Li
- Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Qiang Wang
- Department of Ophthalmology, Third Affiliated Hospital, Wenzhou Medical University, Zhejiang, China
2
Savoy FM, Rao DP, Toh JK, Ong B, Sivaraman A, Sharma A, Das T. Empowering Portable Age-Related Macular Degeneration Screening: Evaluation of a Deep Learning Algorithm for a Smartphone Fundus Camera. BMJ Open 2024; 14:e081398. PMID: 39237272; PMCID: PMC11381639; DOI: 10.1136/bmjopen-2023-081398.
Abstract
OBJECTIVES Despite global research on early detection of age-related macular degeneration (AMD), not enough is being done for large-scale screening. Automated analysis of retinal images captured via smartphone presents a potential solution; however, to our knowledge, such an artificial intelligence (AI) system has not been evaluated. The study aimed to assess the performance of an AI algorithm in detecting referable AMD on images captured on a portable fundus camera. DESIGN, SETTING A retrospective image database from the Age-Related Eye Disease Study (AREDS) and the target device was used. PARTICIPANTS The algorithm was trained on two distinct data sets with macula-centric images: initially on 108,251 images (55% referable AMD) from AREDS and then fine-tuned on 1108 images (33% referable AMD) captured on Asian eyes using the target device. The model was designed to indicate the presence of referable AMD (intermediate and advanced AMD). Following the first training step, the test set consisted of 909 images (49% referable AMD). For the fine-tuning step, the test set consisted of 238 images (34% referable AMD). The reference standard for the AREDS data set was fundus image grading by the central reading centre, and for the target device, it was consensus image grading by specialists. OUTCOME MEASURES Area under the receiver operating characteristic curve (AUC), sensitivity and specificity of the algorithm. RESULTS Before fine-tuning, the deep learning (DL) algorithm exhibited a test set (from AREDS) sensitivity of 93.48% (95% CI: 90.8% to 95.6%), specificity of 82.33% (95% CI: 78.6% to 85.7%) and AUC of 0.965 (95% CI: 0.95 to 0.98). After fine-tuning, the DL algorithm displayed a test set (from the target device) sensitivity of 91.25% (95% CI: 82.8% to 96.4%), specificity of 84.18% (95% CI: 77.5% to 89.5%) and AUC of 0.947 (95% CI: 0.911 to 0.982). CONCLUSION The DL algorithm shows promising results in detecting referable AMD from a portable smartphone-based imaging system.
This approach can potentially bring effective and affordable AMD screening to underserved areas.
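The outcome measures above (sensitivity, specificity, AUC) can be sketched as follows. The per-image probabilities, labels, and the 0.5 cut-off are illustrative stand-ins, not the study's data:

```python
# Compute sensitivity, specificity, and AUC from hypothetical referable-AMD scores.
def confusion_counts(y_true, y_prob, threshold=0.5):
    tp = sum(1 for y, p in zip(y_true, y_prob) if y == 1 and p >= threshold)
    fn = sum(1 for y, p in zip(y_true, y_prob) if y == 1 and p < threshold)
    tn = sum(1 for y, p in zip(y_true, y_prob) if y == 0 and p < threshold)
    fp = sum(1 for y, p in zip(y_true, y_prob) if y == 0 and p >= threshold)
    return tp, fn, tn, fp

def auc(y_true, y_prob):
    # probability that a random positive is ranked above a random negative
    pos = [p for y, p in zip(y_true, y_prob) if y == 1]
    neg = [p for y, p in zip(y_true, y_prob) if y == 0]
    wins = sum(1.0 if pp > pn else 0.5 if pp == pn else 0.0
               for pp in pos for pn in neg)
    return wins / (len(pos) * len(neg))

y_true = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]   # 1 = referable AMD on the reference grading
y_prob = [0.9, 0.8, 0.75, 0.4, 0.85, 0.3, 0.2, 0.6, 0.1, 0.05]

tp, fn, tn, fp = confusion_counts(y_true, y_prob)
sensitivity = tp / (tp + fn)   # true-positive rate at the chosen cut-off
specificity = tn / (tn + fp)   # true-negative rate at the chosen cut-off
print(sensitivity, specificity, auc(y_true, y_prob))
```

Note that sensitivity and specificity depend on the cut-off, while the AUC summarizes ranking quality across all cut-offs, which is why studies like this one report all three.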
Affiliation(s)
- Jun Kai Toh
- Medios Technologies, Remidio Innovative Solutions, Singapore
- Bryan Ong
- Medios Technologies, Remidio Innovative Solutions, Singapore
- Anand Sivaraman
- Remidio Innovative Solutions Pvt Ltd, Bangalore, Karnataka, India
- Ashish Sharma
- Lotus Eye Care Hospital and Institute, Coimbatore, Tamil Nadu, India
3
Li Z, Wang Y, Chen K, Qiang W, Zong X, Ding K, Wang S, Yin S, Jiang J, Chen W. Promoting smartphone-based keratitis screening using meta-learning: A multicenter study. J Biomed Inform 2024; 157:104722. PMID: 39244181; DOI: 10.1016/j.jbi.2024.104722.
Abstract
OBJECTIVE Keratitis is the primary cause of corneal blindness worldwide. Prompt identification and referral of patients with keratitis are fundamental measures to improve patient prognosis. Although deep learning can assist ophthalmologists in automatically detecting keratitis through a slit lamp camera, remote and underserved areas often lack this professional equipment. Smartphones, a widely available device, have recently been found to have potential in keratitis screening. However, given the limited data available from smartphones, employing traditional deep learning algorithms to construct a robust intelligent system presents a significant challenge. This study aimed to propose a meta-learning framework, cosine nearest centroid-based metric learning (CNCML), for developing a smartphone-based keratitis screening model in the case of insufficient smartphone data by leveraging the prior knowledge acquired from slit-lamp photographs. METHODS We developed and assessed CNCML based on 13,009 slit-lamp photographs and 4,075 smartphone photographs that were obtained from 3 independent clinical centers. To mimic real-world scenarios with various degrees of sample scarcity, we used training sets of different sizes (0 to 20 photographs per class) from the HUAWEI smartphone to train CNCML. We evaluated the performance of CNCML not only on an internal test dataset but also on two external datasets that were collected by two different brands of smartphones (VIVO and XIAOMI) in another clinical center. Furthermore, we compared the performance of CNCML with that of traditional deep learning models on these smartphone datasets. The accuracy and macro-average area under the curve (macro-AUC) were utilized to evaluate the performance of models. RESULTS With merely 15 smartphone photographs per class used for training, CNCML reached accuracies of 84.59%, 83.15%, and 89.99% on three smartphone datasets, with corresponding macro-AUCs of 0.96, 0.95, and 0.98, respectively. 
The accuracies of CNCML on these datasets were 0.56% to 9.65% higher than those of the most competitive traditional deep learning models. CONCLUSIONS CNCML exhibited fast learning capabilities, attaining remarkable performance with a small number of training samples. This approach presents a potential solution for transitioning intelligent keratitis detection from professional devices (e.g., slit-lamp cameras) to more ubiquitous devices (e.g., smartphones), making keratitis screening more convenient and effective.
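The core metric-learning idea behind CNCML, as the abstract describes it, can be sketched as a cosine nearest-centroid classifier: each class is represented by the centroid of its few support examples, and a query is assigned to the class with the most similar centroid. The toy 2-D embeddings below are illustrative; in the study they would come from a network pre-trained on slit-lamp photographs.

```python
import numpy as np

def cosine_nearest_centroid(support, labels, query):
    classes = sorted(set(labels))
    # one centroid per class, averaged over the few support examples available
    centroids = np.stack([support[np.array(labels) == c].mean(axis=0) for c in classes])
    centroids /= np.linalg.norm(centroids, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    sims = centroids @ q                      # cosine similarity to each centroid
    return classes[int(np.argmax(sims))]

support = np.array([[1.0, 0.1], [0.9, 0.0],   # class 0 (e.g. normal cornea)
                    [0.0, 1.0], [0.1, 0.9]])  # class 1 (e.g. keratitis)
labels = [0, 0, 1, 1]
print(cosine_nearest_centroid(support, labels, np.array([0.2, 0.8])))
```

Because the classifier only needs a centroid per class, a handful of smartphone photographs per class suffices to adapt it, which is the few-shot property the study exploits.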
Affiliation(s)
- Zhongwen Li
- Ningbo Key Laboratory of Medical Research on Blinding Eye Diseases, Ningbo Eye Institute, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315040, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China.
- Yangyang Wang
- Ningbo Key Laboratory of Medical Research on Blinding Eye Diseases, Ningbo Eye Institute, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315040, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
- Kuan Chen
- Department of Ophthalmology, Cangnan Hospital, Wenzhou Medical University, Wenzhou 325000, China
- Wei Qiang
- Ningbo Key Laboratory of Medical Research on Blinding Eye Diseases, Ningbo Eye Institute, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315040, China
- Xihang Zong
- Ningbo Key Laboratory of Medical Research on Blinding Eye Diseases, Ningbo Eye Institute, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315040, China
- Ke Ding
- School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an 710121, China
- Shihong Wang
- Ningbo Key Laboratory of Medical Research on Blinding Eye Diseases, Ningbo Eye Institute, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315040, China
- Shiqi Yin
- Ningbo Key Laboratory of Medical Research on Blinding Eye Diseases, Ningbo Eye Institute, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315040, China
- Jiewei Jiang
- School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an 710121, China
- Wei Chen
- Ningbo Key Laboratory of Medical Research on Blinding Eye Diseases, Ningbo Eye Institute, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315040, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
4
Su C, Wang Z, Dong X, Ma X. Experiences of seeking diabetic eye care among patients with diabetes in China: a community-based convergent mixed methods study. Public Health 2024; 234:24-32. PMID: 38936116; DOI: 10.1016/j.puhe.2024.05.021.
Abstract
OBJECTIVES This study aimed to characterize the most updated utilization of eye care services and obtain a holistic understanding of barriers among patients with diabetes in China. STUDY DESIGN This was a convergent mixed methods study. METHODS A convergent triangulation mixed methods approach was used, with a quantitative cross-sectional survey of patients with diabetes and semistructured interviews involving patients and health workers. Following the conceptual framework of the World Health Organization Determinants of Health Behaviours, multivariate logistic regression for quantitative analysis and thematic analysis for qualitative data were used to examine barriers to seeking eye care among patients with diabetes. Triangulation was used to integrate quantitative and qualitative results. RESULTS Among 1167 surveyed patients who participated in the quantitative component, 29.1% had undergone eye examinations within the last 12 months, and 9.3% had received eye surgery. Awareness that diabetes causes eye diseases (P < 0.001) and knowing laser treatment can treat diabetic retinopathy (DR; P < 0.001) were associated with higher examination rates. In the qualitative component, involving 20 patients and 11 health workers, barriers were identified from individual, social, and cultural environmental factors. Integration of data highlighted the complex interplay of these factors in shaping care-seeking behaviors and the importance of non-economic factors, including patients' information about costs of DR services and cultural environmental factors. CONCLUSIONS Diabetic eye care utilization remains suboptimal in China, emphasizing the impact of cultural and contextual factors. Comprehensive education strategies, along with training for primary health workers and task-shifting, are likely to enhance eye care service utilization in underserved settings.
Affiliation(s)
- C Su
- School of Public Health, Peking University, Beijing 100191, China; China Centre for Health Development Studies, Peking University, Beijing 100191, China
- Z Wang
- Faculty of Medicine and Health Sciences, McGill University, Montreal, Quebec H3S 1Z1, Canada
- X Dong
- School of Public Health, Peking University, Beijing 100191, China; China Centre for Health Development Studies, Peking University, Beijing 100191, China
- X Ma
- China Centre for Health Development Studies, Peking University, Beijing 100191, China
5
Li Z, Wang L, Qiang W, Chen K, Wang Z, Zhang Y, Xie H, Wu S, Jiang J, Chen W. DeepMonitoring: a deep learning-based monitoring system for assessing the quality of cornea images captured by smartphones. Front Cell Dev Biol 2024; 12:1447067. PMID: 39258227; PMCID: PMC11385315; DOI: 10.3389/fcell.2024.1447067.
Abstract
Smartphone-based artificial intelligence (AI) diagnostic systems could allow high-risk patients to self-screen for corneal diseases (e.g., keratitis) instead of having them detected in traditional face-to-face medical practice, enabling patients to proactively identify their own corneal diseases at an early stage. However, AI diagnostic systems perform markedly worse on low-quality images, which are unavoidable in real-world environments (and especially common in patient-captured images) owing to various factors, hindering the implementation of these systems in clinical practice. Here, we construct a deep learning-based image quality monitoring system (DeepMonitoring) not only to discern low-quality cornea images captured by smartphones but also to identify the underlying factors contributing to the generation of such low-quality images, which can guide operators to acquire high-quality images in a timely manner. This system performs well across the validation, internal, and external testing sets, with AUCs ranging from 0.984 to 0.999. DeepMonitoring holds the potential to filter out low-quality cornea images produced by smartphones, facilitating the application of smartphone-based AI diagnostic systems in real-world clinical settings, especially in the context of self-screening for corneal diseases.
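The gating workflow the abstract describes can be sketched as follows. In the study the quality score and degradation factors come from the trained DeepMonitoring classifier; here the scores, threshold, and factor names are all hypothetical stand-ins:

```python
# Route each smartphone image either to the diagnostic model or back to the
# operator, with the most likely degradation factor as reshoot feedback.
FACTORS = ["blur", "poor_lighting", "off_center"]   # hypothetical factor set

def gate(images, quality_threshold=0.5):
    accepted, feedback = [], []
    for name, q_score, factor_probs in images:
        if q_score >= quality_threshold:
            accepted.append(name)               # forwarded to the diagnostic model
        else:
            worst = FACTORS[max(range(len(FACTORS)), key=factor_probs.__getitem__)]
            feedback.append((name, worst))      # tell the operator what to fix
    return accepted, feedback

images = [
    ("img1.jpg", 0.92, [0.1, 0.1, 0.1]),
    ("img2.jpg", 0.20, [0.7, 0.2, 0.1]),       # rejected: mostly blur
]
print(gate(images))
```

The point of reporting the factor, rather than just rejecting the image, is that the operator can correct the specific problem and recapture immediately.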
Affiliation(s)
- Zhongwen Li
- Ningbo Key Laboratory of Medical Research on Blinding Eye Diseases, Ningbo Eye Institute, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, China
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Lei Wang
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Wei Qiang
- Ningbo Key Laboratory of Medical Research on Blinding Eye Diseases, Ningbo Eye Institute, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, China
- Kuan Chen
- Cangnan Hospital, Wenzhou Medical University, Wenzhou, China
- Zhouqian Wang
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Yi Zhang
- School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an, China
- He Xie
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Shanjun Wu
- Ningbo Key Laboratory of Medical Research on Blinding Eye Diseases, Ningbo Eye Institute, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, China
- Jiewei Jiang
- School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an, China
- Wei Chen
- Ningbo Key Laboratory of Medical Research on Blinding Eye Diseases, Ningbo Eye Institute, Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, China
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
6
Li F, Wang D, Yang Z, Zhang Y, Jiang J, Liu X, Kong K, Zhou F, Tham CC, Medeiros F, Han Y, Grzybowski A, Zangwill LM, Lam DSC, Zhang X. The AI revolution in glaucoma: Bridging challenges with opportunities. Prog Retin Eye Res 2024; 103:101291. PMID: 39186968; DOI: 10.1016/j.preteyeres.2024.101291.
Abstract
Recent advancements in artificial intelligence (AI) hold transformative potential for reshaping glaucoma clinical management: improving screening efficacy, sharpening diagnostic precision, and refining the detection of disease progression. However, incorporating AI into healthcare faces significant hurdles in both algorithm development and deployment. On the development side, issues arise from the intensive effort required to label data, inconsistent diagnostic standards, and a lack of thorough testing, which often limits the algorithms' widespread applicability. Additionally, the "black box" nature of AI algorithms may leave doctors wary or skeptical. On the deployment side, challenges include dealing with lower-quality images in real-world situations and the systems' limited ability to generalize across diverse ethnic groups and different diagnostic equipment. Looking ahead, new developments aim to protect data privacy through federated learning paradigms, improve algorithm generalizability by diversifying input data modalities, and augment datasets with synthetic imagery. The integration of smartphones appears promising for deploying AI algorithms in both clinical and non-clinical settings. Furthermore, bringing in large language models (LLMs) to act as interactive tools in medicine may signify a significant change in how healthcare is delivered in the future. By navigating these challenges and leveraging them as opportunities, the field of glaucoma AI will achieve not only improved algorithmic accuracy and optimized data integration but also a paradigmatic shift towards enhanced clinical acceptance and a transformative improvement in glaucoma care.
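The federated-learning idea mentioned above can be sketched as FedAvg-style weight averaging: each site trains on its own images, and only model weights, never raw data, reach the server, which averages them weighted by each site's data size. The plain arrays below stand in for real model parameters; this is an illustration of the aggregation step only, not of any specific glaucoma system.

```python
import numpy as np

def federated_average(site_weights, site_sizes):
    """Weighted mean of each parameter tensor across participating sites."""
    total = sum(site_sizes)
    return [
        sum(w[i] * (n / total) for w, n in zip(site_weights, site_sizes))
        for i in range(len(site_weights[0]))
    ]

site_a = [np.array([1.0, 2.0])]   # parameters after local training at site A
site_b = [np.array([3.0, 4.0])]   # parameters after local training at site B
merged = federated_average([site_a, site_b], site_sizes=[100, 300])
print(merged[0])  # pulled toward site B, which contributed more data
```

In a real deployment this exchange repeats over many rounds, with the merged weights broadcast back to the sites for further local training.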
Affiliation(s)
- Fei Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Deming Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Zefeng Yang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Yinhang Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Jiaxuan Jiang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Xiaoyi Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Kangjie Kong
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Fengqi Zhou
- Ophthalmology, Mayo Clinic Health System, Eau Claire, WI, USA
- Clement C Tham
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China
- Felipe Medeiros
- Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL, USA
- Ying Han
- University of California, San Francisco, Department of Ophthalmology, San Francisco, CA, USA; The Francis I. Proctor Foundation for Research in Ophthalmology, University of California, San Francisco, CA, USA
- Andrzej Grzybowski
- Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznan, Poland
- Linda M Zangwill
- Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, CA, USA
- Dennis S C Lam
- The International Eye Research Institute of the Chinese University of Hong Kong (Shenzhen), Shenzhen, China; The C-MER Dennis Lam & Partners Eye Center, C-MER International Eye Care Group, Hong Kong, China
- Xiulan Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
7
Wang Y, Han X, Li C, Luo L, Yin Q, Zhang J, Peng G, Shi D, He M. Impact of Gold-Standard Label Errors on Evaluating Performance of Deep Learning Models in Diabetic Retinopathy Screening: Nationwide Real-World Validation Study. J Med Internet Res 2024; 26:e52506. PMID: 39141915; PMCID: PMC11358665; DOI: 10.2196/52506.
Abstract
BACKGROUND For medical artificial intelligence (AI) training and validation, human expert labels are considered the gold standard that represents the correct answers or desired outputs for a given data set. These labels serve as a reference or benchmark against which the model's predictions are compared. OBJECTIVE This study aimed to assess the accuracy of a custom deep learning (DL) algorithm on classifying diabetic retinopathy (DR) and further demonstrate how label errors may contribute to this assessment in a nationwide DR-screening program. METHODS Fundus photographs from the Lifeline Express, a nationwide DR-screening program, were analyzed to identify the presence of referable DR using both (1) manual grading by National Health Service England-certificated graders and (2) a DL-based DR-screening algorithm with validated good lab performance. To assess the accuracy of labels, a random sample of images with disagreement between the DL algorithm and the labels was adjudicated by ophthalmologists who were masked to the previous grading results. The error rates of labels in this sample were then used to correct the number of negative and positive cases in the entire data set, serving as postcorrection labels. The DL algorithm's performance was evaluated against both pre- and postcorrection labels. RESULTS The analysis included 736,083 images from 237,824 participants. The DL algorithm exhibited a gap between the real-world performance and the lab-reported performance in this nationwide data set, with a sensitivity increase of 12.5% (from 79.6% to 92.5%, P<.001) and a specificity increase of 6.9% (from 91.6% to 98.5%, P<.001). In the random sample, 63.6% (560/880) of negative images and 5.2% (140/2710) of positive images were misclassified in the precorrection human labels. High myopia was the primary reason for misclassifying non-DR images as referable DR images, while laser spots were predominantly responsible for misclassified referable cases. 
The estimated label error rate for the entire data set was 1.2%. The label correction was estimated to bring about a 12.5% enhancement in the estimated sensitivity of the DL algorithm (P<.001). CONCLUSIONS Label errors based on human image grading, although in a small percentage, can significantly affect the performance evaluation of DL algorithms in real-world DR screening.
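The effect described above can be made concrete with a small worked sketch: when adjudication reveals that some of the model's apparent "false positives" were in fact mislabeled positives, reassigning them raises the estimated sensitivity. The counts below are illustrative, not the study's data, and for simplicity only the false-positive side is corrected:

```python
def sensitivity(tp, fn):
    return tp / (tp + fn)

# Pre-correction confusion counts against the (partly wrong) human labels
tp, fn, fp = 790, 210, 120

# Adjudication finds 150 apparent "false positives" elsewhere in the data were
# actually mislabeled positives the model detected correctly
mislabeled_positives = 150
tp_corrected = tp + mislabeled_positives

print(round(sensitivity(tp, fn), 3))             # estimate against raw labels
print(round(sensitivity(tp_corrected, fn), 3))   # higher after label correction
```

A fuller correction would also re-examine the false negatives, since label errors can hide in both directions, as the study's adjudication of both negative and positive images shows.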
Affiliation(s)
- Yueye Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- School of Optometry, The Hong Kong Polytechnic University, Kowloon, China (Hong Kong)
- Xiaotong Han
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Cong Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Lixia Luo
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Qiuxia Yin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Jian Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Guankai Peng
- Guangzhou Vision Tech Medical Technology Co, Ltd, Guangzhou, China
- Danli Shi
- School of Optometry, The Hong Kong Polytechnic University, Kowloon, China (Hong Kong)
- Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, China (Hong Kong)
- Mingguang He
- School of Optometry, The Hong Kong Polytechnic University, Kowloon, China (Hong Kong)
- Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, China (Hong Kong)
- Centre for Eye and Vision Research, Hong Kong, China (Hong Kong)
8
Li CP, Dai W, Xiao YP, Qi M, Zhang LX, Gao L, Zhang FL, Lai YK, Liu C, Lu J, Chen F, Chen D, Shi S, Li S, Zeng Q, Chen Y. Two-stage deep neural network for diagnosing fungal keratitis via in vivo confocal microscopy images. Sci Rep 2024; 14:18432. PMID: 39117709; PMCID: PMC11310506; DOI: 10.1038/s41598-024-68768-y.
Abstract
Timely and effective diagnosis of fungal keratitis (FK) is necessary for suitable treatment and for avoiding irreversible vision loss. In vivo confocal microscopy (IVCM) has been widely adopted to guide FK diagnosis. We present a deep learning framework for diagnosing FK from IVCM images to assist ophthalmologists. Inspired by the real diagnostic process, our method employs a two-stage deep architecture that makes diagnostic predictions based on both image-level and sequence-level information. To the best of our knowledge, we collected the largest dataset for this task, 96,632 IVCM images in total with expert labeling, to train and evaluate our method. The specificity and sensitivity of our method in diagnosing FK on the unseen test set reached 96.65% and 97.57%, comparable to or better than those of experienced ophthalmologists. The network can provide image-level, sequence-level, and patient-level diagnostic suggestions to physicians. These results show great promise for assisting ophthalmologists in FK diagnosis.
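The two-stage idea the abstract describes can be sketched as follows: stage one scores each IVCM image, and stage two combines the per-image scores of a sequence into one sequence-level prediction. In the paper the second stage is a learned network; here a simple noise-robust pooling (mean of the top-k image scores) stands in as an illustrative placeholder, with toy scores:

```python
def sequence_score(image_scores, k=3):
    """Aggregate per-image scores: average the k most suspicious frames."""
    top = sorted(image_scores, reverse=True)[:k]
    return sum(top) / len(top)

def diagnose_sequence(image_scores, threshold=0.5):
    # True = fungal keratitis suspected at the sequence level
    return sequence_score(image_scores) >= threshold

seq = [0.1, 0.05, 0.9, 0.85, 0.2, 0.8]   # a few frames show hyphae-like structures
print(diagnose_sequence(seq))
```

Pooling over the most suspicious frames reflects the clinical intuition that a sequence is positive if any part of it clearly shows fungal structures, even when most frames look normal; patient-level output can then aggregate over sequences the same way.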
Affiliation(s)
- Chun-Peng Li
- Beijing Key Laboratory of Mobile Computing and Pervasive Device, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Weiwei Dai
- Changsha Aier Eye Hospital, Hunan, China
- Yun-Peng Xiao
- Beijing Key Laboratory of Mobile Computing and Pervasive Device, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
- Mengying Qi
- Wuhan Aier Hankou Eye Hospital, Wuhan, China
- Ling-Xiao Zhang
- Beijing Key Laboratory of Mobile Computing and Pervasive Device, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
- Lin Gao
- Beijing Key Laboratory of Mobile Computing and Pervasive Device, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Fang-Lue Zhang
- Victoria University of Wellington, Wellington, New Zealand
- Chang Liu
- Beijing Aier Intech Eye Hospital, Beijing, China
- Jing Lu
- Chengdu Aier East Eye Hospital, Chengdu, China
- Fen Chen
- Wuhan Aier Hankou Eye Hospital, Wuhan, China
- Dan Chen
- Wuhan Aier Hankou Eye Hospital, Wuhan, China
- Shuai Shi
- Beijing Aier Intech Eye Hospital, Beijing, China
- Shaowei Li
- Beijing Aier Intech Eye Hospital, Beijing, China
- Qingyan Zeng
- Wuhan Aier Hankou Eye Hospital, Wuhan, China
- Aier Eye Hospital of Wuhan University, Wuhan, China
- Hubei University of Science and Technology, Xianning, China
- Aier Eye Hospital, Jinan University, Guangzhou, China
- Yiqiang Chen
- Beijing Key Laboratory of Mobile Computing and Pervasive Device, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
9
Rodríguez-Miguel A, Arruabarrena C, Allendes G, Olivera M, Zarranz-Ventura J, Teus MA. Hybrid deep learning models for the screening of Diabetic Macular Edema in optical coherence tomography volumes. Sci Rep 2024; 14:17633. PMID: 39085461; PMCID: PMC11291805; DOI: 10.1038/s41598-024-68489-2.
Abstract
Several studies published so far have used highly selective image datasets from unclear sources to train computer vision models, which may lead to overestimated results, while studies conducted in real-life settings remain scarce. To avoid image selection bias, we stacked convolutional and recurrent neural networks (CNN-RNN) to analyze complete optical coherence tomography (OCT) cubes and predict diabetic macular edema (DME) in a real-world diabetic retinopathy screening program. A retrospective cohort study was carried out. Over 4 years, 5314 OCT cubes from 4408 subjects who attended the diabetic retinopathy (DR) screening program were included. We arranged twenty-two (22) pre-trained CNNs in parallel with a bidirectional RNN layer stacked at the bottom, allowing the model to make a prediction for the whole OCT cube. A staff of retina experts built a DME ground truth later used to train a set of these CNN-RNN models with different configurations. For each trained CNN-RNN model, we performed threshold tuning to find the optimal cut-off point for binary classification of DME. Finally, the best models were selected according to sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC) with their 95% confidence intervals (95% CI). An ensemble of the best models was also explored. Of the cubes, 5188 were non-DME and 126 were DME. Three models achieved an AUROC of 0.94. Among these, sensitivity and specificity (95% CI) ranged from 84.1-90.5 and 89.7-93.3, respectively, at threshold 1; from 89.7-92.1 and 80-83.1 at threshold 2; and from 80.2-81 and 93.8-97 at threshold 3. The ensemble model improved these results, and lower specificity was observed among subjects with sight-threatening DR. Analysis by age, gender, or grade of DME did not alter the performance of the models. CNN-RNN models showed high diagnostic accuracy for detecting DME in a real-world setting. This engine allowed us to detect extra-foveal DMEs commonly overlooked in other studies, and showed potential for application as the first filter of non-referable patients in an outpatient center within a population-based DR screening program, sparing patients who would otherwise end up in specialized care.
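The per-model threshold tuning described in this abstract can be sketched in a few lines of plain Python. This is an illustrative reconstruction, not the authors' code: the toy probabilities are invented, and the use of Youden's J statistic as the selection criterion is an assumption (the paper only states that an optimal cut-off was sought).

```python
def sensitivity_specificity(scores, labels, threshold):
    """Binary metrics for a given cut-off: predict DME when score >= threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

def youden_threshold(scores, labels):
    """Pick the cut-off maximizing Youden's J = sensitivity + specificity - 1."""
    best_t, best_j = None, float("-inf")
    for t in sorted(set(scores)):
        sens, spec = sensitivity_specificity(scores, labels, t)
        j = sens + spec - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Invented per-cube DME probabilities and expert labels (1 = DME).
scores = [0.05, 0.10, 0.20, 0.35, 0.40, 0.70, 0.80, 0.95]
labels = [0, 0, 0, 0, 1, 1, 1, 1]
t, j = youden_threshold(scores, labels)
```

With a different operating point per model, the same scan yields the three thresholds reported in the abstract, each trading sensitivity against specificity.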
Affiliation(s)
- Carolina Arruabarrena
- Department of Ophthalmology, Retina Unit, University Hospital "Príncipe de Asturias", 28805, Madrid, Spain
- Germán Allendes
- Department of Ophthalmology, Retina Unit, University Hospital "Príncipe de Asturias", 28805, Madrid, Spain
- Javier Zarranz-Ventura
- Hospital Clínic de Barcelona, University of Barcelona, 08036, Barcelona, Spain
- Institut de Investigacions Biomediques August Pi I Sunyer (IDIBAPS), 08036, Barcelona, Spain
- Miguel A Teus
- Department of Surgery, Medical and Social Sciences (Ophthalmology), University of Alcalá, 28871, Madrid, Spain
10
Chen D, Geevarghese A, Lee S, Plovnick C, Elgin C, Zhou R, Oermann E, Aphinyonaphongs Y, Al-Aswad LA. Transparency in Artificial Intelligence Reporting in Ophthalmology-A Scoping Review. Ophthalmology Science 2024; 4:100471. [PMID: 38591048] [PMCID: PMC11000111] [DOI: 10.1016/j.xops.2024.100471]
Abstract
Topic This scoping review summarizes artificial intelligence (AI) reporting in the ophthalmology literature with respect to model development and validation. We characterize the state of transparency in reporting of studies prospectively validating models for disease classification. Clinical Relevance Understanding what elements authors currently describe regarding their AI models may aid in the future standardization of reporting. This review highlights the need for transparency to facilitate the critical appraisal of models prior to clinical implementation, to minimize bias and inappropriate use. Transparent reporting can improve effective and equitable use in clinical settings. Methods Eligible articles (as of January 2022) from PubMed, Embase, Web of Science, and CINAHL were independently screened by 2 reviewers. All observational and clinical trial studies evaluating the performance of an AI model for disease classification of ophthalmic conditions were included. Studies were evaluated for reporting of parameters derived from reporting guidelines (CONSORT-AI, MI-CLAIM) and our previously published editorial on model cards. The reporting of these factors, which included basic model and dataset details (source, demographics) and prospective validation outcomes, was summarized. Results Thirty-seven prospective validation studies were included in the scoping review. Eleven additional associated training and/or retrospective validation studies were included when this information could not be determined from the primary articles. These 37 studies validated 27 unique AI models; multiple studies evaluated the same algorithms (EyeArt, IDx-DR, and Medios AI). Details of model development were variably reported; 18 of 27 models described training dataset annotation and 10 of 27 studies reported training data distribution. Demographic information of training data was rarely reported; 7 of the 27 unique models reported age and gender, and only 2 reported race and/or ethnicity. At the level of prospective clinical validation, age and gender of populations were more consistently reported (29 and 28 of 37 studies, respectively), but only 9 studies reported race and/or ethnicity data. Scope of use was difficult to discern for the majority of models. Fifteen studies did not state or imply primary users. Conclusion Our scoping review demonstrates variable reporting of information related to both model development and validation. The intention of our study was not to assess the quality of the factors we examined, but to characterize what information is, and is not, regularly reported. Our results suggest the need for greater transparency in the reporting of information necessary to determine the appropriateness and fairness of these tools prior to clinical use. Financial Disclosures Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
Affiliation(s)
- Dinah Chen
- Department of Ophthalmology, NYU Langone Health, New York, New York
- Samuel Lee
- Department of Neurosurgery, NYU Grossman School of Medicine, New York, New York
- Cansu Elgin
- Department of Ophthalmology, Istanbul University-Cerrahpasa, Istanbul, Turkey
- Raymond Zhou
- Department of Neurosurgery, Vanderbilt School of Medicine, Nashville, Tennessee
- Eric Oermann
- Department of Neurosurgery, NYU Grossman School of Medicine, New York, New York
- Department of Neurosurgery, NYU Langone Health, New York, New York
- Yindalon Aphinyonaphongs
- Department of Medicine, NYU Langone Health, New York, New York
- Department of Population Health, NYU Grossman School of Medicine, New York, New York
- Lama A. Al-Aswad
- Department of Ophthalmology, NYU Langone Health, New York, New York
- Department of Population Health, NYU Grossman School of Medicine, New York, New York
11
Wang R, Tan Y, Zhong Z, Rao S, Zhou Z, Zhang L, Zhang C, Chen W, Ruan L, Sun X. Deep Learning-Based Vascular Aging Prediction From Retinal Fundus Images. Transl Vis Sci Technol 2024; 13:10. [PMID: 38984914] [PMCID: PMC11238877] [DOI: 10.1167/tvst.13.7.10]
Abstract
Purpose The purpose of this study was to establish and validate a deep learning model to screen for vascular aging using retinal fundus images. Although vascular aging is considered a novel cardiovascular risk factor, assessment methods are currently limited and often available only in developed regions. Methods We used 8865 retinal fundus images and clinical parameters of 4376 patients from two independent datasets to train a deep learning algorithm. The gold standard for vascular aging was defined as a pulse wave velocity ≥1400 cm/s. The probability of the presence of vascular aging was defined as the deep learning retinal vascular aging score (Reti-aging score). We compared the performance of the deep learning model and clinical parameters by calculating the area under the receiver operating characteristic curve (AUC). We recruited clinical specialists, including ophthalmologists and geriatricians, to assess vascular aging in patients using retinal fundus images, aiming to compare the diagnostic performance of deep learning models and clinical specialists. Finally, the potential of the Reti-aging score for identifying new-onset hypertension (NH) and new-onset carotid artery plaque (NCP) in the subsequent three years was examined. Results The Reti-aging score model achieved an AUC of 0.826 (95% confidence interval [CI] = 0.793-0.855) and 0.779 (95% CI = 0.765-0.794) in the internal and external datasets, respectively. It showed better performance in predicting vascular aging than prediction with clinical parameters. The average accuracy of ophthalmologists (66.3%) was lower than that of the Reti-aging score model, whereas geriatricians were unable to make predictions based on retinal fundus images. The Reti-aging score was associated with the risk of NH and NCP (P < 0.05). Conclusions The Reti-aging score model might serve as a novel method to predict vascular aging through analysis of retinal fundus images. The Reti-aging score provides a novel indicator to predict new-onset cardiovascular diseases. Translational Relevance Given the robust performance of our model, it provides a new and reliable method for screening vascular aging, especially in less-developed areas.
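The model-versus-clinician comparison in this abstract rests on the AUC. As an illustrative sketch (not the study's code), the rank-based Mann-Whitney formulation of the AUROC can be computed in a few lines; the scores and labels below are invented for demonstration.

```python
def auroc(scores, labels):
    """Rank-based (Mann-Whitney) AUROC: the probability that a randomly chosen
    positive case scores higher than a randomly chosen negative one (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented Reti-aging-style scores against a pulse-wave-velocity gold standard
# (1 = vascular aging present).
scores = [0.12, 0.30, 0.45, 0.55, 0.62, 0.81, 0.90]
labels = [0, 0, 1, 0, 1, 1, 1]
auc = auroc(scores, labels)
```

Because this definition depends only on score rankings, it lets a continuous model score be compared on the same footing as clinicians' graded judgments.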
Affiliation(s)
- Ruohong Wang
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, People's Republic of China
- Yuhe Tan
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, People's Republic of China
- Zheng Zhong
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, People's Republic of China
- Suyun Rao
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, People's Republic of China
- Ziqing Zhou
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, People's Republic of China
- Lisha Zhang
- Department of Health Management Center, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, People's Republic of China
- Cuntai Zhang
- Department of Geriatrics, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, People's Republic of China
- Wei Chen
- Department of Computer Center, Tongji Hospital affiliated to Tongji Medical College of Huazhong University of Science and Technology, Wuhan, People's Republic of China
- Lei Ruan
- Department of Geriatrics, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, People's Republic of China
- Xufang Sun
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, People's Republic of China
12
Feng X, Xu K, Luo MJ, Chen H, Yang Y, He Q, Song C, Li R, Wu Y, Wang H, Tham YC, Ting DSW, Lin H, Wong TY, Lam DSC. Latest developments of generative artificial intelligence and applications in ophthalmology. Asia Pac J Ophthalmol (Phila) 2024; 13:100090. [PMID: 39128549] [DOI: 10.1016/j.apjo.2024.100090]
Abstract
The emergence of generative artificial intelligence (AI) has revolutionized various fields. In ophthalmology, generative AI has the potential to enhance efficiency, accuracy, personalization, and innovation in clinical practice and medical research by processing data, streamlining medical documentation, facilitating patient-doctor communication, aiding clinical decision-making, and simulating clinical trials. This review focuses on the development and integration of generative AI models into the clinical workflows and scientific research of ophthalmology. It outlines the need for a standard framework for comprehensive assessments, robust evidence, and exploration of the potential of multimodal capabilities and intelligent agents. Additionally, the review addresses the risks of AI model development and application in the clinical service and research of ophthalmology, including data privacy, data bias, adaptation friction, overdependence, and job replacement, and summarizes a risk management framework to mitigate these concerns. This review highlights the transformative potential of generative AI in enhancing patient care and improving operational efficiency in the clinical service and research of ophthalmology. It also advocates for a balanced approach to its adoption.
Affiliation(s)
- Xiaoru Feng
- School of Biomedical Engineering, Tsinghua Medicine, Tsinghua University, Beijing, China; Institute for Hospital Management, Tsinghua Medicine, Tsinghua University, Beijing, China
- Kezheng Xu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Ming-Jie Luo
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Haichao Chen
- School of Clinical Medicine, Beijing Tsinghua Changgung Hospital, Tsinghua Medicine, Tsinghua University, Beijing, China
- Yangfan Yang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Qi He
- Research Centre of Big Data and Artificial Research for Medicine, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Chenxin Song
- Research Centre of Big Data and Artificial Research for Medicine, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Ruiyao Li
- Research Centre of Big Data and Artificial Research for Medicine, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- You Wu
- Institute for Hospital Management, Tsinghua Medicine, Tsinghua University, Beijing, China; School of Basic Medical Sciences, Tsinghua Medicine, Tsinghua University, Beijing, China; Department of Health Policy and Management, Bloomberg School of Public Health, Johns Hopkins University, Baltimore, MD, USA
- Haibo Wang
- Research Centre of Big Data and Artificial Research for Medicine, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Yih Chung Tham
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Centre for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Science Academic Clinical Program, Duke-NUS Medical School, Singapore
- Daniel Shu Wei Ting
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology and Visual Science Academic Clinical Program, Duke-NUS Medical School, Singapore; Byers Eye Institute, Stanford University, Palo Alto, CA, USA
- Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China; Center for Precision Medicine and Department of Genetics and Biomedical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China; Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou, China
- Tien Yin Wong
- School of Clinical Medicine, Beijing Tsinghua Changgung Hospital, Tsinghua Medicine, Tsinghua University, Beijing, China; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Tsinghua Medicine, Tsinghua University, Beijing, China
- Dennis Shun-Chiu Lam
- The International Eye Research Institute, The Chinese University of Hong Kong (Shenzhen), Shenzhen, China; The C-MER International Eye Care Group, Hong Kong, Hong Kong, China
13
Poh SSJ, Sia JT, Yip MYT, Tsai ASH, Lee SY, Tan GSW, Weng CY, Kadonosono K, Kim M, Yonekawa Y, Ho AC, Toth CA, Ting DSW. Artificial Intelligence, Digital Imaging, and Robotics Technologies for Surgical Vitreoretinal Diseases. Ophthalmol Retina 2024; 8:633-645. [PMID: 38280425] [DOI: 10.1016/j.oret.2024.01.018]
Abstract
OBJECTIVE To review recent technological advancements in imaging, surgical visualization, robotics technology, and the use of artificial intelligence in surgical vitreoretinal (VR) diseases. BACKGROUND Technological advancements in imaging enhance both preoperative and intraoperative management of surgical VR diseases. Widefield imaging in fundus photography and OCT can improve assessment of peripheral retinal disorders such as retinal detachments, degeneration, and tumors. OCT angiography provides rapid and noninvasive imaging of the retinal and choroidal vasculature. Surgical visualization has also improved, with intraoperative OCT providing a detailed real-time assessment of retinal layers to guide surgical decisions. Heads-up displays and head-mounted displays utilize 3-dimensional technology to provide surgeons with enhanced visual guidance and improved ergonomics during surgery. Intraocular robotics technology allows for greater surgical precision and has been shown to be useful in retinal vein cannulation and subretinal drug delivery. In addition, deep learning techniques leverage diverse data, including widefield retinal photography and OCT, for better predictive accuracy in classification, segmentation, and prognostication of many surgical VR diseases. CONCLUSION This review article summarizes the latest updates in these areas and highlights the importance of continuous innovation and improvement in technology within the field. These advancements have the potential to reshape the management of surgical VR diseases in the very near future and to ultimately improve patient care. FINANCIAL DISCLOSURE(S) Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
Affiliation(s)
- Stanley S J Poh
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Josh T Sia
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
- Michelle Y T Yip
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore
- Andrew S H Tsai
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Shu Yen Lee
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Gavin S W Tan
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore
- Christina Y Weng
- Department of Ophthalmology, Baylor College of Medicine, Houston, Texas
- Min Kim
- Department of Ophthalmology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, South Korea
- Yoshihiro Yonekawa
- Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania
- Allen C Ho
- Wills Eye Hospital, Mid Atlantic Retina, Thomas Jefferson University, Philadelphia, Pennsylvania
- Cynthia A Toth
- Departments of Ophthalmology and Biomedical Engineering, Duke University, Durham, North Carolina
- Daniel S W Ting
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore; Byers Eye Institute, Stanford University, Palo Alto, California
14
Wu G, Zhang X, Borchert GA, Zheng C, Liang Y, Wang Y, Du Z, Huang Y, Shang X, Yang X, Hu Y, Yu H, Zhu Z. Association of retinal age gap with chronic kidney disease and subsequent cardiovascular disease sequelae: a cross-sectional and longitudinal study from the UK Biobank. Clin Kidney J 2024; 17:sfae088. [PMID: 38989278] [PMCID: PMC11233993] [DOI: 10.1093/ckj/sfae088]
Abstract
Background Chronic kidney disease (CKD) increases the risk of cardiovascular disease (CVD) and is more prevalent in older adults. Retinal age gap, a biomarker of aging based on fundus images, has been previously developed and validated. This study aimed to investigate the association of retinal age gap with CKD and subsequent CVD complications. Methods A deep learning model was trained to predict retinal age using 19 200 fundus images of 11 052 participants without any medical history at baseline. Retinal age gap, defined as predicted retinal age minus chronological age, was calculated for the remaining 35 906 participants. Logistic regression models and Cox proportional hazards regression models were used for the association analyses. Results A total of 35 906 participants (56.75 ± 8.04 years, 55.68% female) were included in this study. In the cross-sectional analysis, each 1-year increase in retinal age gap was associated with a 2% increase in the risk of prevalent CKD [odds ratio 1.02, 95% confidence interval (CI) 1.01-1.04, P = .012]. A longitudinal analysis of 35 039 participants demonstrated that 2.87% of them developed CKD during follow-up, and each 1-year increase in retinal age gap was associated with a 3% increase in the risk of incident CKD (hazard ratio 1.03, 95% CI 1.01-1.05, P = .004). In addition, a total of 111 CKD patients (15.81%) developed CVD during follow-up, and each 1-year increase in retinal age gap was associated with a 10% increase in the risk of incident CVD (hazard ratio 1.10, 95% CI 1.03-1.17, P = .005). Conclusions We found that retinal age gap was independently associated with the prevalence and incidence of CKD, and also with CVD complications in CKD patients. This supports the use of this novel biomarker in identifying individuals at high risk of CKD and CKD patients with increased risk of CVD.
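The two headline quantities in this abstract, the retinal age gap and the per-1-year hazard ratios, compose simply. Below is a minimal sketch assuming the usual proportional-hazards reading, in which a per-year hazard ratio compounds multiplicatively over the gap; the example ages are invented and this is not the study's code.

```python
def retinal_age_gap(predicted_age, chronological_age):
    """Retinal age gap = model-predicted retinal age minus chronological age."""
    return predicted_age - chronological_age

def cumulative_hazard_ratio(gap_years, hr_per_year):
    """Proportional-hazards reading: a per-1-year hazard ratio compounds
    multiplicatively over the whole gap."""
    return hr_per_year ** gap_years

# Invented example: a 56-year-old whose fundus image 'looks' 61.
gap = retinal_age_gap(predicted_age=61.0, chronological_age=56.0)
ckd_hr = cumulative_hazard_ratio(gap, 1.03)  # per-year HR for incident CKD (abstract)
cvd_hr = cumulative_hazard_ratio(gap, 1.10)  # per-year HR for incident CVD in CKD patients
```

Under this reading, a 5-year gap corresponds to roughly a 16% higher CKD hazard and a 61% higher CVD hazard relative to a zero gap.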
Affiliation(s)
- Guanrong Wu
- Guangdong Eye Institute, Department of Ophthalmology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- School of Medicine, South China University of Technology, Guangzhou, China
- Xiayin Zhang
- Guangdong Eye Institute, Department of Ophthalmology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Grace A Borchert
- Ophthalmology, Centre for Eye Research Australia, University of Melbourne, Melbourne, Australia
- Chunwen Zheng
- Guangdong Eye Institute, Department of Ophthalmology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Shantou University Medical College, Shantou, China
- Yingying Liang
- Guangdong Eye Institute, Department of Ophthalmology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Yaxin Wang
- Guangdong Eye Institute, Department of Ophthalmology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Zijing Du
- Guangdong Eye Institute, Department of Ophthalmology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Yu Huang
- Guangdong Eye Institute, Department of Ophthalmology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Xianwen Shang
- Guangdong Eye Institute, Department of Ophthalmology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Xiaohong Yang
- Guangdong Eye Institute, Department of Ophthalmology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Yijun Hu
- Guangdong Eye Institute, Department of Ophthalmology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Honghua Yu
- Guangdong Eye Institute, Department of Ophthalmology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, China
- Zhuoting Zhu
- Guangdong Eye Institute, Department of Ophthalmology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Ophthalmology, Centre for Eye Research Australia, University of Melbourne, Melbourne, Australia
15
Musleh AM, AlRyalat SA, Abid MN, Salem Y, Hamila HM, Sallam AB. Diagnostic accuracy of artificial intelligence in detecting retinitis pigmentosa: A systematic review and meta-analysis. Surv Ophthalmol 2024; 69:411-417. [PMID: 38042377] [DOI: 10.1016/j.survophthal.2023.11.010]
Abstract
Retinitis pigmentosa (RP) is often undetected in its early stages. Artificial intelligence (AI) has emerged as a promising tool in medical diagnostics. Therefore, we conducted a systematic review and meta-analysis to evaluate the diagnostic accuracy of AI in detecting RP using various ophthalmic images. We conducted a systematic search of the PubMed, Scopus, and Web of Science databases on December 31, 2022. We included English-language studies that used any ophthalmic imaging modality, such as OCT or fundus photography, used any AI technology, had at least an expert in ophthalmology as a reference standard, and proposed an AI algorithm able to distinguish between images with and without retinitis pigmentosa features. We considered sensitivity, specificity, and area under the curve (AUC) as the main measures of accuracy. Fourteen studies were included in the qualitative analysis and 10 in the quantitative analysis. In total, the studies included in the meta-analysis comprised 920,162 images. Overall, AI showed excellent performance in detecting RP, with pooled sensitivity and specificity of 0.985 [95% CI: 0.948-0.996] and 0.993 [95% CI: 0.982-0.997], respectively. The area under the receiver operating characteristic curve (AUROC), using a random-effects model, was calculated to be 0.999 [95% CI: 0.998-1.000; P < 0.001]. The Zhou and Dendukuri I² test revealed a low level of heterogeneity between the studies, with I² = 19.94% for sensitivity and I² = 21.07% for specificity. The bivariate I² (20.33%) also suggested a low degree of heterogeneity. We found evidence supporting the accuracy of AI in the detection of RP, and the level of heterogeneity between the studies was low.
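The low-heterogeneity finding reported here is conventionally summarized with Cochran's Q and Higgins' I². A minimal sketch of those two statistics in plain Python follows; note the study itself used the Zhou and Dendukuri bivariate approach, and the effect sizes and variances below are invented toy values, not the review's data.

```python
def cochran_q(effects, variances):
    """Cochran's Q: inverse-variance-weighted squared deviations of study
    effects from the fixed-effect pooled mean."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))

def i_squared(effects, variances):
    """Higgins' I^2 (%): share of total variability attributable to
    between-study heterogeneity, floored at 0 when Q < degrees of freedom."""
    q = cochran_q(effects, variances)
    df = len(effects) - 1
    return 0.0 if q == 0 else max(0.0, (q - df) / q) * 100.0

# Invented study-level effects with unit variances.
consistent = i_squared([1.0, 2.0, 3.0], [1.0, 1.0, 1.0])     # Q equals df, so I^2 = 0
heterogeneous = i_squared([1.0, 2.0, 6.0], [1.0, 1.0, 1.0])  # one outlying study
```

I² below roughly 25%, as in the 19.94-21.07% range reported, is generally read as low heterogeneity.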
Affiliation(s)
- Saif Aldeen AlRyalat
- Department of Ophthalmology, The University of Jordan, Amman, Jordan; Department of Ophthalmology, Houston Methodist Hospital, Houston, TX, USA
- Mohammad Naim Abid
- Marka Specialty Hospital, Amman, Jordan; Valley Retina Institute, P.A., McAllen, TX, USA
- Yahia Salem
- Faculty of Medicine, The University of Jordan, Amman, Jordan
- Ahmed B Sallam
- Harvey and Bernice Jones Eye Institute at the University of Arkansas for Medical Sciences (UAMS), Little Rock, AR, USA
16
Wu X, Wu Y, Tu Z, Cao Z, Xu M, Xiang Y, Lin D, Jin L, Zhao L, Zhang Y, Liu Y, Yan P, Hu W, Liu J, Liu L, Wang X, Wang R, Chen J, Xiao W, Shang Y, Xie P, Wang D, Zhang X, Dongye M, Wang C, Ting DSW, Liu Y, Pan R, Lin H. Cost-effectiveness and cost-utility of a digital technology-driven hierarchical healthcare screening pattern in China. Nat Commun 2024; 15:3650. [PMID: 38688925] [PMCID: PMC11061155] [DOI: 10.1038/s41467-024-47211-w]
Abstract
Utilization of digital technologies for cataract screening in primary care is a potential solution to the dilemma between a growing aging population and unequally distributed resources. Here, we propose a digital technology-driven hierarchical screening (DH screening) pattern implemented in China to promote the equity and accessibility of healthcare. It consists of home-based mobile artificial intelligence (AI) screening, community-based AI diagnosis, and referral to hospitals. We utilize decision-analytic Markov models to evaluate the cost-effectiveness and cost-utility of different cataract screening strategies (no screening, telescreening, AI screening, and DH screening). A simulated cohort of 100,000 individuals from age 50 is built over 30 one-year Markov cycles. The primary outcomes are the incremental cost-effectiveness ratio and the incremental cost-utility ratio. The results show that DH screening dominates no screening, telescreening, and AI screening in urban and rural China. Annual DH screening emerges as the most economically effective strategy, with 341 (338 to 344) and 1326 (1312 to 1340) years of blindness avoided compared with telescreening, and 37 (35 to 39) and 140 (131 to 148) years compared with AI screening, in urban and rural settings respectively. The findings remain robust across all sensitivity analyses conducted. Here, we report that DH screening is cost-effective in urban and rural China, and annual screening proves to be the most cost-effective option, providing an economic rationale for policymakers promoting public eye health in low- and middle-income countries.
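Decision-analytic Markov models of the kind described in this abstract reduce to two building blocks: advancing a cohort through annual transition cycles and comparing strategies with an incremental cost-effectiveness ratio (ICER). Below is a minimal sketch; the health states, transition probabilities, costs, and effects are hypothetical placeholders, not the study's parameters.

```python
def markov_cycle(state_counts, transition):
    """Advance a cohort one cycle: transition[i][j] is the probability of
    moving from state i to state j within one year."""
    n = len(state_counts)
    return [sum(state_counts[i] * transition[i][j] for i in range(n)) for j in range(n)]

def icer(cost_a, effect_a, cost_b, effect_b):
    """Incremental cost-effectiveness ratio of strategy A versus comparator B:
    extra cost per extra unit of effect (e.g. per year of blindness avoided)."""
    return (cost_a - cost_b) / (effect_a - effect_b)

# States: sighted, cataract-blind, dead (hypothetical annual probabilities;
# each row sums to 1, and death is absorbing).
transition = [
    [0.93, 0.05, 0.02],
    [0.10, 0.85, 0.05],
    [0.00, 0.00, 1.00],
]
cohort = [100_000.0, 0.0, 0.0]
for _ in range(30):  # 30 one-year Markov cycles, mirroring the study design
    cohort = markov_cycle(cohort, transition)

# Hypothetical totals for DH screening (A) versus telescreening (B).
icer_value = icer(cost_a=2_000_000, effect_a=900, cost_b=1_500_000, effect_b=600)
```

A strategy "dominates" another, as DH screening is reported to do, when it is both cheaper and more effective, so no ICER trade-off is needed.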
Affiliation(s)
- Xiaohang Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Yuxuan Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Zhenjun Tu
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Zizheng Cao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Miaohong Xu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Yifan Xiang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Duoru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Ling Jin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Lanqin Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Yingzhe Zhang
- Department of Epidemiology, Harvard T.H. Chan School of Public Health, Boston, Massachusetts, USA
| | - Yu Liu
- School of Public Health and Management, Guangzhou University of Chinese Medicine, Guangzhou, Guangdong, China
| | - Pisong Yan
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Weiling Hu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Jiali Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Lixue Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Xun Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Ruixin Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Jieying Chen
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Wei Xiao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Yuanjun Shang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Peichen Xie
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Dongni Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Xulin Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Meimei Dongye
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Chenxinqi Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
| | - Daniel Shu Wei Ting
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Duke-National University of Singapore Medical School, Singapore, Singapore
| | - Yizhi Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China.
| | - Rong Pan
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, Guangdong, China.
| | - Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China.
- Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou, Hainan, China.
- Center for Precision Medicine and Department of Genetics and Biomedical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China.
| |
|
17
|
Kalaw FGP, Cavichini M, Zhang J, Wen B, Lin AC, Heinke A, Nguyen T, An C, Bartsch DUG, Cheng L, Freeman WR. Ultra-wide field and new wide field composite retinal image registration with AI-enabled pipeline and 3D distortion correction algorithm. Eye (Lond) 2024; 38:1189-1195. [PMID: 38114568 PMCID: PMC11009222 DOI: 10.1038/s41433-023-02868-3] [Received: 06/01/2023] [Revised: 11/07/2023] [Accepted: 11/22/2023] [Indexed: 12/21/2023] Open
Abstract
PURPOSE This study aimed to compare a new Artificial Intelligence (AI) method to conventional mathematical warping in accurately overlaying peripheral retinal vessels from two different imaging devices: confocal scanning laser ophthalmoscope (cSLO) wide-field images and SLO ultra-wide field images. METHODS Images were captured using the Heidelberg Spectralis 55-degree field-of-view and Optos ultra-wide field. Conventional mathematical warping was performed using Random Sample Consensus-Sample and Consensus sets (RANSAC-SC). This was compared to an AI alignment algorithm based on a one-way forward registration procedure consisting of full Convolutional Neural Networks (CNNs) with Outlier Rejection (OR CNN), as well as an iterative 3D camera pose optimization process (OR CNN + Distortion Correction [DC]). Images were provided in a checkerboard pattern, and peripheral vessels were graded in four quadrants based on alignment to the adjacent box. RESULTS A total of 660 boxes were analysed from 55 eyes. Dice scores were compared between the three methods (RANSAC-SC/OR CNN/OR CNN + DC): 0.3341/0.4665/0.4784 for fold 1-2 and 0.3315/0.4494/0.4596 for fold 2-1 in composite images. The images composed using OR CNN + DC had a median rating of 4 (out of 5) versus 2 using RANSAC-SC. The odds of getting a higher grading level are 4.8 times higher using OR CNN + DC than RANSAC-SC (p < 0.0001). CONCLUSION Peripheral retinal vessel alignment performed better using the AI algorithm than RANSAC-SC. This may help improve co-localization of retinal anatomy and pathology with the algorithm.
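The Dice scores reported above quantify pixel overlap between two aligned vessel segmentations. A minimal, self-contained illustration on toy binary masks (not the study's images or pipeline):

```python
# Dice similarity coefficient between two binary masks:
# Dice = 2 * |A ∩ B| / (|A| + |B|). Toy example only.

def dice(mask_a, mask_b):
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0

# Two flattened 3x3 "vessel" masks that partially overlap.
a = [1, 1, 0, 1, 0, 0, 0, 0, 0]
b = [1, 0, 0, 1, 1, 0, 0, 0, 0]
score = dice(a, b)  # 2*2 / (3+3) = 0.666...
```

A Dice score of 0 means no overlap and 1 means perfect overlap, which is why the higher OR CNN + DC values indicate better vessel alignment.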
Affiliation(s)
- Fritz Gerald P Kalaw
- Jacobs Retina Center, University of California, San Diego, CA, USA
- Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, CA, USA
- Division of Ophthalmology Informatics and Data Science, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, CA, USA
| | - Melina Cavichini
- Jacobs Retina Center, University of California, San Diego, CA, USA
- Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, CA, USA
| | - Junkang Zhang
- Department of Electrical and Computer Engineering, University of California, San Diego, CA, USA
| | - Bo Wen
- Department of Electrical and Computer Engineering, University of California, San Diego, CA, USA
| | - Andrew C Lin
- Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, CA, USA
| | - Anna Heinke
- Jacobs Retina Center, University of California, San Diego, CA, USA
- Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, CA, USA
| | - Truong Nguyen
- Department of Electrical and Computer Engineering, University of California, San Diego, CA, USA
| | - Cheolhong An
- Department of Electrical and Computer Engineering, University of California, San Diego, CA, USA
| | | | - Lingyun Cheng
- Jacobs Retina Center, University of California, San Diego, CA, USA
| | - William R Freeman
- Jacobs Retina Center, University of California, San Diego, CA, USA.
- Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, CA, USA.
- Division of Ophthalmology Informatics and Data Science, Viterbi Family Department of Ophthalmology and Shiley Eye Institute, University of California, San Diego, CA, USA.
- Department of Electrical and Computer Engineering, University of California, San Diego, CA, USA.
| |
|
18
|
Zhou T, Gu S, Shao F, Li P, Wu Y, Xiong J, Wang B, Zhou C, Gao P, Hua X. Prediction of preeclampsia from retinal fundus images via deep learning in singleton pregnancies: a prospective cohort study. J Hypertens 2024; 42:701-710. [PMID: 38230614 PMCID: PMC10906188 DOI: 10.1097/hjh.0000000000003658] [Received: 08/22/2023] [Revised: 12/01/2023] [Accepted: 12/30/2023] [Indexed: 01/18/2024]
Abstract
INTRODUCTION Early prediction of preeclampsia (PE) is of universal importance in controlling the disease process. Our study aimed to assess the feasibility of using retinal fundus images to predict preeclampsia via deep learning in singleton pregnancies. METHODS This prospective cohort study was conducted at Shanghai First Maternity and Infant Hospital, Tongji University School of Medicine. Eligible participants were women with singleton pregnancies who presented for prenatal visits before 14 weeks of gestation from September 1, 2020, to February 1, 2022. Retinal fundus images were obtained using a nonmydriatic digital retinal camera during the initial prenatal visit upon admission before 20 weeks of gestation. In addition, we generated fundus scores, which indicated the predictive value for hypertension, using a hypertension detection model. To evaluate the predictive value of the retinal fundus image-based deep learning algorithm for preeclampsia, we conducted stratified analyses and measured the area under the curve (AUC), sensitivity, and specificity. We then conducted sensitivity analyses for validation. RESULTS Our study analyzed a total of 1138 women, of whom 92 developed hypertensive disorders of pregnancy (HDP), including 26 cases of gestational hypertension and 66 cases of preeclampsia. The adjusted odds ratio (aOR) of the fundus scores was 2.582 (95% CI, 1.883-3.616; P < 0.001). In the prepregnancy BMI categories of less than 28.0 and at least 28.0, the aORs were 3.073 (95% CI, 2.265-4.244; P < 0.001) and 5.866 (95% CI, 3.292-11.531; P < 0.001), respectively. In the maternal age categories of less than 35.0 and at least 35.0 years, the aORs were 2.845 (95% CI, 1.854-4.463; P < 0.001) and 2.884 (95% CI, 1.794-4.942; P < 0.001), respectively. The AUC of the fundus score combined with risk factors was 0.883 (sensitivity, 0.722; specificity, 0.934; 95% CI, 0.834-0.932) for predicting preeclampsia.
CONCLUSION Our study demonstrates that a deep learning algorithm based on retinal fundus images offers promising predictive value for the early detection of preeclampsia.
Affiliation(s)
- Tianfan Zhou
- Department of Obstetrics, Shanghai Key Laboratory of Maternal Fetal Medicine, Shanghai Institute of Maternal-Fetal Medicine and Gynecologic Oncology, Shanghai First Maternity and Infant Hospital, School of Medicine, Tongji University
| | - Shengyi Gu
- Department of Obstetrics, Shanghai Key Laboratory of Maternal Fetal Medicine, Shanghai Institute of Maternal-Fetal Medicine and Gynecologic Oncology, Shanghai First Maternity and Infant Hospital, School of Medicine, Tongji University
| | - Feixue Shao
- Department of Obstetrics, Shanghai Key Laboratory of Maternal Fetal Medicine, Shanghai Institute of Maternal-Fetal Medicine and Gynecologic Oncology, Shanghai First Maternity and Infant Hospital, School of Medicine, Tongji University
| | - Ping Li
- Department of Ophthalmology, Shanghai Tenth People's Hospital, School of Medicine, Tongji University
| | - Yuelin Wu
- Department of Obstetrics, Shanghai Key Laboratory of Maternal Fetal Medicine, Shanghai Institute of Maternal-Fetal Medicine and Gynecologic Oncology, Shanghai First Maternity and Infant Hospital, School of Medicine, Tongji University
| | | | - Bin Wang
- Beijing Airdoc Technology Co., Ltd., Beijing, China
| | - Chenchen Zhou
- Department of Obstetrics, Shanghai Key Laboratory of Maternal Fetal Medicine, Shanghai Institute of Maternal-Fetal Medicine and Gynecologic Oncology, Shanghai First Maternity and Infant Hospital, School of Medicine, Tongji University
| | - Peng Gao
- Department of Ophthalmology, Shanghai Tenth People's Hospital, School of Medicine, Tongji University
| | - Xiaolin Hua
- Department of Obstetrics, Shanghai Key Laboratory of Maternal Fetal Medicine, Shanghai Institute of Maternal-Fetal Medicine and Gynecologic Oncology, Shanghai First Maternity and Infant Hospital, School of Medicine, Tongji University
| |
|
19
|
Li Q, Tan J, Xie H, Zhang X, Dai Q, Li Z, Yan LL, Chen W. Evaluating the accuracy of the Ophthalmologist Robot for multiple blindness-causing eye diseases: a multicentre, prospective study protocol. BMJ Open 2024; 14:e077859. [PMID: 38431298 PMCID: PMC10910653 DOI: 10.1136/bmjopen-2023-077859] [Received: 07/17/2023] [Accepted: 01/12/2024] [Indexed: 03/05/2024] Open
Abstract
INTRODUCTION Early eye screening and treatment can reduce the incidence of blindness by detecting and addressing eye diseases at an early stage. The Ophthalmologist Robot is an automated device that can simultaneously capture ocular surface and fundus images without the need for ophthalmologists, making it highly suitable for primary care application. However, the accuracy of the device's screening capabilities requires further validation. This study aims to evaluate and compare the screening accuracies of ophthalmologists and deep learning models using images captured by the Ophthalmologist Robot, in order to identify a screening method that is both highly accurate and cost-effective. Our findings may provide valuable insights into the potential applications of remote eye screening. METHODS AND ANALYSIS This is a multicentre, prospective study that will recruit approximately 1578 participants from 3 hospitals. All participants will undergo ocular surface and fundus imaging with the Ophthalmologist Robot. Additionally, 695 participants will have their ocular surface imaged with a slit lamp. Relevant information from outpatient medical records will be collected. The primary objective is to evaluate the accuracy of ophthalmologists' screening for multiple blindness-causing eye diseases using device images through receiver operating characteristic curve analysis. The targeted diseases include keratitis, corneal scar, cataract, diabetic retinopathy, age-related macular degeneration, glaucomatous optic neuropathy and pathological myopia. The secondary objective is to assess the accuracy of deep learning models in disease screening. Furthermore, the study aims to compare the consistency between the Ophthalmologist Robot and the slit lamp in screening for keratitis and corneal scar using the Kappa test.
Additionally, the cost-effectiveness of three eye screening methods, based on non-telemedicine screening, ophthalmologist-telemedicine screening and artificial intelligence-telemedicine screening, will be assessed by constructing Markov models. ETHICS AND DISSEMINATION The study has obtained approval from the ethics committee of the Ophthalmology and Optometry Hospital of Wenzhou Medical University (reference: 2023-026 K-21-01). This work will be disseminated by peer-review publications, abstract presentations at national and international conferences and data sharing with other researchers. TRIAL REGISTRATION NUMBER ChiCTR2300070082.
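The Kappa test mentioned in this protocol corrects raw agreement between two raters (here, device versus slit lamp) for agreement expected by chance. A minimal sketch of Cohen's kappa on toy binary labels (invented data, not trial results):

```python
# Cohen's kappa for agreement between two raters on binary labels:
# kappa = (p_observed - p_chance) / (1 - p_chance). Toy data only.

def cohen_kappa(r1, r2):
    n = len(r1)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n
    # Chance agreement from each rater's marginal positive rate.
    p1, p2 = sum(r1) / n, sum(r2) / n
    p_chance = p1 * p2 + (1 - p1) * (1 - p2)
    return (p_obs - p_chance) / (1 - p_chance)

device = [1, 1, 0, 0, 1, 0, 1, 0]
slit_lamp = [1, 1, 0, 0, 0, 0, 1, 1]
k = cohen_kappa(device, slit_lamp)  # (0.75 - 0.5) / (1 - 0.5) = 0.5
```

Kappa of 1 indicates perfect agreement and 0 indicates agreement no better than chance, which is why it is preferred over raw percent agreement for consistency studies.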
Affiliation(s)
- Qixin Li
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
| | - Jie Tan
- Global Health Research Center, Duke Kunshan University, Kunshan, China
- School of Public Health, Wuhan University, Wuhan, China
| | - He Xie
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
| | - Xiaoyu Zhang
- School of Public Health and Management, Wenzhou Medical University, Wenzhou, China
| | - Qi Dai
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
| | - Zhongwen Li
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
| | - Lijing L Yan
- Global Health Research Center, Duke Kunshan University, Kunshan, China
- School of Public Health, Wuhan University, Wuhan, China
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
- Peking University Institute for Global Health and Development, Peking University, Beijing, China
| | - Wei Chen
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
| |
|
20
|
Liu Y, Xie H, Zhao X, Tang J, Yu Z, Wu Z, Tian R, Chen Y, Chen M, Ntentakis DP, Du Y, Chen T, Hu Y, Zhang S, Lei B, Zhang G. Automated detection of nine infantile fundus diseases and conditions in retinal images using a deep learning system. EPMA J 2024; 15:39-51. [PMID: 38463622 PMCID: PMC10923762 DOI: 10.1007/s13167-024-00350-y] [Received: 09/01/2023] [Accepted: 01/21/2024] [Indexed: 03/12/2024]
Abstract
Purpose We developed an Infant Retinal Intelligent Diagnosis System (IRIDS), an automated system to aid early diagnosis and monitoring of infantile fundus diseases and health conditions to satisfy urgent needs of ophthalmologists. Methods We developed IRIDS by combining convolutional neural networks and transformer structures, using a dataset of 7697 retinal images (1089 infants) from four hospitals. It identifies nine fundus diseases and conditions, namely, retinopathy of prematurity (ROP) (mild ROP, moderate ROP, and severe ROP), retinoblastoma (RB), retinitis pigmentosa (RP), Coats disease, coloboma of the choroid, congenital retinal fold (CRF), and normal. IRIDS also includes depth attention modules, ResNet-18 (Res-18), and Multi-Axis Vision Transformer (MaxViT). Performance was compared to that of ophthalmologists using 450 retinal images. The IRIDS employed a five-fold cross-validation approach to generate the classification results. Results Several baseline models achieved the following metrics: accuracy, precision, recall, F1-score (F1), kappa, and area under the receiver operating characteristic curve (AUC) with best values of 94.62% (95% CI, 94.34%-94.90%), 94.07% (95% CI, 93.32%-94.82%), 90.56% (95% CI, 88.64%-92.48%), 92.34% (95% CI, 91.87%-92.81%), 91.15% (95% CI, 90.37%-91.93%), and 99.08% (95% CI, 99.07%-99.09%), respectively. In comparison, IRIDS showed promising results compared to ophthalmologists, demonstrating an average accuracy, precision, recall, F1, kappa, and AUC of 96.45% (95% CI, 96.37%-96.53%), 95.86% (95% CI, 94.56%-97.16%), 94.37% (95% CI, 93.95%-94.79%), 95.03% (95% CI, 94.45%-95.61%), 94.43% (95% CI, 93.96%-94.90%), and 99.51% (95% CI, 99.51%-99.51%), respectively, in multi-label classification on the test dataset, utilizing the Res-18 and MaxViT models. These results suggest that, particularly in terms of AUC, IRIDS achieved performance that warrants further investigation for the detection of retinal abnormalities. 
Conclusions IRIDS identifies nine infantile fundus diseases and conditions accurately. It may aid non-ophthalmologist personnel in underserved areas in infantile fundus disease screening, thereby helping to prevent severe complications. IRIDS serves as an example of integrating artificial intelligence into ophthalmology to achieve better outcomes in predictive, preventive, and personalized medicine (PPPM / 3PM) for infantile fundus diseases. Supplementary Information The online version contains supplementary material available at 10.1007/s13167-024-00350-y.
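The five-fold cross-validation mentioned in the abstract partitions the data so that every sample is used for validation exactly once. A generic sketch of the splitting logic (plain round-robin assignment, not the IRIDS pipeline, which would also need to keep images from the same infant in one fold):

```python
# Plain k-fold split (k=5) over sample indices: each index lands in
# exactly one validation fold; the rest form the training set.
# Illustrative only, not the study's implementation.

def kfold_indices(n_samples, k=5):
    splits = []
    for j in range(k):
        val = [i for i in range(n_samples) if i % k == j]
        train = [i for i in range(n_samples) if i % k != j]
        splits.append((train, val))
    return splits

splits = kfold_indices(10, k=5)
```

Metrics are then computed on each held-out fold and averaged, which is how the abstract's mean values and confidence intervals are typically produced.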
Affiliation(s)
- Yaling Liu
- Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040 China
| | - Hai Xie
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
| | - Xinyu Zhao
- Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040 China
| | - Jiannan Tang
- Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040 China
| | - Zhen Yu
- Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040 China
| | - Zhenquan Wu
- Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040 China
| | - Ruyin Tian
- Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040 China
| | - Yi Chen
- Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040 China
- Guizhou Medical University, Guiyang, Guizhou China
| | - Miaohong Chen
- Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040 China
- Guizhou Medical University, Guiyang, Guizhou China
| | - Dimitrios P. Ntentakis
- Retina Service, Ines and Fred Yeatts Retina Research Laboratory, Angiogenesis Laboratory, Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA USA
| | - Yueshanyi Du
- Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040 China
| | - Tingyi Chen
- Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040 China
- Guizhou Medical University, Guiyang, Guizhou China
| | - Yarou Hu
- Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040 China
| | - Sifan Zhang
- Guizhou Medical University, Guiyang, Guizhou China
- Southern University of Science and Technology School of Medicine, Shenzhen, China
| | - Baiying Lei
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
| | - Guoming Zhang
- Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040 China
- Guizhou Medical University, Guiyang, Guizhou China
| |
|
21
|
Gu C, Wang Y, Jiang Y, Xu F, Wang S, Liu R, Yuan W, Abudureyimu N, Wang Y, Lu Y, Li X, Wu T, Dong L, Chen Y, Wang B, Zhang Y, Wei WB, Qiu Q, Zheng Z, Liu D, Chen J. Application of artificial intelligence system for screening multiple fundus diseases in Chinese primary healthcare settings: a real-world, multicentre and cross-sectional study of 4795 cases. Br J Ophthalmol 2024; 108:424-431. [PMID: 36878715 PMCID: PMC10894824 DOI: 10.1136/bjo-2022-322940] [Received: 11/18/2022] [Accepted: 02/19/2023] [Indexed: 03/08/2023]
Abstract
BACKGROUND/AIMS This study evaluates the performance of the Airdoc retinal artificial intelligence system (ARAS) for detecting multiple fundus diseases in real-world scenarios in primary healthcare settings and investigates the fundus disease spectrum based on ARAS. METHODS This real-world, multicentre, cross-sectional study was conducted in Shanghai and Xinjiang, China. Six primary healthcare settings were included in this study. Colour fundus photographs were taken and graded by ARAS and retinal specialists. The performance of ARAS is described by its accuracy, sensitivity, specificity and positive and negative predictive values. The spectrum of fundus diseases in primary healthcare settings has also been investigated. RESULTS A total of 4795 participants were included. The median age was 57.0 (IQR 39.0-66.0) years, and 3175 (66.2%) participants were female. The accuracy, specificity and negative predictive value of ARAS for detecting normal fundus and 14 retinal abnormalities were high, whereas the sensitivity and positive predictive value varied in detecting different abnormalities. The proportion of retinal drusen, pathological myopia and glaucomatous optic neuropathy was significantly higher in Shanghai than in Xinjiang. Moreover, the percentages of referable diabetic retinopathy, retinal vein occlusion and macular oedema in middle-aged and elderly people in Xinjiang were significantly higher than in Shanghai. CONCLUSION This study demonstrated the dependability of ARAS for detecting multiple retinal diseases in primary healthcare settings. Implementing the AI-assisted fundus disease screening system in primary healthcare settings might be beneficial in reducing regional disparities in medical resources. However, the ARAS algorithm must be improved to achieve better performance. TRIAL REGISTRATION NUMBER NCT04592068.
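The accuracy, sensitivity, specificity and predictive values reported above all derive from a single 2x2 confusion matrix. A minimal sketch with hypothetical counts (not the study's results):

```python
# Screening metrics from a 2x2 confusion matrix.
# tp/fp/fn/tn counts below are hypothetical, NOT the study's data.

def screening_metrics(tp, fp, fn, tn):
    return {
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
        "sensitivity": tp / (tp + fn),  # diseased eyes correctly flagged
        "specificity": tn / (tn + fp),  # normal eyes correctly cleared
        "ppv":         tp / (tp + fp),  # positive predictive value
        "npv":         tn / (tn + fn),  # negative predictive value
    }

m = screening_metrics(tp=90, fp=40, fn=10, tn=860)
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on disease prevalence in the screened population, which is one reason they varied across the abnormalities detected by ARAS.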
Affiliation(s)
- Chufeng Gu
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine; National Clinical Research Center for Eye Diseases; Key Laboratory of Ocular Fundus Diseases; Engineering Center for Visual Science and Photomedicine; Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China
| | - Yujie Wang
- Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine; National Clinical Research Center for Eye Diseases; Key Laboratory of Ocular Fundus Diseases; Engineering Center for Visual Science and Photomedicine; Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China
| | - Yan Jiang
- Department of Ophthalmology, Shibei Hospital of Jing'an District, Shanghai, China
| | - Feiping Xu
- Department of Ophthalmology, Shibei Hospital of Jing'an District, Shanghai, China
| | - Shasha Wang
- Department of Ophthalmology, Shibei Hospital of Jing'an District, Shanghai, China
| | - Rui Liu
- Department of Ophthalmology, Shibei Hospital of Jing'an District, Shanghai, China
- Wen Yuan: Department of Ophthalmology, Shibei Hospital of Jing'an District, Shanghai, China
- Nurbiyimu Abudureyimu: Department of Ophthalmology, Bachu County Traditional Chinese Medicine Hospital of Kashgar, Xinjiang, China
- Ying Wang: Department of Ophthalmology, Bachu County People's Hospital of Kashgar, Xinjiang, China
- Yulan Lu: Department of Ophthalmology, Linfen Community Health Service Center of Jing'an District, Shanghai, China
- Xiaolong Li: Department of Ophthalmology, Pengpu New Village Community Health Service Center of Jing'an District, Shanghai, China
- Tao Wu: Department of Ophthalmology, Pengpu Town Community Health Service Center of Jing'an District, Shanghai, China
- Li Dong: Beijing Tongren Eye Center, Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Capital Medical University, Beijing, China
- Yuzhong Chen: Beijing Airdoc Technology Co., Ltd, Beijing, China
- Bin Wang: Beijing Airdoc Technology Co., Ltd, Beijing, China
- Wen Bin Wei: Beijing Tongren Eye Center, Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Capital Medical University, Beijing, China
- Qinghua Qiu: Department of Ophthalmology, Tong Ren Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Zhi Zheng: Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine; National Clinical Research Center for Eye Diseases; Key Laboratory of Ocular Fundus Diseases; Engineering Center for Visual Science and Photomedicine; Engineering Center for Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China
- Deng Liu: Bachu County People's Hospital of Kashgar, Xinjiang, China; Shanghai No. 3 Rehabilitation Hospital, Shanghai, China
- Jili Chen: Department of Ophthalmology, Shibei Hospital of Jing'an District, Shanghai, China
22
Wang Y, Liu C, Hu W, Luo L, Shi D, Zhang J, Yin Q, Zhang L, Han X, He M. Economic evaluation for medical artificial intelligence: accuracy vs. cost-effectiveness in a diabetic retinopathy screening case. NPJ Digit Med 2024; 7:43. PMID: 38383738; PMCID: PMC10881978; DOI: 10.1038/s41746-024-01032-9.
Abstract
Artificial intelligence (AI) models have shown great accuracy in health screening. For real-world implementation, however, high accuracy does not guarantee cost-effectiveness: improving an AI model's sensitivity finds more high-risk patients but may raise medical costs, while increasing specificity reduces unnecessary referrals but may weaken detection capability. To evaluate the trade-off between AI model performance and long-run cost-effectiveness, we conducted a cost-effectiveness analysis in a nationwide diabetic retinopathy (DR) screening program in China comprising 251,535 participants with diabetes over 30 years. We tested a validated AI model at 1100 different diagnostic performances (presented as sensitivity/specificity pairs) and modeled annual screening scenarios. The status quo was defined as the scenario with the most accurate AI performance. The incremental cost-effectiveness ratio (ICER) was calculated for the other scenarios against the status quo as the cost-effectiveness metric. Compared to the status quo (sensitivity/specificity: 93.3%/87.7%), six scenarios were cost-saving and seven were cost-effective. To be cost-saving or cost-effective, the AI model needed a minimum sensitivity of 88.2% and a minimum specificity of 80.4%. The most cost-effective AI model exhibited higher sensitivity (96.3%) and lower specificity (80.4%) than the status quo. In settings with higher DR prevalence and higher willingness-to-pay levels, the AI needed higher sensitivity for optimal cost-effectiveness. Urban regions and younger patient groups also required higher sensitivity in AI-based screening. In real-world DR screening, the most accurate AI model may not be the most cost-effective; cost-effectiveness should be evaluated independently and is most likely to be affected by the AI's sensitivity.
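The ICER metric the abstract relies on is a simple ratio; a minimal sketch of computing it and labeling scenarios against a status quo follows. All numbers, the cost-saving/cost-effective rules, and the willingness-to-pay (WTP) handling here are illustrative simplifications, not values or definitions taken from the study:

```python
def icer(cost, effect, ref_cost, ref_effect):
    """Incremental cost-effectiveness ratio versus a reference scenario.
    Effect is typically measured in quality-adjusted life years (QALYs)."""
    d_effect = effect - ref_effect
    if d_effect == 0:
        raise ValueError("no incremental effect; ICER undefined")
    return (cost - ref_cost) / d_effect

def classify(cost, effect, ref_cost, ref_effect, wtp):
    """Label a scenario relative to the status quo, given a WTP
    threshold per unit of effect (simplified decision rule)."""
    if cost <= ref_cost and effect >= ref_effect:
        return "cost-saving"          # cheaper and at least as effective
    if effect > ref_effect and icer(cost, effect, ref_cost, ref_effect) <= wtp:
        return "cost-effective"       # extra cost per QALY within WTP
    return "not cost-effective"
```

For example, a scenario costing 100 more but gaining 0.2 QALYs over the status quo has an ICER of 500 per QALY, which is cost-effective under any WTP above that value.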
Affiliation(s)
- Yueye Wang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Chi Liu: Faculty of Data Science, City University of Macau, Macao SAR, China
- Wenyi Hu: Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, VIC, Australia
- Lixia Luo: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Danli Shi: School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Jian Zhang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Qiuxia Yin: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Lei Zhang: Clinical Medical Research Center, Children's Hospital of Nanjing Medical University, Nanjing, Jiangsu, 210008, China; Melbourne Sexual Health Centre, Alfred Health, Melbourne, VIC, Australia; Central Clinical School, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, VIC, Australia
- Xiaotong Han: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Mingguang He: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China; School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong; Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, Hong Kong; Centre for Eye and Vision Research (CEVR), 17W Hong Kong Science Park, Shatin, Hong Kong
23
Yao Y, Yang J, Sun H, Kong H, Wang S, Xu K, Dai W, Jiang S, Bai Q, Xing S, Yuan J, Liu X, Lu F, Chen Z, Qu J, Su J. DeepGraFT: A novel semantic segmentation auxiliary ROI-based deep learning framework for effective fundus tessellation classification. Comput Biol Med 2024; 169:107881. PMID: 38159401; DOI: 10.1016/j.compbiomed.2023.107881.
Abstract
Fundus tessellation (FT) is a prevalent clinical feature associated with myopia and is implicated in the development of myopic maculopathy, which causes irreversible visual impairment. Accurate classification of FT in color fundus photographs can help predict disease progression and prognosis. However, the lack of precise detection and classification tools has created an unmet medical need, underscoring the importance of exploring the clinical utility of FT. To address this gap, we introduce an automatic FT grading system (DeepGraFT) that uses classification-and-segmentation co-decision models based on deep learning. ConvNeXt, with transfer learning from pretrained ImageNet weights, was employed for the classification algorithm, aligned with a region of interest based on the ETDRS grading system to boost performance. A segmentation model was developed to detect FT regions, complementing the classification for improved grading accuracy. The training set of DeepGraFT came from our in-house cohort (MAGIC), and the validation sets consisted of the remainder of the in-house cohort and an independent public cohort (UK Biobank). DeepGraFT performed well in the training stage and achieved high accuracy in the validation phase (in-house cohort: 86.85%; public cohort: 81.50%). Furthermore, DeepGraFT surpassed machine-learning-based classification models in FT classification, with a 5.57% increase in accuracy. Ablation analysis revealed that the introduced modules significantly enhanced classification effectiveness, raising accuracy from 79.85% to 86.85%. Further analysis of the results provided by DeepGraFT revealed a significant negative association between FT and spherical equivalent (SE) in the UK Biobank cohort.
In conclusion, DeepGraFT demonstrates the potential of deep learning to automate FT grading and could serve as a clinical decision support tool for predicting the progression of pathological myopia.
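The classification-and-segmentation co-decision idea can be illustrated with a toy sketch: a classifier proposes a grade, and a segmentation-derived FT area fraction inside the ROI can override it when the classifier is unsure. The fusion rule, grade count, and thresholds below are illustrative assumptions, not DeepGraFT's actual algorithm:

```python
def co_decision(class_probs, seg_mask, roi_mask, upgrade_margin=0.15):
    """Fuse classifier probabilities over FT grades 0..3 with the
    tessellated-area fraction inside an ETDRS-style ROI (all values
    here are illustrative)."""
    # Classifier's top grade and its confidence.
    grade = max(range(len(class_probs)), key=lambda g: class_probs[g])
    # Fraction of ROI pixels the segmentation marks as tessellated.
    roi = [p for p, m in zip(seg_mask, roi_mask) if m]
    area_frac = sum(roi) / len(roi) if roi else 0.0
    # Illustrative minimum area fractions for grades 0..3.
    thresholds = [0.0, 0.05, 0.25, 0.50]
    seg_grade = max(g for g, t in enumerate(thresholds) if area_frac >= t)
    # Co-decision: if the two disagree and the classifier is not
    # confident, defer to the segmentation-derived grade.
    if seg_grade != grade and class_probs[grade] < 0.5 + upgrade_margin:
        return seg_grade
    return grade
```

With a confident classifier the segmentation acts only as a check; with a hesitant one (top probability below the margin), the area-based grade wins.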
Affiliation(s)
- Yinghao Yao: Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital, Wenzhou Medical University, Wenzhou, 325011, Zhejiang, China; National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China
- Jiaying Yang: Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital, Wenzhou Medical University, Wenzhou, 325011, Zhejiang, China; National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China
- Haojun Sun: Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital, Wenzhou Medical University, Wenzhou, 325011, Zhejiang, China; National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China
- Hengte Kong: Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital, Wenzhou Medical University, Wenzhou, 325011, Zhejiang, China; National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China
- Sheng Wang: Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital, Wenzhou Medical University, Wenzhou, 325011, Zhejiang, China; National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China
- Ke Xu: National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China
- Wei Dai: National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China
- Siyi Jiang: Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital, Wenzhou Medical University, Wenzhou, 325011, Zhejiang, China; National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China
- QingShi Bai: Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital, Wenzhou Medical University, Wenzhou, 325011, Zhejiang, China; National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China
- Shilai Xing: Institute of PSI Genomics, Wenzhou Global Eye & Vision Innovation Center, Wenzhou, 325024, China
- Jian Yuan: National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China
- Xinting Liu: National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- Fan Lu: Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital, Wenzhou Medical University, Wenzhou, 325011, Zhejiang, China; National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- Zhenhui Chen: National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- Jia Qu: Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital, Wenzhou Medical University, Wenzhou, 325011, Zhejiang, China; National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- Jianzhong Su: Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital, Wenzhou Medical University, Wenzhou, 325011, Zhejiang, China; National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
24
Alves VM, dos Santos Cardoso J, Gama J. Classification of Pulmonary Nodules in 2-[18F]FDG PET/CT Images with a 3D Convolutional Neural Network. Nucl Med Mol Imaging 2024; 58:9-24. PMID: 38261899; PMCID: PMC10796312; DOI: 10.1007/s13139-023-00821-6.
Abstract
Purpose 2-[18F]FDG PET/CT plays an important role in the management of pulmonary nodules. Convolutional neural networks (CNNs) automatically learn features from images and have the potential to improve the discrimination between malignant and benign pulmonary nodules. The purpose of this study was to develop and validate a CNN model for the classification of pulmonary nodules from 2-[18F]FDG PET images. Methods One hundred thirteen participants were retrospectively selected, with one nodule per participant. The 2-[18F]FDG PET images were preprocessed and annotated with the reference standard. The deep learning experiment entailed randomly splitting the data into five sets. A test set was held out for evaluation of the final model. Four-fold cross-validation was performed on the remaining sets to train and evaluate a set of candidate models and to select the final model. Models of three types of 3D CNN architectures were trained from random weight initialization (Stacked 3D CNN, VGG-like and Inception-v2-like models) on both the original and augmented datasets. Transfer learning from ImageNet with ResNet-50 was also used. Results The final model (Stacked 3D CNN model) obtained an area under the ROC curve of 0.8385 (95% CI: 0.6455-1.0000) on the test set. The model had a sensitivity of 80.00%, a specificity of 69.23% and an accuracy of 73.91% on the test set, at an optimised decision threshold that assigns a higher cost to false negatives. Conclusion A 3D CNN model was effective at distinguishing benign from malignant pulmonary nodules in 2-[18F]FDG PET images. Supplementary Information The online version contains supplementary material available at 10.1007/s13139-023-00821-6.
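Choosing a decision threshold that penalises false negatives more than false positives, as this abstract describes, can be sketched as follows. The cost ratio, scores, and labels below are illustrative placeholders, not values from the study:

```python
def metrics(y_true, y_score, threshold):
    """Sensitivity, specificity and accuracy at a given score threshold.
    Assumes both classes are present in y_true."""
    tp = sum(1 for y, s in zip(y_true, y_score) if y == 1 and s >= threshold)
    fn = sum(1 for y, s in zip(y_true, y_score) if y == 1 and s < threshold)
    tn = sum(1 for y, s in zip(y_true, y_score) if y == 0 and s < threshold)
    fp = sum(1 for y, s in zip(y_true, y_score) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp), (tp + tn) / len(y_true)

def best_threshold(y_true, y_score, fn_cost=5.0, fp_cost=1.0):
    """Pick, among observed scores, the threshold minimising a total
    cost that weighs false negatives more heavily than false positives."""
    def cost(t):
        fn = sum(1 for y, s in zip(y_true, y_score) if y == 1 and s < t)
        fp = sum(1 for y, s in zip(y_true, y_score) if y == 0 and s >= t)
        return fn_cost * fn + fp_cost * fp
    return min(sorted(set(y_score)), key=cost)
```

Raising `fn_cost` pushes the chosen threshold down, trading specificity for sensitivity, which is exactly the asymmetry the study's optimised threshold encodes.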
Affiliation(s)
- Victor Manuel Alves: Faculty of Economics, University of Porto, Rua Dr. Roberto Frias, 4200-464 Porto, Portugal; Department of Nuclear Medicine, University Hospital Center of São João, Alameda Prof. Hernâni Monteiro, 4200-319 Porto, Portugal
- Jaime dos Santos Cardoso: Faculty of Engineering, University of Porto, Rua Dr. Roberto Frias, 4200-465 Porto, Portugal; Institute for Systems and Computer Engineering, Technology and Science (INESC TEC), Rua Dr. Roberto Frias, 4200-465 Porto, Portugal
- João Gama: Faculty of Economics, University of Porto, Rua Dr. Roberto Frias, 4200-464 Porto, Portugal; Institute for Systems and Computer Engineering, Technology and Science (INESC TEC), Rua Dr. Roberto Frias, 4200-465 Porto, Portugal
25
Tripathi S, Tabari A, Mansur A, Dabbara H, Bridge CP, Daye D. From Machine Learning to Patient Outcomes: A Comprehensive Review of AI in Pancreatic Cancer. Diagnostics (Basel) 2024; 14:174. PMID: 38248051; PMCID: PMC10814554; DOI: 10.3390/diagnostics14020174.
Abstract
Pancreatic cancer is a highly aggressive and difficult-to-detect cancer with a poor prognosis. Late diagnosis is common due to the lack of early symptoms and specific markers and the challenging location of the pancreas. Imaging technologies have improved diagnosis, but there is still room for improvement in standardizing guidelines. Biopsies and histopathological analysis remain challenging because of tumor heterogeneity. Artificial Intelligence (AI) is transforming healthcare by improving diagnosis, treatment, and patient care. AI algorithms can analyze medical images with precision, aiding early disease detection. AI also plays a role in personalized medicine by analyzing patient data to tailor treatment plans. It streamlines administrative tasks, such as medical coding and documentation, and provides patient assistance through AI chatbots. However, challenges remain around data privacy, security, and ethical considerations. This review focuses on the potential of AI to transform pancreatic cancer care, offering improved diagnostics, personalized treatments, and operational efficiency, leading to better patient outcomes.
Affiliation(s)
- Satvik Tripathi: Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA; Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA 02129, USA; Harvard Medical School, Boston, MA 02115, USA
- Azadeh Tabari: Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA; Harvard Medical School, Boston, MA 02115, USA
- Arian Mansur: Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA; Harvard Medical School, Boston, MA 02115, USA
- Harika Dabbara: Boston University Chobanian & Avedisian School of Medicine, Boston, MA 02118, USA
- Christopher P. Bridge: Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA; Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA 02129, USA; Harvard Medical School, Boston, MA 02115, USA
- Dania Daye: Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA; Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA 02129, USA; Harvard Medical School, Boston, MA 02115, USA
26
Li B, Chen H, Yu W, Zhang M, Lu F, Ma J, Hao Y, Li X, Hu B, Shen L, Mao J, He X, Wang H, Ding D, Li X, Chen Y. The performance of a deep learning system in assisting junior ophthalmologists in diagnosing 13 major fundus diseases: a prospective multi-center clinical trial. NPJ Digit Med 2024; 7:8. PMID: 38212607; PMCID: PMC10784504; DOI: 10.1038/s41746-023-00991-9.
Abstract
Artificial intelligence (AI)-based diagnostic systems have been reported to improve fundus disease screening in previous studies. This multicenter prospective self-controlled clinical trial aimed to evaluate the diagnostic performance of a deep learning system (DLS) in assisting junior ophthalmologists in detecting 13 major fundus diseases. A total of 1493 fundus images from 748 patients were prospectively collected from five tertiary hospitals in China. Nine junior ophthalmologists were trained and annotated the images with or without the suggestions proposed by the DLS. Diagnostic performance was evaluated among three groups: the DLS-assisted junior ophthalmologist group (test group), the junior ophthalmologist group (control group) and the DLS group. The diagnostic consistency was 84.9% (95% CI, 83.0%~86.9%), 72.9% (95% CI, 70.3%~75.6%) and 85.5% (95% CI, 83.5%~87.4%) in the test, control and DLS groups, respectively. With the help of the proposed DLS, the diagnostic consistency of junior ophthalmologists improved by approximately 12% (95% CI, 9.1%~14.9%), a statistically significant gain (P < 0.001). For the detection of the 13 diseases, the test group achieved significantly higher sensitivities (72.2%~100.0%) and comparable specificities (90.8%~98.7%) compared with the control group (sensitivities, 50%~100%; specificities, 96.7%~99.8%). The DLS group performed similarly to the test group in detecting any fundus abnormality (sensitivity, 95.7%; specificity, 87.2%) and each of the 13 diseases (sensitivity, 83.3%~100.0%; specificity, 89.0%~98.0%). The proposed DLS provides a novel approach to the automatic detection of 13 major fundus diseases with high diagnostic consistency and helped improve the performance of junior ophthalmologists, especially by reducing the risk of missed diagnoses. ClinicalTrials.gov NCT04723160.
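The consistency figures above are proportions reported with 95% confidence intervals. A normal-approximation (Wald) interval reproduces intervals of roughly this shape; this is a generic sketch, and the study may well have used a different interval (e.g. Wilson or bootstrap):

```python
import math

def wald_ci(successes, n, z=1.96):
    """Normal-approximation 95% CI for a proportion, clipped to [0, 1].
    Returns (point estimate, lower bound, upper bound)."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)  # half-width of the interval
    return p, max(0.0, p - half), min(1.0, p + half)
```

For instance, 849 consistent diagnoses out of 1000 images would give a point estimate of 84.9% with an interval of roughly ±2.2 percentage points, comparable in width to the intervals quoted in the abstract.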
Affiliation(s)
- Bing Li: Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
- Huan Chen: Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
- Weihong Yu: Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
- Ming Zhang: Department of Ophthalmology, West China Hospital, Sichuan University, Chengdu, China
- Fang Lu: Department of Ophthalmology, West China Hospital, Sichuan University, Chengdu, China
- Jingxue Ma: Department of Ophthalmology, Second Hospital of Hebei Medical University, Shijiazhuang, China
- Yuhua Hao: Department of Ophthalmology, Second Hospital of Hebei Medical University, Shijiazhuang, China
- Xiaorong Li: Department of Retina, Tianjin Medical University Eye Hospital, Tianjin, China
- Bojie Hu: Department of Retina, Tianjin Medical University Eye Hospital, Tianjin, China
- Lijun Shen: Department of Retina Center, Affiliated Eye Hospital of Wenzhou Medical University, Hangzhou, Zhejiang Province, China
- Jianbo Mao: Department of Retina Center, Affiliated Eye Hospital of Wenzhou Medical University, Hangzhou, Zhejiang Province, China
- Xixi He: School of Information Science and Technology, North China University of Technology, Beijing, China; Beijing Key Laboratory on Integration and Analysis of Large-scale Stream Data, Beijing, China
- Hao Wang: Visionary Intelligence Ltd., Beijing, China
- Xirong Li: MoE Key Lab of DEKE, Renmin University of China, Beijing, China
- Youxin Chen: Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
27
Liu L, Li M, Lin D, Yun D, Lin Z, Zhao L, Pang J, Li L, Wu Y, Shang Y, Lin H, Wu X. Protocol to analyze fundus images for multidimensional quality grading and real-time guidance using deep learning techniques. STAR Protoc 2023; 4:102565. PMID: 37733597; PMCID: PMC10519839; DOI: 10.1016/j.xpro.2023.102565.
Abstract
Data quality issues have been acknowledged as one of the greatest obstacles in medical artificial intelligence research. Here, we present DeepFundus, which employs deep learning techniques to perform multidimensional classification of fundus image quality and to provide real-time guidance for on-site image acquisition. We describe steps for data preparation, model training, model inference, model evaluation, and the visualization of results using heatmaps. This protocol can be implemented in Python using either the suggested dataset or a customized dataset. For complete details on the use and execution of this protocol, please refer to Liu et al.
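The real-time guidance step described above pairs each predicted quality dimension with an operator prompt. A toy sketch of that mapping follows; the dimension names and prompt texts are illustrative assumptions, not DeepFundus's actual labels or messages:

```python
# Illustrative quality dimensions and the prompt shown to the camera
# operator when a dimension is predicted as inadequate (assumed names).
GUIDANCE = {
    "illumination": "Adjust the flash intensity or room lighting.",
    "clarity": "Refocus; ask the patient to blink, then recapture.",
    "position": "Re-center the optic disc and macula in the frame.",
}

def realtime_guidance(quality_pred):
    """Map a {dimension: is_adequate} model prediction to a list of
    operator prompts. An empty list means the image passed on every
    known dimension."""
    return [GUIDANCE[d] for d, ok in quality_pred.items()
            if not ok and d in GUIDANCE]
```

In an acquisition loop, the camera software would call this after each inference pass and display the prompts until the image passes on all dimensions.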
Affiliation(s)
- Lixue Liu: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Mingyuan Li: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Duoru Lin: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Dongyuan Yun: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Zhenzhe Lin: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Lanqin Zhao: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Jianyu Pang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Longhui Li: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Yuxuan Wu: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Yuanjun Shang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Haotian Lin: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China; Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou, Hainan, China; Center for Precision Medicine and Department of Genetics and Biomedical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
- Xiaohang Wu: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
28
Rakers M, van de Vijver S, Bossio P, Moens N, Rauws M, Orera M, Shen H, Hallensleben C, Brakema E, Guldemond N, Chavannes NH, Villalobos-Quesada M. SERIES: eHealth in primary care. Part 6: Global perspectives: Learning from eHealth for low-resource primary care settings and across high-, middle- and low-income countries. Eur J Gen Pract 2023; 29:2241987. PMID: 37615720; PMCID: PMC10453992; DOI: 10.1080/13814788.2023.2241987.
Abstract
BACKGROUND eHealth offers opportunities to improve health and healthcare systems and to overcome primary care challenges in low-resource settings (LRS). LRS have typically been associated with low- and middle-income countries (LMIC), but they can also be found in high-income countries (HIC) when human, physical or financial resources are constrained. Adopting a concept of LRS that applies to both LMIC and HIC can facilitate knowledge interchange between eHealth initiatives while improving healthcare provision for socioeconomically disadvantaged groups across the globe. OBJECTIVES To outline the contributions and challenges of eHealth in low-resource primary care settings. STRATEGY We adopt a socio-ecological understanding of LRS, making the concept relevant to both LMIC and HIC. To assess the potential of eHealth in primary care settings, we discuss four case studies according to the WHO 'building blocks for strengthening healthcare systems'. RESULTS AND DISCUSSION The case studies illustrate eHealth's potential to improve the provision of healthcare by i) improving the delivery of healthcare (using AI-generated chats); ii) supporting the workforce (using telemedicine platforms); iii) strengthening the healthcare information system (through patient-centred healthcare information systems); and iv) improving system-related elements of healthcare (through a mobile health financing platform). Nevertheless, we found that development and implementation are hindered by user-related, technical, financial, regulatory and evaluation challenges. We formulated six recommendations to help anticipate or overcome these challenges: 1) evaluate eHealth's appropriateness; 2) know the end users; 3) establish evaluation methods; 4) prioritise the human component; 5) profit from collaborations and ensure sustainable financing and local ownership; and 6) contextualise and evaluate the implementation strategies.
Affiliation(s)
- Margot Rakers: Department of Public Health and Primary Care, Leiden University Medical Centre, Leiden, the Netherlands; National eHealth Living Lab (NELL), Leiden, the Netherlands
- Paz Bossio: Universidad Nacional de Jujuy, San Salvador de Jujuy, Argentina
- Nic Moens: Africa eHealth Foundation, Veenendaal, the Netherlands
- Hongxia Shen: Department of Public Health and Primary Care, Leiden University Medical Centre, Leiden, the Netherlands; School of Nursing Guangzhou, Guangzhou Medical University, Guangdong, China
- Cynthia Hallensleben: Department of Public Health and Primary Care, Leiden University Medical Centre, Leiden, the Netherlands; National eHealth Living Lab (NELL), Leiden, the Netherlands
- Evelyn Brakema: Department of Public Health and Primary Care, Leiden University Medical Centre, Leiden, the Netherlands; National eHealth Living Lab (NELL), Leiden, the Netherlands
- Niels H. Chavannes: Department of Public Health and Primary Care, Leiden University Medical Centre, Leiden, the Netherlands; National eHealth Living Lab (NELL), Leiden, the Netherlands
- María Villalobos-Quesada: Department of Public Health and Primary Care, Leiden University Medical Centre, Leiden, the Netherlands; National eHealth Living Lab (NELL), Leiden, the Netherlands
Collapse
|
29
|
Liao X, Yao C, Zhang J, Liu LZ. Recent advancement in integrating artificial intelligence and information technology with real-world data for clinical decision-making in China: A scoping review. J Evid Based Med 2023; 16:534-546. [PMID: 37772921] [DOI: 10.1111/jebm.12549] [Received: 05/24/2023; Accepted: 08/31/2023; Indexed: 09/30/2023]
Abstract
OBJECTIVE Striking innovations have been achieved by integrating artificial intelligence and healthcare information technology with clinical real-world data. This scoping review aimed to provide an overview of the current status of artificial intelligence-/information technology-based clinical decision support tools in China. METHODS PubMed/MEDLINE, Embase, China National Knowledge Internet, and Wanfang Data were searched for both English and Chinese literature. A gray literature search was conducted for commercially available tools. Original studies that focused on clinical decision support tools driven by artificial intelligence or information technology in China and were published between 2010 and February 2022 were included. Information extracted from each article was synthesized by theme according to three types of clinical decision-making. RESULTS A total of 37 peer-reviewed publications and 13 commercially available tools were included in the final analysis. Among them, 32.0% were developed for disease diagnosis, 54.0% for risk prediction and classification, and 14.0% for disease management. Chronic diseases were the most popular therapeutic areas of exploration, with particular emphasis on cardiovascular and cerebrovascular diseases. Single-center electronic medical records were the mainstream data source leveraged to inform clinical decision-making, with internal validation predominantly used for model evaluation. CONCLUSIONS To effectively promote the extensive use of real-world data and drive a paradigm shift in clinical decision-making in China, multidisciplinary collaboration among key stakeholders is urgently needed.
Affiliation(s)
- Xiwen Liao: Peking University Clinical Research Institute, Peking University First Hospital, Beijing, China
- Chen Yao: Peking University Clinical Research Institute, Peking University First Hospital, Beijing, China; Hainan Institute of Real World Data, Qionghai, Hainan, China
- Jun Zhang: Center for Observational and Real-world Evidence (CORE), MSD R&D (China) Co., Ltd., Beijing, China
- Larry Z Liu: Center for Observational and Real-world Evidence (CORE), Merck & Co Inc, Rahway, New Jersey, USA; Department of Population Health Sciences, Weill Cornell Medical College, New York City, New York, USA

30
Cui T, Lin D, Yu S, Zhao X, Lin Z, Zhao L, Xu F, Yun D, Pang J, Li R, Xie L, Zhu P, Huang Y, Huang H, Hu C, Huang W, Liang X, Lin H. Deep Learning Performance of Ultra-Widefield Fundus Imaging for Screening Retinal Lesions in Rural Locales. JAMA Ophthalmol 2023; 141:1045-1051. [PMID: 37856107] [PMCID: PMC10587822] [DOI: 10.1001/jamaophthalmol.2023.4650] [Received: 05/17/2023; Accepted: 08/27/2023; Indexed: 10/20/2023]
Abstract
Importance Retinal diseases are the leading cause of irreversible blindness worldwide, and timely detection contributes to prevention of permanent vision loss, especially for patients in rural areas with limited medical resources. Deep learning systems (DLSs) based on fundus images with a 45° field of view have been extensively applied in population screening, while the feasibility of using ultra-widefield (UWF) fundus image-based DLSs to detect retinal lesions in patients in rural areas warrants exploration. Objective To explore the performance of a DLS for multiple retinal lesion screening using UWF fundus images from patients in rural areas. Design, Setting, and Participants In this diagnostic study, a previously developed DLS based on UWF fundus images was used to screen for 5 retinal lesions (retinal exudates or drusen, glaucomatous optic neuropathy, retinal hemorrhage, lattice degeneration or retinal breaks, and retinal detachment) in 24 villages of Yangxi County, China, between November 17, 2020, and March 30, 2021. Interventions The captured images were analyzed by the DLS and ophthalmologists. Main Outcomes and Measures The performance of the DLS in rural screening was compared with that of the internal validation in the previous model development stage. The image quality, lesion proportion, and complexity of lesion composition were compared between the model development stage and the rural screening stage. Results A total of 6222 eyes in 3149 participants (1685 women [53.5%]; mean [SD] age, 70.9 [9.1] years) were screened. The DLS achieved a mean (SD) area under the receiver operating characteristic curve (AUC) of 0.918 (0.021) (95% CI, 0.892-0.944) for detecting 5 retinal lesions in the entire data set when applied for patients in rural areas, which was lower than that reported at the model development stage (AUC, 0.998 [0.002] [95% CI, 0.995-1.000]; P < .001). 
Compared with the fundus images in the model development stage, the fundus images in this rural screening study had an increased frequency of poor quality (13.8% [860 of 6222] vs 0%), increased variation in lesion proportions (0.1% [6 of 6222]-36.5% [2271 of 6222] vs 14.0% [2793 of 19 891]-21.3% [3433 of 16 138]), and an increased complexity of lesion composition. Conclusions and Relevance This diagnostic study suggests that the DLS exhibited excellent performance using UWF fundus images as a screening tool for 5 retinal lesions in patients in a rural setting. However, poor image quality, diverse lesion proportions, and a complex set of lesions may have reduced the performance of the DLS; these factors in targeted screening scenarios should be taken into consideration in the model development stage to ensure good performance.
Affiliation(s)
- Tingxin Cui, Duoru Lin, Shanshan Yu, Xinyu Zhao, Zhenzhe Lin, Lanqin Zhao: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Fabao Xu: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China; Department of Ophthalmology, Qilu Hospital, Shandong University, Jinan, China
- Dongyuan Yun, Jianyu Pang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China; School of Biomedical Engineering, Sun Yat-sen University, Guangzhou, China
- Ruiyang Li, Liqiong Xie: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Pengzhi Zhu: Greater Bay Area Center for Medical Device Evaluation and Inspection of National Medical Products Administration, Shenzhen, China
- Yuzhe Huang, Hongxin Huang, Changming Hu: Guangdong Medical Devices Quality Surveillance and Test Institute, Guangzhou, China
- Wenyong Huang, Xiaoling Liang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Haotian Lin: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China; School of Biomedical Engineering, Sun Yat-sen University, Guangzhou, China; Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou, China; Center for Precision Medicine and Department of Genetics and Biomedical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China

31
Thomas L, Hyde C, Mullarkey D, Greenhalgh J, Kalsi D, Ko J. Real-world post-deployment performance of a novel machine learning-based digital health technology for skin lesion assessment and suggestions for post-market surveillance. Front Med (Lausanne) 2023; 10:1264846. [PMID: 38020164] [PMCID: PMC10645139] [DOI: 10.3389/fmed.2023.1264846] [Received: 07/21/2023; Accepted: 10/10/2023; Indexed: 12/01/2023]
Abstract
Introduction Deep Ensemble for Recognition of Malignancy (DERM) is an artificial intelligence as a medical device (AIaMD) tool for skin lesion assessment. Methods We report prospective real-world performance from its deployment within skin cancer pathways at two National Health Service hospitals (UK) between July 2021 and October 2022. Results A total of 14,500 cases were seen, including patients 18-100 years old, with Fitzpatrick skin types I-VI represented. Based on 8,571 lesions assessed by DERM with confirmed outcomes, versions A and B demonstrated very high sensitivity for detecting melanoma (95.0-100.0%) or malignancy (96.0-100.0%). Benign lesion specificity was 40.7-49.4% (DERM-vA) and 70.1-73.4% (DERM-vB). DERM identified 15.0-31.0% of cases as eligible for discharge. Discussion We show that DERM's performance was in line with sensitivity targets and pre-marketing authorisation research, and that it reduced the caseload for hospital specialists in two pathways. Based on our experience, we offer suggestions on key elements of post-market surveillance for AIaMDs.
Affiliation(s)
- Lucy Thomas: Chelsea and Westminster Hospital NHS Foundation Trust, London, United Kingdom
- Chris Hyde: Exeter Test Group, Department of Health and Community Sciences, University of Exeter Medical School, Exeter, United Kingdom
- Justin Ko: Department of Dermatology, Stanford Medicine, Stanford, CA, United States

32
Zhao X, Lin Z, Yu S, Xiao J, Xie L, Xu Y, Tsui CK, Cui K, Zhao L, Zhang G, Zhang S, Lu Y, Lin H, Liang X, Lin D. An artificial intelligence system for the whole process from diagnosis to treatment suggestion of ischemic retinal diseases. Cell Rep Med 2023; 4:101197. [PMID: 37734379] [PMCID: PMC10591037] [DOI: 10.1016/j.xcrm.2023.101197] [Received: 01/07/2023; Revised: 05/29/2023; Accepted: 08/23/2023; Indexed: 09/23/2023]
Abstract
Ischemic retinal diseases (IRDs) are a series of common blinding diseases that depend on accurate fundus fluorescein angiography (FFA) image interpretation for diagnosis and treatment. An artificial intelligence system (Ai-Doctor) was developed to interpret FFA images. Ai-Doctor performed well in image phase identification (area under the curve [AUC], 0.991-0.999, range), diabetic retinopathy (DR) and branch retinal vein occlusion (BRVO) diagnosis (AUC, 0.979-0.992), and non-perfusion area segmentation (Dice similarity coefficient [DSC], 89.7%-90.1%) and quantification. The segmentation model was expanded to unencountered IRDs (central RVO and retinal vasculitis), with DSCs of 89.2% and 83.6%, respectively. A clinically applicable ischemia index (CAII) was proposed to evaluate ischemic degree; patients with CAII values exceeding 0.17 in BRVO and 0.08 in DR may be associated with increased possibility for laser therapy. Ai-Doctor is expected to achieve accurate FFA image interpretation for IRDs, potentially reducing the reliance on retinal specialists.
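The Dice similarity coefficient (DSC) used above to score the non-perfusion-area segmentation compares a predicted mask against a reference mask. A minimal sketch of the metric itself (illustrative toy masks; not the authors' implementation):

```python
def dice_coefficient(pred, truth):
    """DSC = 2|A ∩ B| / (|A| + |B|) for two flat binary masks (lists of 0/1).

    Returns 1.0 for perfect overlap and 0.0 for none; two empty masks are
    treated as perfect agreement.
    """
    if len(pred) != len(truth):
        raise ValueError("masks must have the same size")
    intersection = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    denom = sum(pred) + sum(truth)
    return 1.0 if denom == 0 else 2.0 * intersection / denom

# Toy 1D masks: 3 overlapping foreground pixels, mask sizes 4 and 3
pred = [1, 1, 1, 1, 0, 0]
truth = [1, 1, 1, 0, 0, 0]
print(dice_coefficient(pred, truth))  # 2*3/(4+3) ≈ 0.857
```

A DSC near 0.90, as reported, means the predicted and reference regions overlap in roughly 90% of their combined area.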
Affiliation(s)
- Xinyu Zhao: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China; Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen 518040, China
- Zhenzhe Lin, Shanshan Yu, Jun Xiao, Liqiong Xie, Yue Xu, Ching-Kit Tsui, Kaixuan Cui, Lanqin Zhao: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
- Guoming Zhang, Shaochong Zhang: Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen 518040, China
- Yan Lu: Foshan Second People's Hospital, Foshan 528001, China
- Haotian Lin: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China; Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou 570311, China; Center for Precision Medicine and Department of Genetics and Biomedical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou 510080, China
- Xiaoling Liang, Duoru Lin: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China

33
Marsden H, Morgan C, Austin S, DeGiovanni C, Venzi M, Kemos P, Greenhalgh J, Mullarkey D, Palamaras I. Effectiveness of an image analyzing AI-based Digital Health Technology to identify Non-Melanoma Skin Cancer and other skin lesions: results of the DERM-003 study. Front Med (Lausanne) 2023; 10:1288521. [PMID: 37869160] [PMCID: PMC10587678] [DOI: 10.3389/fmed.2023.1288521] [Received: 09/04/2023; Accepted: 09/18/2023; Indexed: 10/24/2023]
Abstract
Introduction Identification of skin cancer by an Artificial Intelligence (AI)-based Digital Health Technology could help improve the triage and management of suspicious skin lesions. Methods The DERM-003 study (NCT04116983) was a prospective, multi-center, single-arm, masked study that aimed to demonstrate the effectiveness of an AI as a Medical Device (AIaMD) to identify Squamous Cell Carcinoma (SCC), Basal Cell Carcinoma (BCC), and pre-malignant and benign lesions from dermoscopic images of suspicious skin lesions. Suspicious skin lesions that were suitable for photography were photographed with 3 smartphone cameras (iPhone 6S, iPhone 11, Samsung 10) fitted with a DL1 dermoscopic lens attachment. Dermatologists provided clinical diagnoses, and histopathology results were obtained for biopsied lesions. Each image was assessed by the AIaMD and the output compared to the ground-truth diagnosis. Results 572 patients (49.5% female, mean age 68.5 years, 96.9% Fitzpatrick skin types I-III) were recruited from 4 UK NHS Trusts, providing images of 611 suspicious lesions. 395 (64.6%) lesions were biopsied; 47 (11%) were diagnosed as SCC and 184 (44%) as BCC. The AIaMD AUROC on images taken by the iPhone 6S was 0.88 (95% CI, 0.83-0.93) for SCC and 0.87 (95% CI, 0.84-0.91) for BCC. For the Samsung 10 the AUROCs were 0.85 (95% CI, 0.79-0.90) and 0.87 (95% CI, 0.83-0.90), and for the iPhone 11 they were 0.88 (95% CI, 0.84-0.93) and 0.89 (95% CI, 0.86-0.92) for SCC and BCC, respectively. Using pre-determined diagnostic thresholds on images taken with the iPhone 6S, the AIaMD achieved a sensitivity and specificity of 98% (95% CI, 88-100%) and 38% (95% CI, 33-44%) for SCC, and 94% (95% CI, 90-97%) and 28% (95% CI, 21-35%) for BCC. All 16 lesions diagnosed as melanoma in the study were correctly classified by the AIaMD. Discussion The AIaMD has the potential to support the timely diagnosis of malignant and premalignant skin lesions.
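The asymmetric operating points reported here (e.g. 98% sensitivity against 38% specificity for SCC at a pre-determined threshold) follow directly from how the two rates are computed once a continuous malignancy score is thresholded. A hedged sketch with made-up scores, purely to illustrate the trade-off:

```python
def sensitivity_specificity(scores, labels, threshold):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP) after
    thresholding a continuous score (label 1 = malignant, 0 = benign)."""
    tp = fn = tn = fp = 0
    for s, y in zip(scores, labels):
        predicted_positive = s >= threshold
        if y == 1:
            tp += predicted_positive
            fn += not predicted_positive
        else:
            fp += predicted_positive
            tn += not predicted_positive
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative scores: lowering the threshold buys sensitivity at the
# cost of specificity, which is the deliberate choice in cancer triage.
scores = [0.95, 0.80, 0.60, 0.40, 0.30, 0.10]
labels = [1, 1, 0, 1, 0, 0]
print(sensitivity_specificity(scores, labels, 0.5))   # (2/3, 2/3)
print(sensitivity_specificity(scores, labels, 0.25))  # (1.0, 1/3)
```

A triage tool tuned for very high sensitivity, as in this study, accepts many false positives (low specificity) to minimise missed cancers.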
Affiliation(s)
- Caroline Morgan, Stephanie Austin: Dermatology Unit, University Hospitals Dorset, Poole Hospital, Poole, United Kingdom
- Claudia DeGiovanni: Dermatology Unit, University Hospitals Sussex NHS Foundation Trust, Brighton, United Kingdom
- Ioulios Palamaras: Department of Dermatology, Barnet and Chase Farm Hospitals, Royal Free London NHS Foundation Trust, London, United Kingdom

34
Cleland CR, Rwiza J, Evans JR, Gordon I, MacLeod D, Burton MJ, Bascaran C. Artificial intelligence for diabetic retinopathy in low-income and middle-income countries: a scoping review. BMJ Open Diabetes Res Care 2023; 11:e003424. [PMID: 37532460] [PMCID: PMC10401245] [DOI: 10.1136/bmjdrc-2023-003424] [Received: 03/27/2023; Accepted: 07/11/2023; Indexed: 08/04/2023]
Abstract
Diabetic retinopathy (DR) is a leading cause of blindness globally. There is growing evidence to support the use of artificial intelligence (AI) in diabetic eye care, particularly for screening populations at risk of sight loss from DR in low-income and middle-income countries (LMICs) where resources are most stretched. However, implementation into clinical practice remains limited. We conducted a scoping review to identify what AI tools have been used for DR in LMICs and to report their performance and relevant characteristics. 81 articles were included. The reported sensitivities and specificities were generally high providing evidence to support use in clinical practice. However, the majority of studies focused on sensitivity and specificity only and there was limited information on cost, regulatory approvals and whether the use of AI improved health outcomes. Further research that goes beyond reporting sensitivities and specificities is needed prior to wider implementation.
Affiliation(s)
- Charles R Cleland: International Centre for Eye Health, Faculty of Infectious and Tropical Diseases, London School of Hygiene & Tropical Medicine, London, UK; Eye Department, Kilimanjaro Christian Medical Centre, Moshi, United Republic of Tanzania
- Justus Rwiza: Eye Department, Kilimanjaro Christian Medical Centre, Moshi, United Republic of Tanzania
- Jennifer R Evans, Iris Gordon: International Centre for Eye Health, Faculty of Infectious and Tropical Diseases, London School of Hygiene & Tropical Medicine, London, UK
- David MacLeod: Tropical Epidemiology Group, Department of Infectious Disease Epidemiology, London School of Hygiene & Tropical Medicine, London, UK
- Matthew J Burton: International Centre for Eye Health, Faculty of Infectious and Tropical Diseases, London School of Hygiene & Tropical Medicine, London, UK; National Institute for Health Research Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
- Covadonga Bascaran: International Centre for Eye Health, Faculty of Infectious and Tropical Diseases, London School of Hygiene & Tropical Medicine, London, UK

35
Xie H, Li Z, Wu C, Zhao Y, Lin C, Wang Z, Wang C, Gu Q, Wang M, Zheng Q, Jiang J, Chen W. Deep learning for detecting visually impaired cataracts using fundus images. Front Cell Dev Biol 2023; 11:1197239. [PMID: 37576595] [PMCID: PMC10416247] [DOI: 10.3389/fcell.2023.1197239] [Received: 03/30/2023; Accepted: 07/20/2023; Indexed: 08/15/2023]
Abstract
Purpose: To develop a visual function-based deep learning system (DLS) using fundus images to screen for visually impaired cataracts. Materials and methods: A total of 8,395 fundus images (5,245 subjects) with corresponding visual function parameters collected from three clinical centers were used to develop and evaluate a DLS for classifying non-cataracts, mild cataracts, and visually impaired cataracts. Three deep learning algorithms (DenseNet121, Inception V3, and ResNet50) were leveraged to train models to obtain the best one for the system. The performance of the system was evaluated using the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity. Results: The AUC of the best algorithm (DenseNet121) on the internal test dataset and the two external test datasets were 0.998 (95% CI, 0.996-0.999) to 0.999 (95% CI, 0.998-1.000), 0.938 (95% CI, 0.924-0.951) to 0.966 (95% CI, 0.946-0.983), and 0.937 (95% CI, 0.918-0.953) to 0.977 (95% CI, 0.962-0.989), respectively. In the comparison between the system and cataract specialists, better performance was observed in the system for detecting visually impaired cataracts (p < 0.05). Conclusion: Our study shows the potential of a function-focused screening tool to identify visually impaired cataracts from fundus images, enabling timely patient referral to tertiary eye hospitals.
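AUC figures like those above have a rank interpretation: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case. A minimal, illustrative computation in that Mann-Whitney form (toy data; not the study's code):

```python
def auc(scores, labels):
    """AUC as the Mann-Whitney statistic: the fraction of (positive, negative)
    pairs in which the positive outscores the negative; ties count 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative case")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.3, 0.2]
labels = [1, 1, 0, 1, 0]
print(auc(scores, labels))  # 5/6 ≈ 0.833: one (pos, neg) pair is mis-ranked
```

An AUC of 0.998, as on the internal test set, means positives outrank negatives in virtually every such pair.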
Affiliation(s)
- He Xie: National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Zhongwen Li: Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, China
- Chengchao Wu: School of Electronic Engineering, Xi’an University of Posts and Telecommunications, Xi’an, China
- Yitian Zhao: Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, China; Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Chengmin Lin: Department of Ophthalmology, Wenzhou Hospital of Integrated Traditional Chinese and Western Medicine, Wenzhou, China
- Zhouqian Wang, Chenxi Wang, Qinyi Gu, Minye Wang: National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Qinxiang Zheng: National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China; Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, China
- Jiewei Jiang: School of Electronic Engineering, Xi’an University of Posts and Telecommunications, Xi’an, China
- Wei Chen: National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, China; Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, China

36
|
Li Z, Wang L, Wu X, Jiang J, Qiang W, Xie H, Zhou H, Wu S, Shao Y, Chen W. Artificial intelligence in ophthalmology: The path to the real-world clinic. Cell Rep Med 2023:101095. [PMID: 37385253] [PMCID: PMC10394169] [DOI: 10.1016/j.xcrm.2023.101095] [Received: 11/03/2022; Revised: 04/17/2023; Accepted: 06/07/2023; Indexed: 07/01/2023]
Abstract
Artificial intelligence (AI) has great potential to transform healthcare by enhancing the workflow and productivity of clinicians, enabling existing staff to serve more patients, improving patient outcomes, and reducing health disparities. In the field of ophthalmology, AI systems have shown performance comparable with or even better than that of experienced ophthalmologists in tasks such as diabetic retinopathy detection and grading. However, despite these promising results, very few AI systems have been deployed in real-world clinical settings, calling into question the true value of these systems. This review provides an overview of the current main AI applications in ophthalmology, describes the challenges that need to be overcome prior to clinical implementation of AI systems, and discusses the strategies that may pave the way to the clinical translation of these systems.
Affiliation(s)
- Zhongwen Li: Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
- Lei Wang: School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
- Xuefang Wu: Guizhou Provincial People's Hospital, Guizhou University, Guiyang 550002, China
- Jiewei Jiang: School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an 710121, China
- Wei Qiang: Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
- He Xie: School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
- Hongjian Zhou: Department of Computer Science, University of Oxford, Oxford, Oxfordshire OX1 2JD, UK
- Shanjun Wu: Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
- Yi Shao: Department of Ophthalmology, the First Affiliated Hospital of Nanchang University, Nanchang 330006, China
- Wei Chen: Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China

37
|
Chłopowiec AR, Karanowski K, Skrzypczak T, Grzesiuk M, Chłopowiec AB, Tabakov M. Counteracting Data Bias and Class Imbalance-Towards a Useful and Reliable Retinal Disease Recognition System. Diagnostics (Basel) 2023; 13:1904. [PMID: 37296756] [DOI: 10.3390/diagnostics13111904] [Received: 03/27/2023; Revised: 05/22/2023; Accepted: 05/25/2023; Indexed: 06/12/2023]
Abstract
Multiple studies have reported satisfactory performance in the recognition of various ocular diseases. To date, however, no study has described a multiclass model that is medically accurate and trained on a large, diverse dataset, and none has addressed the class imbalance problem in a single large dataset merged from multiple large, diverse eye fundus image collections. To approximate a real-life clinical environment and mitigate the problem of biased medical image data, 22 publicly available datasets were merged. To ensure medical validity, only Diabetic Retinopathy (DR), Age-Related Macular Degeneration (AMD), and Glaucoma (GL) were included. The state-of-the-art models ConvNeXt, RegNet, and ResNet were utilized. The resulting dataset contained 86,415 normal, 3787 GL, 632 AMD, and 34,379 DR fundus images. ConvNeXt-Tiny achieved the best results for most of the examined eye diseases on most metrics. The overall accuracy was 80.46 ± 1.48%. Specific accuracy values were 80.01 ± 1.10% for normal eye fundus, 97.20 ± 0.66% for GL, 98.14 ± 0.31% for AMD, and 80.66 ± 1.27% for DR. A screening model suitable for the most prevalent retinal diseases in ageing societies was designed. Because the model was developed on a diverse, combined large dataset, the obtained results are less biased and more generalizable.
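The class imbalance this abstract highlights (86,415 normal images versus only 632 AMD images) is commonly counteracted with inverse-frequency class weighting in the training loss. The sketch below illustrates the idea using the counts from the abstract; it is a generic technique, not the paper's documented rebalancing strategy.

```python
# Inverse-frequency class weights for the merged fundus dataset described
# above (image counts taken from the abstract). Illustrative sketch only.
counts = {"normal": 86415, "GL": 3787, "AMD": 632, "DR": 34379}

total = sum(counts.values())
n_classes = len(counts)

# weight_c = total / (n_classes * count_c): rare classes get larger weights,
# so each class contributes roughly equally to the training loss.
weights = {c: total / (n_classes * n) for c, n in counts.items()}

# Prints classes from smallest weight (normal) to largest (AMD).
for c, w in sorted(weights.items(), key=lambda kv: kv[1]):
    print(f"{c:>6}: {w:.3f}")
```

Such weights are typically passed to a weighted cross-entropy loss; resampling or focal loss are common alternatives.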
Affiliation(s)
- Adam R Chłopowiec
- Department of Artificial Intelligence, Wroclaw University of Science and Technology, Wybrzeże Wyspianskiego 27, 50-370 Wroclaw, Poland
- Konrad Karanowski
- Department of Artificial Intelligence, Wroclaw University of Science and Technology, Wybrzeże Wyspianskiego 27, 50-370 Wroclaw, Poland
- Tomasz Skrzypczak
- Faculty of Medicine, Wroclaw Medical University, Wybrzeże Ludwika Pasteura 1, 50-367 Wroclaw, Poland
- Mateusz Grzesiuk
- Department of Artificial Intelligence, Wroclaw University of Science and Technology, Wybrzeże Wyspianskiego 27, 50-370 Wroclaw, Poland
- Adrian B Chłopowiec
- Department of Artificial Intelligence, Wroclaw University of Science and Technology, Wybrzeże Wyspianskiego 27, 50-370 Wroclaw, Poland
- Martin Tabakov
- Department of Artificial Intelligence, Wroclaw University of Science and Technology, Wybrzeże Wyspianskiego 27, 50-370 Wroclaw, Poland
38
Chen X, You G, Chen Q, Zhang X, Wang N, He X, Zhu L, Li Z, Liu C, Yao S, Ge J, Gao W, Yu H. Development and evaluation of an artificial intelligence system for children intussusception diagnosis using ultrasound images. iScience 2023; 26:106456. [PMID: 37063466 PMCID: PMC10090215 DOI: 10.1016/j.isci.2023.106456]
Abstract
Accurate identification of intussusception in children is critical for timely non-surgical management. We propose an end-to-end artificial intelligence algorithm, the Children Intussusception Diagnosis Network (CIDNet) system, which utilizes ultrasound images to rapidly diagnose intussusception. 9999 ultrasound images of 4154 pediatric patients were divided into training, validation, test, and independent reader study datasets. The independent reader study cohort was used to compare the diagnostic performance of the CIDNet system with that of six radiologists. Performance was evaluated using, among other metrics, balanced accuracy (BACC) and the area under the receiver operating characteristic curve (AUC). The CIDNet system outperformed other deep learning algorithms in diagnosing intussusception, with a BACC of 0.8464 and an AUC of 0.9716 on the test dataset, and compared favorably with expert radiologists, showing outstanding identification performance and robustness (BACC: 0.9297; AUC: 0.9769). CIDNet is a stable and precise technological tool for identifying intussusception in ultrasound scans of children.
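Balanced accuracy, the headline metric in this abstract, is the mean of per-class recalls, so a classifier cannot score well simply by always predicting the majority class. A minimal sketch of the metric; the confusion matrix below is invented for illustration, not taken from the study:

```python
def balanced_accuracy(confusion):
    """confusion[i][j]: number of samples of true class i predicted as class j.
    Returns the mean of per-class recalls (classes with no samples are skipped)."""
    recalls = []
    for i, row in enumerate(confusion):
        support = sum(row)
        if support:
            recalls.append(row[i] / support)
    return sum(recalls) / len(recalls)

# Hypothetical 2-class screen: 90/100 negatives and 40/50 positives correct.
cm = [[90, 10],
      [10, 40]]
print(round(balanced_accuracy(cm), 4))  # 0.85
```

Plain accuracy on the same matrix would be 130/150 ≈ 0.867, inflated by the larger negative class; balanced accuracy averages the two recalls (0.9 and 0.8) instead.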
Affiliation(s)
- Xiong Chen
- Department of Paediatric Urology, Guangzhou Women and Children’s Medical Center, Guangzhou Medical University, Guangzhou 510623, P. R. China
- Department of Paediatric Surgery, Guangzhou Institute of Paediatrics, Guangzhou Medical University, Guangzhou 510623, P. R. China
- Guochang You
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou 510080, P. R. China
- Qinchang Chen
- Department of Pediatric Cardiology, Guangdong Provincial Key Laboratory of Structural Heart Disease, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, Guangdong Cardiovascular Institute, Guangzhou 510080, P. R. China
- Xiangxiang Zhang
- Department of Ultrasound, Guangzhou Women and Children’s Medical Center, Guangzhou Medical University, Guangzhou 510623, P. R. China
- Na Wang
- Department of Ultrasound, Guangzhou Women and Children’s Medical Center, Guangzhou Medical University, Guangzhou 510623, P. R. China
- Xuehua He
- Department of Ultrasound, Guangzhou Women and Children’s Medical Center, Guangzhou Medical University, Guangzhou 510623, P. R. China
- Liling Zhu
- Department of Ultrasound, Guangzhou Women and Children’s Medical Center, Guangzhou Medical University, Guangzhou 510623, P. R. China
- Zhouzhou Li
- Department of Ultrasound, Guangzhou Women and Children’s Medical Center, Guangzhou Medical University, Guangzhou 510623, P. R. China
- Chen Liu
- Department of Ultrasound, Guangzhou Women and Children’s Medical Center, Guangzhou Medical University, Guangzhou 510623, P. R. China
- Shixiang Yao
- Department of Ultrasound, Guangzhou Women and Children’s Medical Center, Guangzhou Medical University, Guangzhou 510623, P. R. China
- Junshuang Ge
- Clinical Data Center, Guangzhou Women and Children’s Medical Center, Guangzhou Medical University, Guangzhou 510623, P. R. China
- Wenjing Gao
- Clinical Data Center, Guangzhou Women and Children’s Medical Center, Guangzhou Medical University, Guangzhou 510623, P. R. China
- Corresponding author
- Hongkui Yu
- Department of Ultrasound, Guangzhou Women and Children’s Medical Center, Guangzhou Medical University, Guangzhou 510623, P. R. China
- Corresponding author
39
Sun G, Wang X, Xu L, Li C, Wang W, Yi Z, Luo H, Su Y, Zheng J, Li Z, Chen Z, Zheng H, Chen C. Deep Learning for the Detection of Multiple Fundus Diseases Using Ultra-widefield Images. Ophthalmol Ther 2023; 12:895-907. [PMID: 36565376 PMCID: PMC10011259 DOI: 10.1007/s40123-022-00627-3]
Abstract
INTRODUCTION To design and evaluate a deep learning model based on ultra-widefield images (UWFIs) that can detect several common fundus diseases. METHODS Based on 4574 UWFIs, a deep learning model was trained and validated to identify normal fundus and eight common fundus diseases, namely referable diabetic retinopathy, retinal vein occlusion, pathologic myopia, retinal detachment, retinitis pigmentosa, age-related macular degeneration, vitreous opacity, and optic neuropathy. The model was tested on three test sets containing 465, 979, and 525 images. The performance of three deep learning networks, EfficientNet-B7, DenseNet, and ResNet-101, was evaluated on the internal test set. Additionally, we compared the performance of the deep learning model with that of doctors in a tertiary referral hospital. RESULTS EfficientNet-B7 achieved the best performance of the three deep learning models. The areas under the receiver operating characteristic curves of the EfficientNet-B7 model ranged from 0.9708 (0.8772, 0.9849) to 1.0000 (1.0000, 1.0000) on the internal test set, from 0.9683 (0.8829, 0.9770) to 1.0000 (0.9975, 1.0000) on external test set A, and from 0.8919 (0.7150, 0.9055) to 0.9977 (0.9165, 1.0000) on external test set B. On a dataset of 100 images, the total accuracy of the deep learning model was 93.00%, while the average accuracies of three ophthalmologists with 2 years of experience and three ophthalmologists with more than 5 years of experience in fundus imaging were 88.00% and 94.00%, respectively. CONCLUSION Our UWFI multi-disease classification model achieved high performance on all three test sets with a small training sample size and fast model inference. Its performance was comparable to that of physicians with 2-5 years of experience in fundus diseases at a tertiary referral hospital, and the model is expected to be used as an effective aid for fundus disease screening.
Affiliation(s)
- Gongpeng Sun
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuchang District, Wuhan, 430060, Hubei, China
- Xiaoling Wang
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuchang District, Wuhan, 430060, Hubei, China
- Lizhang Xu
- Wuhan Aiyanbang Technology Co., Ltd, Wuhan, 430073, China
- Chang Li
- Tianjin Key Laboratory of Retinal Functions and Diseases, Tianjin International Joint Research and Development Centre of Ophthalmology and Vision Science, Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, Tianjin, 300384, China
- Wenyu Wang
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuchang District, Wuhan, 430060, Hubei, China
- Zuohuizi Yi
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuchang District, Wuhan, 430060, Hubei, China
- Huijuan Luo
- The People's Hospital of Yidu, Yidu, 443300, China
- Yu Su
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuchang District, Wuhan, 430060, Hubei, China
- Jian Zheng
- School of Electronic Information and Electric Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Zhiqing Li
- Tianjin Key Laboratory of Retinal Functions and Diseases, Tianjin International Joint Research and Development Centre of Ophthalmology and Vision Science, Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, Tianjin, 300384, China
- Zhen Chen
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuchang District, Wuhan, 430060, Hubei, China.
- Hongmei Zheng
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuchang District, Wuhan, 430060, Hubei, China.
- Changzheng Chen
- Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuchang District, Wuhan, 430060, Hubei, China.
40
Liu H, Li R, Zhang Y, Zhang K, Yusufu M, Liu Y, Mou D, Chen X, Tian J, Li H, Fan S, Tang J, Wang N. Economic evaluation of combined population-based screening for multiple blindness-causing eye diseases in China: a cost-effectiveness analysis. Lancet Glob Health 2023; 11:e456-e465. [PMID: 36702141 DOI: 10.1016/s2214-109x(22)00554-x]
Abstract
BACKGROUND More than 90% of vision impairment is avoidable. However, in China, a routine screening programme is currently unavailable in primary health care. Given the dearth of economic evidence on screening programmes for multiple blindness-causing eye diseases, delivery options, and screening frequencies, we aimed to evaluate the costs and benefits of a population-based screening programme for multiple eye diseases in China. METHODS We developed a decision-analytic Markov model for a cohort of individuals aged 50 years and older with a total of 30 1-year cycles. We calculated the cost-effectiveness and cost-utility of screening programmes for multiple major blindness-causing eye diseases in China, including age-related macular degeneration, glaucoma, diabetic retinopathy, cataracts, and pathological myopia, from a societal perspective (including direct and indirect costs). We analysed rural and urban settings separately by screening delivery option (non-telemedicine [ie, face-to-face] screening, artificial intelligence [AI] telemedicine screening, and non-AI telemedicine screening) and frequency. We calculated incremental cost-utility ratios (ICURs) using quality-adjusted life-years and incremental cost-effectiveness ratios (ICERs) in terms of the cost per blindness year avoided. One-way deterministic and simulated probabilistic sensitivity analyses were used to assess the robustness of the main outcomes. FINDINGS Compared with no screening, non-telemedicine combined screening of multiple eye diseases satisfied the criterion for a highly cost-effective health intervention, with an ICUR of US$2494 (95% CI 1130 to 2716) and an ICER of $12 487 (8773 to 18 791) in rural settings. In urban areas, the ICUR was $624 (395 to 907), and the ICER was $7251 (4238 to 13 501). Non-AI telemedicine screening could result in lower costs and greater gains in health benefits (ICUR $2326 [1064 to 2538] and ICER $11 766 [8200 to 18 000] in rural settings; ICUR $581 [368 to 864] and ICER $6920 [3926 to 13 231] in urban settings). AI telemedicine screening dominated no screening in rural settings, and in urban settings the ICUR was $244 (-315 to 1073) and the ICER was $2567 (-4111 to 15 389). Sensitivity analyses showed all results to be robust. By further comparison, annual AI telemedicine screening was the most cost-effective strategy in both rural and urban areas. INTERPRETATION Combined screening of multiple eye diseases is cost-effective in both rural and urban China. AI coupled with teleophthalmology presents an opportunity to promote equity in eye health. FUNDING National Natural Science Foundation of China.
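The ICER and ICUR reported in such analyses share one formula: incremental cost divided by incremental effect, where the effect is blindness years avoided (ICER) or QALYs gained (ICUR). A toy calculation with invented numbers, not values from the study:

```python
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost per unit of incremental effect.
    Assumes the new strategy changes both cost and effect (nonzero denominator)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical per-person figures: screening costs more than no screening
# but yields more QALYs and more blindness years avoided.
cost_screen, cost_none = 1200.0, 800.0
qaly_screen, qaly_none = 6.80, 6.60
blind_yrs_avoided_screen, blind_yrs_avoided_none = 0.10, 0.06

icur_value = icer(cost_screen, cost_none, qaly_screen, qaly_none)           # $ per QALY gained
icer_value = icer(cost_screen, cost_none,
                  blind_yrs_avoided_screen, blind_yrs_avoided_none)         # $ per blindness year avoided
print(round(icur_value), round(icer_value))  # 2000 10000
```

A strategy "dominates" another, as AI screening does in the rural setting above, when it is both cheaper and more effective, so no ratio is needed.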
Affiliation(s)
- Hanruo Liu
- Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China; School of Medical Technology, Beijing Institute of Technology, Beijing, China; National Institutes of Health Data Science at Peking University, Beijing, China.
- Ruyue Li
- Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Yue Zhang
- Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Kaiwen Zhang
- Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Mayinuer Yusufu
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Department of Surgery (Ophthalmology), The University of Melbourne, Melbourne, VIC, Australia
- Yanting Liu
- Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Dapeng Mou
- Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Xiaoniao Chen
- Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Jiaxin Tian
- Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Huiqi Li
- School of Medical Technology, Beijing Institute of Technology, Beijing, China
- Sujie Fan
- Handan City Eye Hospital, Handan, China
- Jianjun Tang
- School of Agricultural Economics and Rural Development, Renmin University of China, Beijing, China.
- Ningli Wang
- Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China; School of Medical Technology, Beijing Institute of Technology, Beijing, China; National Institutes of Health Data Science at Peking University, Beijing, China.
41
Lin S, Ma Y, Xu Y, Lu L, He J, Zhu J, Peng Y, Yu T, Congdon N, Zou H. Artificial Intelligence in Community-Based Diabetic Retinopathy Telemedicine Screening in Urban China: Cost-effectiveness and Cost-Utility Analyses With Real-world Data. JMIR Public Health Surveill 2023; 9:e41624. [PMID: 36821353 PMCID: PMC9999255 DOI: 10.2196/41624]
Abstract
BACKGROUND Community-based telemedicine screening for diabetic retinopathy (DR) has been highly recommended worldwide. However, evidence from low- and middle-income countries (LMICs) on the choice between artificial intelligence (AI)-based and manual grading-based telemedicine screening is inadequate for policy making. OBJECTIVE The aim of this study was to test whether the AI model is more worthwhile than manual grading in community-based telemedicine screening for DR in the context of labor costs in urban China. METHODS We conducted cost-effectiveness and cost-utility analyses using decision-analytic Markov models with 30 one-year cycles from a societal perspective to compare the cost, effectiveness, and utility of 2 scenarios in telemedicine screening for DR: manual grading and an AI model. Sensitivity analyses were performed. Real-world data were obtained mainly from the Shanghai Digital Eye Disease Screening Program. The main outcomes were the incremental cost-effectiveness ratio (ICER) and the incremental cost-utility ratio (ICUR). The ICUR thresholds were set as 1 and 3 times the local gross domestic product per capita. RESULTS The total expected costs for a 65-year-old resident were US $3182.50 and US $3265.40, the total expected years without blindness were 9.80 years and 9.83 years, and the utilities were 6.748 quality-adjusted life years (QALYs) and 6.753 QALYs in the AI model and manual grading, respectively. The ICER for the AI-assisted model was US $2553.39 per year without blindness, and the ICUR was US $15,216.96 per QALY, indicating that the AI-assisted model was not cost-effective. The sensitivity analysis suggested that if compliance with referrals increased by 7.5% after the adoption of AI, if on-site screening costs for manual grading increased by 50%, or if on-site screening costs for the AI model decreased by 50%, then the AI model could become the dominant strategy. CONCLUSIONS Our study may provide a reference for policy making in planning community-based telemedicine screening for DR in LMICs. Our findings indicate that unless the referral compliance of patients with suspected DR increases, the adoption of the AI model may not improve the value of telemedicine screening compared with manual grading in LMICs. The main reason is that in the context of the low labor costs in LMICs, the direct health care costs saved by replacing manual grading with AI are small, and the screening effectiveness (QALYs and years without blindness) decreases. Our study suggests that the magnitude of the value generated by this technology replacement depends primarily on 2 aspects: the extent to which AI reduces direct health care costs, and the change in health care service utilization caused by AI. Our research can therefore also provide analytical ideas for other health care sectors deciding whether to use AI.
Affiliation(s)
- Senlin Lin
- Department of Eye Disease Prevention and Control, Shanghai Eye Disease Prevention and Treatment Center/Shanghai Eye Hospital, Shanghai, China; Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, China
- Yingyan Ma
- Department of Eye Disease Prevention and Control, Shanghai Eye Disease Prevention and Treatment Center/Shanghai Eye Hospital, Shanghai, China; Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, China
- Yi Xu
- Department of Eye Disease Prevention and Control, Shanghai Eye Disease Prevention and Treatment Center/Shanghai Eye Hospital, Shanghai, China; Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, China
- Lina Lu
- Department of Eye Disease Prevention and Control, Shanghai Eye Disease Prevention and Treatment Center/Shanghai Eye Hospital, Shanghai, China; Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, China
- Jiangnan He
- Department of Eye Disease Prevention and Control, Shanghai Eye Disease Prevention and Treatment Center/Shanghai Eye Hospital, Shanghai, China; Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, China
- Jianfeng Zhu
- Department of Eye Disease Prevention and Control, Shanghai Eye Disease Prevention and Treatment Center/Shanghai Eye Hospital, Shanghai, China; Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, China
- Yajun Peng
- Department of Eye Disease Prevention and Control, Shanghai Eye Disease Prevention and Treatment Center/Shanghai Eye Hospital, Shanghai, China; Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, China
- Tao Yu
- Department of Eye Disease Prevention and Control, Shanghai Eye Disease Prevention and Treatment Center/Shanghai Eye Hospital, Shanghai, China; Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, China
- Nathan Congdon
- Centre for Public Health, Queen's University Belfast, Belfast, United Kingdom; Orbis International, New York, NY, United States; Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Haidong Zou
- Department of Eye Disease Prevention and Control, Shanghai Eye Disease Prevention and Treatment Center/Shanghai Eye Hospital, Shanghai, China; Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, China
42
Liu L, Wu X, Lin D, Zhao L, Li M, Yun D, Lin Z, Pang J, Li L, Wu Y, Lai W, Xiao W, Shang Y, Feng W, Tan X, Li Q, Liu S, Lin X, Sun J, Zhao Y, Yang X, Ye Q, Zhong Y, Huang X, He Y, Fu Z, Xiang Y, Zhang L, Zhao M, Qu J, Xu F, Lu P, Li J, Xu F, Wei W, Dong L, Dai G, He X, Yan W, Zhu Q, Lu L, Zhang J, Zhou W, Meng X, Li S, Shen M, Jiang Q, Chen N, Zhou X, Li M, Wang Y, Zou H, Zhong H, Yang W, Shou W, Zhong X, Yang Z, Ding L, Hu Y, Tan G, He W, Zhao X, Chen Y, Liu Y, Lin H. DeepFundus: A flow-cytometry-like image quality classifier for boosting the whole life cycle of medical artificial intelligence. Cell Rep Med 2023; 4:100912. [PMID: 36669488 PMCID: PMC9975093 DOI: 10.1016/j.xcrm.2022.100912]
Abstract
Medical artificial intelligence (AI) has been moving from the research phase to clinical implementation. However, most AI-based models are built mainly on high-quality images preprocessed in the laboratory, which are not representative of real-world settings. This dataset bias has proved to be a major driver of AI system dysfunction. Inspired by the design of flow cytometry, DeepFundus, a deep-learning-based fundus image classifier, was developed to provide automated, multidimensional image sorting that addresses this data quality gap. DeepFundus achieves areas under the receiver operating characteristic curve (AUCs) above 0.9 for image classification of overall quality, clinical quality factors, and structural quality on both the internal test and national validation datasets. Additionally, DeepFundus can be integrated into both the model development and the clinical application of AI diagnostics to significantly enhance model performance for detecting multiple retinopathies. DeepFundus can thus be used to construct a data-driven paradigm for improving the entire life cycle of medical AI practice.
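The AUC figures reported for classifiers like DeepFundus have a rank-statistic interpretation: the probability that a randomly chosen positive example scores higher than a randomly chosen negative one, with ties counting half. A small sketch with invented scores, not data from the paper:

```python
def auc(pos_scores, neg_scores):
    """Probability a random positive outscores a random negative (ties = 0.5).
    O(n*m) pairwise comparison; fine for illustration, not for large datasets."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

good = [0.9, 0.8, 0.75, 0.6]   # hypothetical scores for gradable images
poor = [0.4, 0.65, 0.3]        # hypothetical scores for poor-quality images
print(round(auc(good, poor), 4))  # 0.9167
```

An AUC of 1.0 means the score perfectly separates the two classes; 0.5 is chance level, so the reported values above 0.9 indicate strong separation.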
Affiliation(s)
- Lixue Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Xiaohang Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China.
- Duoru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Lanqin Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Mingyuan Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Dongyuan Yun
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Zhenzhe Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Jianyu Pang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Longhui Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Yuxuan Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Weiyi Lai
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Wei Xiao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Yuanjun Shang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Weibo Feng
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Xiao Tan
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China
- Qiang Li
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
- Shenzhen Liu
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
- Xinxin Lin
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
- Jiaxin Sun
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
- Yiqi Zhao
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
- Ximei Yang
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
- Qinying Ye
- Department of Ophthalmology, Second Affiliated Hospital, Guangdong Medical University, Zhanjiang, Guangdong, China
- Yuesi Zhong
- Department of Ophthalmology, Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
- Xi Huang
- Department of Ophthalmology, Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
- Yuan He
- Department of Ophthalmology, The Second Affiliated Hospital of Xi'an Medical University, Xi'an, Shaanxi, China
- Ziwei Fu
- Department of Ophthalmology, The Second Affiliated Hospital of Xi'an Medical University, Xi'an, Shaanxi, China
- Yi Xiang
- Department of Ophthalmology, Central Hospital of Wuhan, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Li Zhang
- Department of Ophthalmology, Central Hospital of Wuhan, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Mingwei Zhao
- Department of Ophthalmology, People's Hospital of Peking University, Beijing, China
- Jinfeng Qu
- Department of Ophthalmology, People's Hospital of Peking University, Beijing, China
- Fan Xu
- Department of Ophthalmology, People's Hospital of Guangxi Zhuang Autonomous Region, Nanning, Guangxi, China
- Peng Lu
- Department of Ophthalmology, People's Hospital of Guangxi Zhuang Autonomous Region, Nanning, Guangxi, China
- Jianqiao Li
- Department of Ophthalmology, Qilu Hospital, Shandong University, Jinan, Shandong, China
- Fabao Xu
- Department of Ophthalmology, Qilu Hospital, Shandong University, Jinan, Shandong, China
- Wenbin Wei
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Li Dong
- Beijing Tongren Eye Center, Beijing Key Laboratory of Intraocular Tumor Diagnosis and Treatment, Beijing Ophthalmology & Visual Sciences Key Lab, Medical Artificial Intelligence Research and Verification Key Laboratory of the Ministry of Industry and Information Technology, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Xingru He
- School of Public Health, He University, Shenyang, Liaoning, China
- Wentao Yan
- The Eye Hospital of Wenzhou Medical University, Wenzhou, Zhejiang, China
- Qiaolin Zhu
- The Eye Hospital of Wenzhou Medical University, Wenzhou, Zhejiang, China
- Linna Lu
- Department of Ophthalmology, Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Jiaying Zhang
- Department of Ophthalmology, Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Wei Zhou
- Department of Ophthalmology, Tianjin Medical University General Hospital, Tianjin, China
- Xiangda Meng
- Department of Ophthalmology, Tianjin Medical University General Hospital, Tianjin, China
- Shiying Li
- Department of Ophthalmology, Xiang'an Hospital of Xiamen University, Xiamen, Fujian, China
- Mei Shen
- Department of Ophthalmology, Xiang'an Hospital of Xiamen University, Xiamen, Fujian, China
- Qin Jiang
- The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, Jiangsu, China
- Nan Chen
- The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, Jiangsu, China
- Xingtao Zhou
- Department of Ophthalmology, Eye and ENT Hospital, Fudan University, Shanghai, China
- Meiyan Li
- Department of Ophthalmology, Eye and ENT Hospital, Fudan University, Shanghai, China
- Yan Wang
- Tianjin Eye Hospital, Tianjin Key Lab of Ophthalmology and Visual Science, Tianjin Eye Institute, Nankai University, Tianjin, China
- Haohan Zou
- Tianjin Eye Hospital, Tianjin Key Lab of Ophthalmology and Visual Science, Tianjin Eye Institute, Nankai University, Tianjin, China
- Hua Zhong
- Department of Ophthalmology, The First Affiliated Hospital of Kunming Medical University, Kunming, Yunnan, China
- Wenyan Yang
- Department of Ophthalmology, The First Affiliated Hospital of Kunming Medical University, Kunming, Yunnan, China
- Wulin Shou
- Jiaxing Chaoju Eye Hospital, Jiaxing, Zhejiang, China
| | - Xingwu Zhong
- Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou, Hainan, China
| | - Zhenduo Yang
- Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou, Hainan, China
| | - Lin Ding
- Department of Ophthalmology, People's Hospital of Xinjiang Uygur Autonomous Region, Urumqi, Xinjiang, China
| | - Yongcheng Hu
- Bayannur Xudong Eye Hospital, Bayannur, Inner Mongolia, China
| | - Gang Tan
- Department of Ophthalmology, The First Affiliated Hospital, Hengyang Medical School, University of South China, Hengyang, Hunan, China
| | - Wanji He
- Beijing Airdoc Technology Co., Ltd., Beijing, China
| | - Xin Zhao
- Beijing Airdoc Technology Co., Ltd., Beijing, China
| | - Yuzhong Chen
- Beijing Airdoc Technology Co., Ltd., Beijing, China
| | - Yizhi Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China.
| | - Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Vision Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, Guangdong, China; Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou, Hainan, China; Center for Precision Medicine and Department of Genetics and Biomedical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China.
| |
Collapse
|
43
|
Towards precision medicine based on a continuous deep learning optimization and ensemble approach. NPJ Digit Med 2023; 6:18. [PMID: 36737644 PMCID: PMC9898519 DOI: 10.1038/s41746-023-00759-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2022] [Accepted: 01/17/2023] [Indexed: 02/05/2023] Open
Abstract
We developed a continuous learning system (CLS) based on a deep learning optimization and ensemble approach, and conducted a simulated prospective study on retrospective data using ultrasound images of breast masses for precise diagnosis. We extracted 629 breast masses and 2235 images from 561 cases at our institution to train the model in six stages to diagnose benign and malignant tumors, pathological types, and diseases. We randomly selected 180 out of 3098 cases from two external institutions. The CLS was tested with seven independent datasets and compared with 21 physicians; by training stage six, the system's diagnostic ability exceeded that of 20 of the 21 physicians. The optimal integrated method we developed is expected to diagnose breast masses accurately and can also be extended to the intelligent diagnosis of masses in other organs. Overall, our findings have potential value in further promoting the application of AI diagnosis in precision medicine.
Collapse
|
44
|
An AI-Aided Diagnostic Framework for Hematologic Neoplasms Based on Morphologic Features and Medical Expertise. J Transl Med 2023; 103:100055. [PMID: 36870286 DOI: 10.1016/j.labinv.2022.100055] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2022] [Revised: 12/21/2022] [Accepted: 12/27/2022] [Indexed: 01/11/2023] Open
Abstract
A morphologic examination is essential for the diagnosis of hematological diseases, but its conventional manual operation is time-consuming and laborious. Herein, we attempt to establish an artificial intelligence (AI)-aided diagnostic framework integrating medical expertise. This framework acts as a virtual hematological morphologist (VHM) for diagnosing hematological neoplasms. Two datasets were established: an image dataset was used to train a Faster Region-based Convolutional Neural Network to develop an image-based morphologic feature extraction model, and a case dataset containing retrospective morphologic diagnostic data was used to train a support vector machine algorithm to develop a feature-based case identification model based on diagnostic criteria. Integrating these two models established a whole-process AI-aided diagnostic framework, namely the VHM, and a two-stage strategy was applied for case diagnosis. The recall and precision of the VHM in bone marrow cell classification were 94.65% and 93.95%, respectively. The balanced accuracy, sensitivity, and specificity of the VHM were 97.16%, 99.09%, and 92%, respectively, in the differential diagnosis of normal and abnormal cases, and 99.23%, 97.96%, and 100%, respectively, in the precise diagnosis of chronic myelogenous leukemia in chronic phase. This work represents the first attempt, to our knowledge, to extract multimodal morphologic features and to integrate a feature-based case diagnosis model into a comprehensive AI-aided morphologic diagnostic framework. The performance of our knowledge-based framework was superior to that of the widely used end-to-end AI-based diagnostic framework in both testing accuracy (96.88% vs 68.75%) and generalization ability (97.11% vs 68.75%) in differentiating normal and abnormal cases. The remarkable advantage of the VHM is that it follows the logic of clinical diagnostic procedures, making it a reliable and interpretable hematological diagnostic tool.
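The case-level metrics quoted above can be illustrated with a small sketch. The abstract does not state how balanced accuracy is defined; the common macro-average of sensitivity and specificity is assumed here, and the confusion-matrix counts are invented for illustration, not the study's data.

```python
# Hedged sketch of case-level screening metrics: sensitivity, specificity,
# and balanced accuracy (assumed here to be the mean of the two).
# The confusion-matrix counts are invented, not the study's data.

def case_metrics(tp: int, fn: int, tn: int, fp: int) -> dict:
    """Compute sensitivity, specificity, and balanced accuracy."""
    sensitivity = tp / (tp + fn)   # abnormal cases correctly flagged
    specificity = tn / (tn + fp)   # normal cases correctly cleared
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "balanced_accuracy": (sensitivity + specificity) / 2,
    }

print(case_metrics(tp=95, fn=5, tn=90, fp=10))
```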
Collapse
|
45
|
Choo H, Yoo SY, Moon S, Park M, Lee J, Sung KW, Cha WC, Shin SY, Son MH. Deep-learning-based personalized prediction of absolute neutrophil count recovery and comparison with clinicians for validation. J Biomed Inform 2023; 137:104268. [PMID: 36513332 DOI: 10.1016/j.jbi.2022.104268] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2022] [Revised: 11/27/2022] [Accepted: 12/07/2022] [Indexed: 12/14/2022]
Abstract
Neutropenia and its complications are major adverse effects of cytotoxic chemotherapy. The time to recovery from neutropenia varies from patient to patient and cannot be easily predicted, even by experts. We therefore trained a deep learning model using data from 525 pediatric patients with solid tumors to predict the day on which patients recover from severe neutropenia after high-dose chemotherapy. We validated the model with data from 99 patients and compared its performance with that of clinicians. The accuracy of the model at predicting the recovery day, within a 1-day error, was 76%; its performance was better than those of the specialist group (58.59%) and the resident group (32.33%). In addition, 80% of clinicians changed their initial predictions at least once after the model's prediction was conveyed to them; in total, 86 of these prediction changes (90.53%) improved the recovery day estimate.
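The headline metric above, accuracy at predicting the recovery day within a 1-day error, can be read as the fraction of predictions falling within ±1 day of the true recovery day. A minimal sketch, with made-up day values:

```python
# Fraction of predictions falling within ±tolerance_days of the actual
# recovery day. The day values below are made up for illustration.

def accuracy_within(predicted, actual, tolerance_days=1):
    hits = sum(abs(p - a) <= tolerance_days for p, a in zip(predicted, actual))
    return hits / len(actual)

pred = [10, 12, 11, 14, 9]   # hypothetical predicted recovery days
true = [11, 12, 13, 14, 12]  # hypothetical actual recovery days
print(accuracy_within(pred, true))  # 3 of 5 within ±1 day -> 0.6
```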
Collapse
Affiliation(s)
- Hyunwoo Choo
- Department of Digital Health, SAIHST, Sungkyunkwan University, Seoul, Republic of Korea; Department of Intelligent Precision Healthcare Convergence, Sungkyunkwan University, Seoul, Republic of Korea
| | - Su Young Yoo
- Department of Digital Health, SAIHST, Sungkyunkwan University, Seoul, Republic of Korea
| | - Suhyeon Moon
- Research Institute for Future Medicine, Samsung Medical Center, Seoul, Republic of Korea
| | - Minsu Park
- Department of Information and Statistics, Chungnam National University, 99 Daehak-ro, Yuseong-gu, Daejeon, Republic of Korea
| | - Jiwon Lee
- Department of Pediatrics, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
| | - Ki Woong Sung
- Department of Pediatrics, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
| | - Won Chul Cha
- Department of Emergency Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
| | - Soo-Yong Shin
- Department of Digital Health, SAIHST, Sungkyunkwan University, Seoul, Republic of Korea; Department of Intelligent Precision Healthcare Convergence, Sungkyunkwan University, Seoul, Republic of Korea; Research Institute for Future Medicine, Samsung Medical Center, Seoul, Republic of Korea.
| | - Meong Hi Son
- Department of Pediatrics, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea.
| |
Collapse
|
46
|
Zou H, Shi S, Yang X, Ma J, Fan Q, Chen X, Wang Y, Zhang M, Song J, Jiang Y, Li L, He X, Jhanji V, Wang S, Song M, Wang Y. Identification of ocular refraction based on deep learning algorithm as a novel retinoscopy method. Biomed Eng Online 2022; 21:87. [PMID: 36528597 PMCID: PMC9758840 DOI: 10.1186/s12938-022-01057-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2022] [Accepted: 12/05/2022] [Indexed: 12/23/2022] Open
Abstract
BACKGROUND The evaluation of refraction is indispensable in ophthalmic clinics, generally requiring a refractor or retinoscopy under cycloplegia. Retinal fundus photographs (RFPs) supply a wealth of information related to the human eye and might provide a more convenient and objective approach. Here, we aimed to develop and validate a fusion model-based deep learning system (FMDLS) to identify ocular refraction via RFPs and compare it with cycloplegic refraction. In this population-based comparative study, we retrospectively collected 11,973 RFPs from May 1, 2020 to November 20, 2021. The performance of the regression models for sphere and cylinder was evaluated using mean absolute error (MAE). The accuracy, sensitivity, specificity, area under the receiver operating characteristic curve, and F1-score were used to evaluate the classification model for the cylinder axis. RESULTS Overall, 7873 RFPs were retained for analysis. For sphere and cylinder, the MAE values between the FMDLS and cycloplegic refraction were 0.50 D and 0.31 D, representing improvements of 29.41% and 26.67%, respectively, over the single models. The correlation coefficients (r) were 0.949 and 0.807, respectively. For the axis analysis, the accuracy, specificity, sensitivity, and area under the curve of the classification model were 0.89, 0.941, 0.882, and 0.814, respectively, and the F1-score was 0.88. CONCLUSIONS The FMDLS successfully identified ocular refraction in sphere, cylinder, and axis, and showed good agreement with cycloplegic refraction. RFPs can provide not only comprehensive fundus information but also the refractive state of the eye, highlighting their potential clinical value.
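The agreement statistics above (MAE in diopters and Pearson's r against cycloplegic refraction) amount to the following computation; the diopter values in this sketch are invented for illustration, not study data.

```python
import math

# MAE and Pearson correlation between model-predicted and cycloplegic
# refraction values (in diopters). All values below are illustrative.

def mae(pred, ref):
    return sum(abs(p - r) for p, r in zip(pred, ref)) / len(ref)

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

pred_sphere = [-1.25, -2.50, 0.75, -4.00, -0.50]   # hypothetical model output
cyclo_sphere = [-1.00, -2.75, 0.50, -4.25, -0.75]  # hypothetical ground truth
print(mae(pred_sphere, cyclo_sphere))              # 0.25 (diopters)
print(round(pearson_r(pred_sphere, cyclo_sphere), 3))
```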
Collapse
Affiliation(s)
- Haohan Zou
- Clinical College of Ophthalmology, Tianjin Medical University, Tianjin, China; Tianjin Key Lab of Ophthalmology and Visual Science, Tianjin Eye Institute, Tianjin Eye Hospital, Nankai University Affiliated Eye Hospital, 4 Gansu Road, He-Ping District, Tianjin 300020, China
| | - Shenda Shi
- School of Computer Science, School of National Pilot Software Engineering, Beijing University of Posts and Telecommunications, 10 Xitucheng Road, Hai-Dian District, Beijing 100876, China; HuaHui Jian AI Tech Ltd., Tianjin, China
| | - Xiaoyan Yang
- Tianjin Key Lab of Ophthalmology and Visual Science, Tianjin Eye Institute, Tianjin Eye Hospital, Nankai University Affiliated Eye Hospital, 4 Gansu Road, He-Ping District, Tianjin 300020, China; Tianjin Eye Hospital Optometric Center, Tianjin, China
| | - Jiaonan Ma
- Tianjin Key Lab of Ophthalmology and Visual Science, Tianjin Eye Institute, Tianjin Eye Hospital, Nankai University Affiliated Eye Hospital, 4 Gansu Road, He-Ping District, Tianjin 300020, China
| | - Qian Fan
- Tianjin Key Lab of Ophthalmology and Visual Science, Tianjin Eye Institute, Tianjin Eye Hospital, Nankai University Affiliated Eye Hospital, 4 Gansu Road, He-Ping District, Tianjin 300020, China
| | - Xuan Chen
- Clinical College of Ophthalmology, Tianjin Medical University, Tianjin, China; Tianjin Key Lab of Ophthalmology and Visual Science, Tianjin Eye Institute, Tianjin Eye Hospital, Nankai University Affiliated Eye Hospital, 4 Gansu Road, He-Ping District, Tianjin 300020, China
| | - Yibing Wang
- Clinical College of Ophthalmology, Tianjin Medical University, Tianjin, China; Tianjin Key Lab of Ophthalmology and Visual Science, Tianjin Eye Institute, Tianjin Eye Hospital, Nankai University Affiliated Eye Hospital, 4 Gansu Road, He-Ping District, Tianjin 300020, China
| | - Mingdong Zhang
- Clinical College of Ophthalmology, Tianjin Medical University, Tianjin, China; Tianjin Key Lab of Ophthalmology and Visual Science, Tianjin Eye Institute, Tianjin Eye Hospital, Nankai University Affiliated Eye Hospital, 4 Gansu Road, He-Ping District, Tianjin 300020, China
| | - Jiaxin Song
- Clinical College of Ophthalmology, Tianjin Medical University, Tianjin, China; Tianjin Key Lab of Ophthalmology and Visual Science, Tianjin Eye Institute, Tianjin Eye Hospital, Nankai University Affiliated Eye Hospital, 4 Gansu Road, He-Ping District, Tianjin 300020, China
| | - Yanglin Jiang
- Tianjin Key Lab of Ophthalmology and Visual Science, Tianjin Eye Institute, Tianjin Eye Hospital, Nankai University Affiliated Eye Hospital, 4 Gansu Road, He-Ping District, Tianjin 300020, China; Tianjin Eye Hospital Optometric Center, Tianjin, China
| | - Lihua Li
- Tianjin Eye Hospital Optometric Center, Tianjin, China
| | - Xin He
- HuaHui Jian AI Tech Ltd., Tianjin, China
| | - Vishal Jhanji
- UPMC Eye Center, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
| | - Shengjin Wang
- HuaHui Jian AI Tech Ltd., Tianjin, China; Department of Electronic Engineering, Tsinghua University, Beijing, China
| | - Meina Song
- School of Computer Science, School of National Pilot Software Engineering, Beijing University of Posts and Telecommunications, 10 Xitucheng Road, Hai-Dian District, Beijing 100876, China; HuaHui Jian AI Tech Ltd., Tianjin, China
| | - Yan Wang
- Clinical College of Ophthalmology, Tianjin Medical University, Tianjin, China; Tianjin Key Lab of Ophthalmology and Visual Science, Tianjin Eye Institute, Tianjin Eye Hospital, Nankai University Affiliated Eye Hospital, 4 Gansu Road, He-Ping District, Tianjin 300020, China; Nankai University Eye Institute, Nankai University, Tianjin, China
| |
Collapse
|
47
|
Lin S, Li L, Zou H, Xu Y, Lu L. Medical Staff and Resident Preferences for Using Deep Learning in Eye Disease Screening: Discrete Choice Experiment. J Med Internet Res 2022; 24:e40249. [PMID: 36125854 PMCID: PMC9533207 DOI: 10.2196/40249] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2022] [Revised: 08/08/2022] [Accepted: 09/02/2022] [Indexed: 11/17/2022] Open
Abstract
Background Deep learning–assisted eye disease diagnosis technology is increasingly applied in eye disease screening. However, no research has identified the prerequisites under which health care service providers and residents are willing to use it. Objective The aim of this paper is to reveal the preferences of health care service providers and residents for using artificial intelligence (AI) in community-based eye disease screening, particularly their preference for accuracy. Methods Discrete choice experiments for health care providers and residents were conducted in Shanghai, China. In total, 34 medical institutions with adequate AI-assisted screening experience participated. A total of 39 medical staff and 318 residents answered the questionnaire and made trade-offs among alternative screening strategies with different attributes, including missed diagnosis rate, overdiagnosis rate, screening result feedback efficiency, level of ophthalmologist involvement, organizational form, cost, and screening result feedback form. Conditional logit models with the stepwise selection method were used to estimate the preferences. Results Medical staff preferred high accuracy: the specificity of deep learning models should exceed 90% (odds ratio [OR]=0.61 for 10% overdiagnosis; P<.001), which is much higher than the Food and Drug Administration standard. Accuracy, however, was not the residents' priority; rather, they preferred to have doctors involved in the screening process. In addition, when compared with a fully manual diagnosis, AI technology was more favored by the medical staff (OR=2.08 for the semiautomated AI model and OR=2.39 for the fully automated AI model; P<.001), while the residents disfavored AI technology without doctors' supervision (OR=0.24; P<.001). Conclusions A deep learning model under doctors' supervision is strongly recommended, and the specificity of the model should exceed 90%. In addition, digital transformation should help medical staff move away from heavy, repetitive work and spend more time communicating with residents.
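In a conditional logit model, each estimated attribute coefficient β maps to an odds ratio OR = exp(β): OR < 1 (e.g., 0.61 for a 10% overdiagnosis rate) marks an attribute level that makes a screening strategy less likely to be chosen, and OR > 1 one that makes it more likely. The coefficients in this sketch are hypothetical values back-solved from the reported odds ratios, not the study's fitted model.

```python
import math

# Odds ratios from (conditional) logit coefficients: OR = exp(beta).
# The betas below are hypothetical, back-solved from the reported ORs.

def odds_ratio(beta: float) -> float:
    return math.exp(beta)

attribute_betas = {
    "overdiagnosis_10pct (staff)": -0.49,    # -> OR ≈ 0.61, disfavored
    "fully_automated_ai (staff)": 0.87,      # -> OR ≈ 2.39, favored
    "ai_without_doctor (residents)": -1.43,  # -> OR ≈ 0.24, disfavored
}
for name, beta in attribute_betas.items():
    print(f"{name}: OR = {odds_ratio(beta):.2f}")
```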
Collapse
Affiliation(s)
- Senlin Lin
- Shanghai Eye Disease Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai, China; Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, China
| | - Liping Li
- Shanghai Hongkou Center for Disease Control and Prevention, Shanghai, China
| | - Haidong Zou
- Shanghai Eye Disease Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai, China; Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, China
| | - Yi Xu
- Shanghai Eye Disease Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai, China; Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, China
| | - Lina Lu
- Shanghai Eye Disease Prevention and Treatment Center, Shanghai Eye Hospital, Shanghai, China; Shanghai Key Laboratory of Ocular Fundus Diseases, Shanghai General Hospital, Shanghai Engineering Center for Visual Science and Photomedicine, Shanghai, China
| |
Collapse
|
48
|
Sun K, He M, Xu Y, Wu Q, He Z, Li W, Liu H, Pi X. Multi-label classification of fundus images with graph convolutional network and LightGBM. Comput Biol Med 2022; 149:105909. [PMID: 35998479 DOI: 10.1016/j.compbiomed.2022.105909] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2022] [Revised: 07/03/2022] [Accepted: 07/16/2022] [Indexed: 12/01/2022]
Abstract
Early detection and treatment of retinal disorders are critical for avoiding irreversible visual impairment. Given that patients in the clinical setting may have several types of retinal illness, the development of multi-label fundus disease detection models capable of screening for multiple diseases is more in line with clinical needs. This article presents a composite model based on hybrid graph convolution for patient-level multi-label fundus disease identification, comprising a backbone module, a hybrid graph convolution module, and a classifier module. The relationship between labels is established via graph convolution, and a self-attention mechanism is then employed to design the hybrid graph convolution structure. The backbone module extracts features using EfficientNet-B4, whereas the classifier module outputs multi-label predictions using LightGBM. Additionally, this work investigates the input pattern of binocular images and the influence of label correlation on the model's identification performance. The proposed model, MCGL-Net, outperformed all other state-of-the-art methods on the publicly available ODIR dataset, with F1 reaching 91.60% on the test set. Ablation experiments showed that the hybrid graph convolutional structure and the composite design improve performance under any backbone CNN: adopting hybrid graph convolution increased F1 by 2.39% in trials using EfficientNet-B4 as the backbone, and the composite model's F1 was 5.42% higher than that of the single EfficientNet-B4 model.
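The headline F1 above is computed at the patient level over binary label vectors. The abstract does not state the averaging scheme; a micro-averaged F1 is sketched here as one common choice, with made-up indicator data.

```python
# Micro-averaged F1 over multi-label predictions: pool true positives,
# false positives, and false negatives across all labels, then combine
# precision and recall. The indicator matrices below are made up.

def micro_f1(y_true, y_pred):
    tp = fp = fn = 0
    for t_row, p_row in zip(y_true, y_pred):
        for t, p in zip(t_row, p_row):
            tp += t and p
            fp += (1 - t) and p
            fn += t and (1 - p)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

y_true = [[1, 0, 1], [0, 1, 0], [1, 1, 0]]  # one row per patient
y_pred = [[1, 0, 1], [0, 1, 1], [1, 0, 0]]
print(round(micro_f1(y_true, y_pred), 3))   # 0.8
```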
Collapse
Affiliation(s)
- Kai Sun
- Key Laboratory of Biorheological Science and Technology of Ministry of Education, College of Bioengineering, Chongqing University, Chongqing, China
| | - Mengjia He
- Key Laboratory of Biorheological Science and Technology of Ministry of Education, College of Bioengineering, Chongqing University, Chongqing, China
| | - Yao Xu
- Key Laboratory of Biorheological Science and Technology of Ministry of Education, College of Bioengineering, Chongqing University, Chongqing, China
| | - Qinying Wu
- Key Laboratory of Biorheological Science and Technology of Ministry of Education, College of Bioengineering, Chongqing University, Chongqing, China
| | - Zichun He
- Chongqing Red Cross Hospital (People's Hospital of Jiangbei District), Chongqing, China
| | - Wang Li
- School of Pharmacy and Bioengineering, Chongqing University of Technology, Chongqing, China
| | - Hongying Liu
- Key Laboratory of Biorheological Science and Technology of Ministry of Education, College of Bioengineering, Chongqing University, Chongqing, China; Chongqing Engineering Technology Research Center of Medical Electronic, Chongqing, 400030, People's Republic of China.
| | - Xitian Pi
- Key Laboratory of Biorheological Science and Technology of Ministry of Education, College of Bioengineering, Chongqing University, Chongqing, China; Chongqing Engineering Technology Research Center of Medical Electronic, Chongqing, 400030, People's Republic of China.
| |
Collapse
|
49
|
Real-World Translation of Artificial Intelligence in Neuro-Ophthalmology: The Challenges of Making an Artificial Intelligence System Applicable to Clinical Practice. J Neuroophthalmol 2022; 42:287-291. [PMID: 35921610 DOI: 10.1097/wno.0000000000001682] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
50
|
Li F, Pan J, Yang D, Wu J, Ou Y, Li H, Huang J, Xie H, Ou D, Wu X, Wu B, Sun Q, Fang H, Yang Y, Xu Y, Luo Y, Zhang X. A Multicenter Clinical Study of the Automated Fundus Screening Algorithm. Transl Vis Sci Technol 2022; 11:22. [PMID: 35881410 PMCID: PMC9339691 DOI: 10.1167/tvst.11.7.22] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2022] [Accepted: 06/16/2022] [Indexed: 12/25/2022] Open
Abstract
Purpose To evaluate the effectiveness of automated fundus screening software in detecting eye diseases by comparing its results against those given by human experts. Methods We prospectively enrolled 1743 subjects at seven hospitals throughout China. At each hospital, an operator recorded the subjects' information, took fundus images, and submitted the images to the Image Reading Center of Zhongshan Ophthalmic Center, Sun Yat-Sen University (IRC). The IRC graded the images according to the study protocol, and the same images were automatically screened by the artificial intelligence algorithm. The results of the automated screening algorithm were then compared against the grading results of the IRC. The end-point goals were lower bounds of the 95% CI of sensitivity greater than 0.85 for all three target diseases, and lower bounds of the 95% CI of specificity greater than 0.90 for referable diabetic retinopathy (RDR) and 0.85 for glaucoma suspect (GCS) and referable macular diseases (RMD). Results There were 1585 subjects who completed the procedure and yielded qualified images. The prevalence of RDR, GCS, and RMD was 20.4%, 23.2%, and 49.0%, respectively. The overall sensitivity values for RDR, GCS, and RMD diagnosis were 0.948 (95% CI, 0.918-0.967), 0.891 (95% CI, 0.855-0.919), and 0.901 (95% CI, 0.878-0.920), respectively. The overall specificity values for RDR, GCS, and RMD diagnosis were 0.954 (95% CI, 0.915-0.965), 0.993 (95% CI, 0.986-0.996), and 0.955 (95% CI, 0.939-0.968), respectively. Conclusions Automated fundus screening software demonstrated high sensitivity and specificity in detecting RDR, GCS, and RMD from color fundus images captured using various cameras. Translational Relevance These findings suggest that automated software can improve the screening effectiveness for eye diseases, especially in a primary care context, where experienced ophthalmologists are scarce.
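The end-point check above compares the lower bound of each 95% CI against a threshold. The abstract does not say which interval method was used; a Wilson score interval is sketched here as one standard choice, with invented counts that roughly match the reported RDR sensitivity.

```python
import math

# Lower bound of a 95% Wilson score interval for a proportion (e.g.,
# sensitivity), checked against an end-point threshold. The counts are
# invented; the study's exact CI method is not stated in the abstract.

def wilson_lower(successes: int, n: int, z: float = 1.96) -> float:
    p = successes / n
    denom = 1 + z * z / n
    centre = p + z * z / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (centre - margin) / denom

# Hypothetical: 307 of 324 referable-DR cases detected (sensitivity ≈ 0.948)
lb = wilson_lower(307, 324)
print(f"lower bound = {lb:.3f}, end point (> 0.85) met: {lb > 0.85}")
```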
Collapse
Affiliation(s)
- Fei Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
| | - Jianying Pan
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
| | - Dalu Yang
- Intelligent Healthcare Unit, Baidu, Beijing, China
| | | | - Yiling Ou
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
| | - Huiting Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
| | - Jiamin Huang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
| | - Huirui Xie
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
| | - Dongmei Ou
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
| | - Xiaoyi Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
| | - Binghong Wu
- Intelligent Healthcare Unit, Baidu, Beijing, China
| | - Qinpei Sun
- Intelligent Healthcare Unit, Baidu, Beijing, China
| | - Huihui Fang
- Intelligent Healthcare Unit, Baidu, Beijing, China
| | - Yehui Yang
- Intelligent Healthcare Unit, Baidu, Beijing, China
| | - Yanwu Xu
- Intelligent Healthcare Unit, Baidu, Beijing, China
| | - Yan Luo
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
| | - Xiulan Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
| |
Collapse
|