1. Li F, Wang D, Yang Z, Zhang Y, Jiang J, Liu X, Kong K, Zhou F, Tham CC, Medeiros F, Han Y, Grzybowski A, Zangwill LM, Lam DSC, Zhang X. The AI revolution in glaucoma: Bridging challenges with opportunities. Prog Retin Eye Res 2024; 103:101291. [PMID: 39186968] [DOI: 10.1016/j.preteyeres.2024.101291]
Abstract
Recent advancements in artificial intelligence (AI) herald transformative potential for reshaping glaucoma clinical management, improving screening efficacy, sharpening diagnostic precision, and refining the detection of disease progression. However, incorporating AI into healthcare faces significant hurdles in both algorithm development and deployment. During development, issues arise from the intensive effort required to label data, inconsistent diagnostic standards, and a lack of thorough testing, which often limit the algorithms' widespread applicability. Additionally, the "black box" nature of AI algorithms may leave clinicians wary or skeptical. In deployment, challenges include lower-quality images in real-world settings and the systems' limited ability to generalize across diverse ethnic groups and different diagnostic equipment. Looking ahead, new developments aim to protect data privacy through federated learning paradigms, to improve algorithm generalizability by diversifying input data modalities, and to augment datasets with synthetic imagery. Smartphone integration appears promising for deploying AI algorithms in both clinical and non-clinical settings. Furthermore, introducing large language models (LLMs) as interactive tools in medicine may signify a significant change in how healthcare is delivered. By navigating these challenges and leveraging them as opportunities, the field of glaucoma AI will gain not only improved algorithmic accuracy and optimized data integration but also a paradigm shift towards enhanced clinical acceptance and a transformative improvement in glaucoma care.
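The review names federated learning as a route to privacy-preserving model development. As a concrete illustration, here is a minimal sketch of federated averaging (FedAvg), in which participating sites share only model weights and never raw fundus images; the toy linear model and client sizes are assumptions for illustration, not any specific glaucoma system.

```python
# Minimal FedAvg sketch: average client weights proportionally to client size.
import torch.nn as nn

def fedavg(state_dicts, client_sizes):
    """Size-weighted average of client state dicts (no raw data exchanged)."""
    total = sum(client_sizes)
    return {k: sum(sd[k] * (n / total) for sd, n in zip(state_dicts, client_sizes))
            for k in state_dicts[0]}

# Three hypothetical sites train local copies; only the weights are shared.
clients = [nn.Linear(10, 2) for _ in range(3)]
global_model = nn.Linear(10, 2)
global_model.load_state_dict(fedavg([c.state_dict() for c in clients],
                                    client_sizes=[120, 300, 80]))
print({k: tuple(v.shape) for k, v in global_model.state_dict().items()})
```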
Affiliations
- Fei Li, Deming Wang, Zefeng Yang, Yinhang Zhang, Jiaxuan Jiang, Xiaoyi Liu, Kangjie Kong, and Xiulan Zhang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
- Fengqi Zhou: Ophthalmology, Mayo Clinic Health System, Eau Claire, WI, USA.
- Clement C Tham: Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong SAR, China.
- Felipe Medeiros: Bascom Palmer Eye Institute, University of Miami Miller School of Medicine, Miami, FL, USA.
- Ying Han: Department of Ophthalmology, University of California, San Francisco, CA, USA; The Francis I. Proctor Foundation for Research in Ophthalmology, University of California, San Francisco, CA, USA.
- Andrzej Grzybowski: Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznan, Poland.
- Linda M Zangwill: Hamilton Glaucoma Center, Viterbi Family Department of Ophthalmology, Shiley Eye Institute, University of California, San Diego, CA, USA.
- Dennis S C Lam: The International Eye Research Institute of the Chinese University of Hong Kong (Shenzhen), Shenzhen, China; The C-MER Dennis Lam & Partners Eye Center, C-MER International Eye Care Group, Hong Kong, China.
2. Silva-Rodríguez J, Chakor H, Kobbi R, Dolz J, Ben Ayed I. A Foundation Language-Image Model of the Retina (FLAIR): encoding expert knowledge in text supervision. Med Image Anal 2024; 99:103357. [PMID: 39418828] [DOI: 10.1016/j.media.2024.103357]
Abstract
Foundation vision-language models are currently transforming computer vision and are on the rise in medical imaging, fueled by their promising generalization capabilities. However, initial attempts to transfer this paradigm to medical imaging have shown less impressive performance than in other domains, owing to the significant domain shift and the complex, expert domain knowledge inherent to medical-imaging tasks. Motivated by the need for domain-expert foundation models, we present FLAIR, a pre-trained vision-language model for universal retinal fundus image understanding. To this end, we compiled 38 open-access, mostly categorical fundus imaging datasets from various sources, with up to 101 different target conditions and 288,307 images. We integrate expert domain knowledge in the form of descriptive textual prompts during both pre-training and zero-shot inference, enhancing the less-informative categorical supervision of the data. This textual expert knowledge, compiled from the relevant clinical literature and community standards, describes the fine-grained features of the pathologies as well as the hierarchies and dependencies between them. We report comprehensive evaluations illustrating the benefit of integrating expert knowledge and the strong generalization capabilities of FLAIR under difficult scenarios with domain shifts or unseen categories. When adapted with a lightweight linear probe, FLAIR outperforms fully-trained, dataset-focused models, especially in the few-shot regime. Interestingly, FLAIR outperforms larger-scale generalist image-language models and retina-specific self-supervised networks by a wide margin, which emphasizes the potential of embedding expert domain knowledge and the limitations of generalist models in medical imaging. The pre-trained model is available at: https://github.com/jusiro/FLAIR.
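FLAIR's zero-shot inference follows the CLIP pattern: an image embedding is compared against embeddings of expert-written text prompts, and the most similar prompt wins. The sketch below shows that pattern only; the placeholder linear encoders and prompt wordings are assumptions, not FLAIR's actual architecture (the released model lives at the GitHub link above).

```python
# CLIP-style zero-shot classification with expert text prompts (toy encoders).
import torch
import torch.nn.functional as F

torch.manual_seed(0)
img_encoder = torch.nn.Linear(1024, 512)   # placeholder for the vision tower
txt_encoder = torch.nn.Linear(300, 512)    # placeholder for the text tower

# Expert-knowledge prompts; wording is illustrative, not FLAIR's exact text.
prompts = ["fundus with microaneurysms and hard exudates",
           "optic disc cupping with retinal nerve fibre layer thinning",
           "no abnormal findings"]

image_feat = F.normalize(img_encoder(torch.randn(1, 1024)), dim=-1)
text_feat = F.normalize(txt_encoder(torch.randn(len(prompts), 300)), dim=-1)

# Temperature-scaled cosine similarities, softmaxed into class probabilities.
probs = (100.0 * image_feat @ text_feat.t()).softmax(dim=-1)
for prompt, p in zip(prompts, probs[0].tolist()):
    print(f"{p:.3f}  {prompt}")
```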
Affiliations
- Jose Dolz and Ismail Ben Ayed: ÉTS Montréal, Québec, Canada; Centre de Recherche du Centre Hospitalier de l'Université de Montréal (CR-CHUM), Québec, Canada.
3. Jiachu D, Luo L, Xie M, Xie X, Guo J, Ye H, Cai K, Zhou L, Song G, Jiang F, Huang D, Zhang M, Zheng C. A Meta-Learning Approach for Classifying Multimodal Retinal Images of Retinal Vein Occlusion With Limited Data. Transl Vis Sci Technol 2024; 13:22. [PMID: 39297809] [PMCID: PMC11421671] [DOI: 10.1167/tvst.13.9.22]
Abstract
Purpose: To propose and validate a meta-learning approach for detecting retinal vein occlusion (RVO) from multimodal images with only a few samples. Methods: In this cross-sectional study, we formulate the problem as meta-learning. The meta-training dataset consists of 1254 color fundus (CF) images spanning 39 different fundus diseases. The two meta-testing datasets are a public-domain dataset and an independent dataset from Kandze Prefecture People's Hospital. The proposed meta-learning models comprise two modules: feature extraction networks and prototypical networks (PNs). We use two deep learning models (ResNet and the Contrastive Language-Image Pre-Training network [CLIP]) for feature extraction, and evaluate performance using accuracy, area under the receiver operating characteristic curve (AUCROC), F1-score, and recall. Results: The CLIP-based PNs performed best across all meta-testing datasets. On the public APTOS dataset, the meta-learning algorithms achieved good results, with an accuracy of 86.06% and an AUCROC of 0.87 from only 16 training images. On the hospital dataset, the meta-learning algorithms showed excellent diagnostic capability for detecting RVO with very few shots (AUCROC above 0.99 for n = 4, 8, and 16). Notably, even though the meta-training dataset contains no fluorescein angiography (FA) images, the algorithms also detected RVO well from this different modality (AUCROC above 0.93 for n = 4, 8, and 16). Conclusions: The proposed meta-learning models excel at detecting RVO, not only in CF images but also in FA images from a different imaging modality. Translational Relevance: The proposed meta-learning models could be useful for automatically detecting RVO in CF and FA images.
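The core of a prototypical network is simple: average each class's support embeddings into a prototype, then classify queries by distance to the prototypes. Below is a minimal, self-contained sketch of that step, assuming feature vectors have already been extracted (e.g., by a frozen CLIP image encoder); the embedding dimension and shot counts are illustrative, not the paper's settings.

```python
# Prototypical-network classification over precomputed feature vectors.
import torch

def proto_classify(support, support_y, query, n_classes):
    # Prototype = mean embedding of each class's support set.
    protos = torch.stack([support[support_y == c].mean(0)
                          for c in range(n_classes)])
    # Classify queries by (negative) Euclidean distance to each prototype.
    dists = torch.cdist(query, protos)
    return (-dists).softmax(dim=-1)

torch.manual_seed(0)
support = torch.randn(16, 512)                    # 16-shot toy episode
support_y = torch.arange(2).repeat_interleave(8)  # 2 classes (RVO vs non-RVO)
query = torch.randn(4, 512)
print(proto_classify(support, support_y, query, n_classes=2))
```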
Affiliations
- Danba Jiachu, Gang Song, and Feng Jiang: Kham Eye Centre, Kandze Prefecture People's Hospital, Kangding, China.
- Li Luo, Xiaoling Xie, Jinming Guo, and Mingzhi Zhang: Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong, Shantou University Medical College, Shantou, Guangdong, China.
- Meng Xie, Hehua Ye, Kebo Cai, Lingling Zhou, and Ce Zheng: Department of Ophthalmology, Xinhua Hospital Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai, China.
- Danqing Huang: Discipline Inspection & Supervision Office, Xinhua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China.
4. Martin E, Cook AG, Frost SM, Turner AW, Chen FK, McAllister IL, Nolde JM, Schlaich MP. Ocular biomarkers: useful incidental findings by deep learning algorithms in fundus photographs. Eye (Lond) 2024; 38:2581-2588. [PMID: 38734746] [PMCID: PMC11385472] [DOI: 10.1038/s41433-024-03085-2]
Abstract
BACKGROUND/OBJECTIVES: Artificial intelligence can assist with ocular image analysis for screening and diagnosis, but it is not yet capable of autonomous full-spectrum screening. Hypothetically, false-positive results may hold unrealized screening potential, arising from signals that persist despite training and/or from ambiguous signals such as biomarker overlap or high comorbidity. This study aimed to explore the potential to detect clinically useful incidental ocular biomarkers by screening fundus photographs of hypertensive adults with deep learning algorithms developed for diabetic retinopathy. SUBJECTS/METHODS: Patients referred for treatment-resistant hypertension were imaged at a hospital unit in Perth, Australia, between 2016 and 2022. The same 45° colour fundus photograph selected for each of the 433 imaged participants was processed by three deep learning algorithms. Two expert retinal specialists graded all false-positive results for diabetic retinopathy in non-diabetic participants. RESULTS: Of the 29 non-diabetic participants misclassified as positive for diabetic retinopathy, 28 (97%) had clinically useful retinal biomarkers. The models designed to screen for fewer diseases captured more incidental disease. All three algorithms showed a positive correlation between the severity of hypertensive retinopathy and misclassified diabetic retinopathy. CONCLUSIONS: The results suggest that diabetic deep learning models may be responsive to hypertensive and other clinically useful retinal biomarkers within an at-risk, hypertensive cohort. The observation that models trained for fewer diseases captured more incidental pathology strengthens the case for using self-supervised learning to develop autonomous comprehensive screening. Meanwhile, non-referable and false-positive outputs of other deep learning screening models could be explored for immediate clinical use in other populations.
Affiliations
- Eve Martin: Commonwealth Scientific and Industrial Research Organisation (CSIRO), Kensington, WA, Australia; School of Population and Global Health, The University of Western Australia, Crawley, Australia; Dobney Hypertension Centre - Royal Perth Hospital Unit, Medical School, The University of Western Australia, Perth, Australia; Australian e-Health Research Centre, Floreat, WA, Australia.
- Angus G Cook: School of Population and Global Health, The University of Western Australia, Crawley, Australia.
- Shaun M Frost: Commonwealth Scientific and Industrial Research Organisation (CSIRO), Kensington, WA, Australia; Australian e-Health Research Centre, Floreat, WA, Australia.
- Angus W Turner and Ian L McAllister: Lions Eye Institute, Nedlands, WA, Australia; Centre for Ophthalmology and Visual Science, The University of Western Australia, Perth, Australia.
- Fred K Chen: Lions Eye Institute, Nedlands, WA, Australia; Centre for Ophthalmology and Visual Science, The University of Western Australia, Perth, Australia; Centre for Eye Research Australia, The Royal Victorian Eye and Ear Hospital, East Melbourne, VIC, Australia; Ophthalmology, Department of Surgery, The University of Melbourne, East Melbourne, VIC, Australia; Ophthalmology Department, Royal Perth Hospital, Perth, Australia.
- Janis M Nolde and Markus P Schlaich: Dobney Hypertension Centre - Royal Perth Hospital Unit, Medical School, The University of Western Australia, Perth, Australia; Departments of Cardiology and Nephrology, Royal Perth Hospital, Perth, Australia.
5. Ji J, Li J, Zhang W, Geng Y, Dong Y, Huang J, Hong L. Automated Lung and Colon Cancer Classification Using Histopathological Images. Biomed Eng Comput Biol 2024; 15:11795972241271569. [PMID: 39156985] [PMCID: PMC11325325] [DOI: 10.1177/11795972241271569]
Abstract
Cancer is the leading cause of mortality worldwide, and among all cancers, lung and colon cancer are two of the most common causes of death and morbidity. The aim of this study was to develop an automated lung and colon cancer classification system using histopathological images from the LC25000 dataset. Algorithm development included data splitting, deep neural network model selection, on-the-fly image augmentation, training, and validation. The core of the algorithm was a Swin Transformer V2 model, and 5-fold cross-validation was used to evaluate model performance, measured with accuracy, Kappa, confusion matrix, precision, recall, and F1. Extensive experiments compared the performance of different neural networks, including both mainstream convolutional neural networks and vision transformers. The Swin Transformer V2 model achieved 1 (100%) on all metrics, the first single model to obtain perfect results on this dataset, and has the potential to assist pathologists in classifying lung and colon cancers from histopathology images.
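As a concrete illustration of the evaluation protocol named above, here is a minimal sketch computing accuracy, Cohen's kappa, the confusion matrix, precision, recall, and F1 with scikit-learn; the synthetic predictions and the five-class setup mirroring LC25000 are assumptions for demonstration, not the paper's results.

```python
# Compute the named classification metrics on synthetic 5-class predictions.
import numpy as np
from sklearn.metrics import (accuracy_score, cohen_kappa_score,
                             confusion_matrix, precision_recall_fscore_support)

rng = np.random.default_rng(0)
y_true = rng.integers(0, 5, size=500)      # 5 classes, as in LC25000
y_pred = y_true.copy()
flip = rng.random(500) < 0.05              # simulate a 5% error rate
y_pred[flip] = rng.integers(0, 5, size=flip.sum())

print("accuracy:", accuracy_score(y_true, y_pred))
print("kappa   :", cohen_kappa_score(y_true, y_pred))
p, r, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="macro")
print(f"precision={p:.3f} recall={r:.3f} F1={f1:.3f}")
print(confusion_matrix(y_true, y_pred))
```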
Affiliations
- Jie Ji: Network & Information Center, Shantou University, Shantou, Guangdong, China.
- Jirui Li, Weifeng Zhang, and Yiqun Geng: Guangdong Provincial International Collaborative Center of Molecular Medicine, Laboratory of Molecular Pathology, Shantou University Medical College, Shantou, China.
- Yuejiao Dong, Jiexiong Huang, and Liangli Hong: Department of Pathology, the First Affiliated Hospital of Shantou University Medical College, Shantou, China.
6. Grzybowski A, Jin K, Zhou J, Pan X, Wang M, Ye J, Wong TY. Retina Fundus Photograph-Based Artificial Intelligence Algorithms in Medicine: A Systematic Review. Ophthalmol Ther 2024; 13:2125-2149. [PMID: 38913289] [PMCID: PMC11246322] [DOI: 10.1007/s40123-024-00981-4]
Abstract
We conducted a systematic review of research on artificial intelligence (AI) for retinal fundus photographs, highlighting the use of various AI algorithms, including deep learning (DL) models, in ophthalmic and non-ophthalmic (i.e., systemic) disorders. We found that AI interpretation of retinal images, benchmarked against clinical data and physician experts, represents an innovative solution with demonstrated superior accuracy in identifying many ophthalmic disorders (e.g., diabetic retinopathy (DR), age-related macular degeneration (AMD), optic nerve disorders) and non-ophthalmic disorders (e.g., dementia, cardiovascular disease). The large volume of clinical and imaging data in this field supports the incorporation of AI and DL for automated analysis. AI has the potential to transform healthcare by improving accuracy, speed, and workflow, lowering cost, increasing access, reducing mistakes, and transforming healthcare worker education and training.
Affiliations
- Andrzej Grzybowski: Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznań, Poland.
- Kai Jin, Jingxin Zhou, Xiangji Pan, Meizhu Wang, and Juan Ye: Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China.
- Tien Y Wong: School of Clinical Medicine, Tsinghua Medicine, Tsinghua University, Beijing, China; Singapore Eye Research Institute, Singapore National Eye Center, Singapore.
7. Messica S, Presil D, Hoch Y, Lev T, Hadad A, Katz O, Owens DR. Enhancing stroke risk and prognostic timeframe assessment with deep learning and a broad range of retinal biomarkers. Artif Intell Med 2024; 154:102927. [PMID: 38991398] [DOI: 10.1016/j.artmed.2024.102927]
Abstract
Stroke stands as a major global health issue, causing high death and disability rates and significant social and economic burdens. The effectiveness of existing stroke risk assessment methods is questionable due to their use of inconsistent and varying biomarkers, which may lead to unpredictable risk evaluations. This study introduces an automatic deep learning-based system for predicting stroke risk (both ischemic and hemorrhagic) and estimating the time frame of its occurrence, utilizing a comprehensive set of known retinal biomarkers from fundus images. Our system, tested on the UK Biobank and DRSSW datasets, achieved AUROC scores of 0.83 (95% CI: 0.79-0.85) and 0.93 (95% CI: 0.9-0.95), respectively. These results not only highlight our system's advantage over established benchmarks but also underscore the predictive power of retinal biomarkers in assessing stroke risk and the unique effectiveness of each biomarker. Additionally, the correlation between retinal biomarkers and cardiovascular diseases broadens the potential application of our system, making it a versatile tool for predicting a wide range of cardiovascular conditions.
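AUROC confidence intervals like those reported above are commonly obtained by bootstrap resampling. Here is a minimal sketch of a percentile-bootstrap 95% CI for an AUROC using scikit-learn; the synthetic labels and scores are assumptions for illustration, not the study's data or its exact CI procedure.

```python
# Percentile-bootstrap 95% CI for an AUROC on synthetic risk scores.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 1000)
scores = y * 0.8 + rng.normal(0, 0.5, 1000)   # informative synthetic scores

aucs = []
for _ in range(2000):
    idx = rng.integers(0, len(y), len(y))     # resample with replacement
    if len(np.unique(y[idx])) < 2:            # skip degenerate resamples
        continue
    aucs.append(roc_auc_score(y[idx], scores[idx]))
lo, hi = np.percentile(aucs, [2.5, 97.5])
print(f"AUROC={roc_auc_score(y, scores):.3f} (95% CI {lo:.3f}-{hi:.3f})")
```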
Affiliations
- Dan Presil, Yaacov Hoch, Tsvi Lev, and Or Katz: NEC Israeli Research Center, Herzliya, Israel.
- Aviel Hadad: Ophthalmology Department, Soroka University Medical Center, Be'er Sheva, South District, Israel.
- David R Owens: Swansea University Medical School, Swansea, Wales, UK.
8. Wu MN, He K, Yu YB, Zheng B, Zhu SJ, Hong XQ, Xi WQ, Zhang Z. Intelligent diagnostic model for pterygium by combining attention mechanism and MobileNetV2. Int J Ophthalmol 2024; 17:1184-1192. [PMID: 39026919] [PMCID: PMC11246929] [DOI: 10.18240/ijo.2024.07.02]
Abstract
AIM: To evaluate an intelligent diagnostic model for pterygium. METHODS: For intelligent diagnosis of pterygium, four attention mechanisms (SENet, ECANet, CBAM, and self-attention) were each fused with the lightweight MobileNetV2 backbone to construct a tri-classification model. The study used 1220 anterior segment images covering three pterygium categories, provided by the Eye Hospital of Nanjing Medical University. Conventional classification models (VGG16, ResNet50, MobileNetV2, and EfficientNetB7) were trained on the same dataset for comparison. Model performance was evaluated on 470 anterior segment test images in terms of accuracy, Kappa value, test time, sensitivity, specificity, area under the curve (AUC), and visual heat maps. RESULTS: The MobileNetV2+self-attention model, 281 MB in size, achieved an accuracy of 92.77% and a Kappa value of 88.92%. Testing took 9 ms/image on the server and 138 ms/image on a local computer. Sensitivity, specificity, and AUC were 99.47%, 100%, and 100% for normal anterior segment images; 88.30%, 95.32%, and 96.70% for observation-period images; and 88.18%, 94.44%, and 97.30% for surgery-period images, respectively. CONCLUSION: The developed model is lightweight and can be used not only for detection but also for assessing the severity of pterygium.
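Of the attention mechanisms listed, squeeze-and-excitation (SE) is the simplest to illustrate. Below is a standard SE block in PyTorch of the kind that can be fused into MobileNetV2 feature maps; it is the textbook formulation, not the authors' exact implementation, and the channel count and reduction ratio are illustrative.

```python
# A standard squeeze-and-excitation (SE) channel-attention block.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                     # x: (B, C, H, W)
        w = x.mean(dim=(2, 3))                # squeeze: global average pool
        w = self.fc(w)                        # excitation: channel weights
        return x * w[:, :, None, None]        # recalibrate feature maps

print(SEBlock(64)(torch.randn(2, 64, 8, 8)).shape)  # torch.Size([2, 64, 8, 8])
```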
Affiliations
- Mao-Nian Wu, Bo Zheng, and Shao-Jun Zhu: School of Information Engineering, Huzhou University, Huzhou 313000, Zhejiang Province, China; Zhejiang Province Key Laboratory of Smart Management and Application of Modern Agricultural Resources, Huzhou University, Huzhou 313000, Zhejiang Province, China.
- Kai He: School of Information Engineering, Huzhou University, Huzhou 313000, Zhejiang Province, China; School of Mathematical Information, Shaoxing University, Shaoxing 312000, Zhejiang Province, China.
- Yi-Bei Yu: School of Information Engineering, Huzhou University, Huzhou 313000, Zhejiang Province, China.
- Xiang-Qian Hong, Wen-Qun Xi, and Zhe Zhang: Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, Shenzhen 518040, Guangdong Province, China.
9. Salaheldin AM, Abdel Wahed M, Saleh N. A hybrid model for the detection of retinal disorders using artificial intelligence techniques. Biomed Phys Eng Express 2024; 10:055005. [PMID: 38955139] [DOI: 10.1088/2057-1976/ad5db2]
Abstract
The prevalence of vision impairment is increasing at an alarming rate. The goal of the study was to create an automated method that uses optical coherence tomography (OCT) to classify retinal disorders into four categories: choroidal neovascularization, diabetic macular edema, drusen, and normal cases. The proposed framework combines machine learning and deep learning techniques: the InceptionV3 convolutional neural network serves as a feature extractor, feeding support vector machine (SVM), K-nearest neighbor (K-NN), decision tree (DT), and ensemble model (EM) classifiers. Model performance was evaluated against nine criteria using a dataset of 18,000 OCT images. The SVM, K-NN, DT, and EM classifiers exhibited state-of-the-art performance, with classification accuracies of 99.43%, 99.54%, 97.98%, and 99.31%, respectively. The methodology offers a promising route to automatic identification and classification of retinal disorders, reducing human error and saving time.
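The hybrid design, a frozen CNN feeding a classical classifier, can be sketched in a few lines. The torchvision InceptionV3 and the random stand-in images below are assumptions for illustration; the paper's preprocessing, dataset, and hyperparameters are not reproduced.

```python
# Frozen InceptionV3 features feeding a classical SVM classifier.
import torch
from torchvision.models import inception_v3, Inception_V3_Weights
from sklearn.svm import SVC

net = inception_v3(weights=Inception_V3_Weights.DEFAULT)
net.fc = torch.nn.Identity()                 # expose 2048-d pooled features
net.eval()

with torch.no_grad():
    X = net(torch.randn(8, 3, 299, 299)).numpy()   # stand-in OCT batch
y = [0, 1, 2, 3, 0, 1, 2, 3]                       # CNV/DME/drusen/normal

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:4]))
```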
Affiliations
- Ahmed M Salaheldin: Systems and Biomedical Engineering Department, Faculty of Engineering, Cairo University, Giza, Egypt; Systems and Biomedical Engineering Department, Higher Institute of Engineering, EL Shorouk Academy, Cairo, Egypt.
- Manal Abdel Wahed: Systems and Biomedical Engineering Department, Faculty of Engineering, Cairo University, Giza, Egypt.
- Neven Saleh: Systems and Biomedical Engineering Department, Higher Institute of Engineering, EL Shorouk Academy, Cairo, Egypt; Electrical Communication and Electronic Systems Engineering Department, Engineering Faculty, October University for Modern Sciences and Arts, Giza, Egypt.
10. Chen L, Tseng VS, Tsung TH, Lu DW. A multi-label transformer-based deep learning approach to predict focal visual field progression. Graefes Arch Clin Exp Ophthalmol 2024; 262:2227-2235. [PMID: 38334809] [DOI: 10.1007/s00417-024-06393-1]
Abstract
PURPOSE: Tracking functional changes in visual fields (VFs) through standard automated perimetry remains a clinical standard for glaucoma diagnosis. This study aims to develop and evaluate a deep learning (DL) model to predict regional VF progression, which has not been explored in prior studies. METHODS: The study included 2430 eyes of 1283 patients with four or more consecutive VF examinations from baseline. A multi-label transformer-based network (MTN) using longitudinal VF data was developed to predict progression in six VF regions (clusters) mapped to the optic disc. Progression was defined using the mean deviation (MD) slope and calculated for all six clusters. Separate MTN models, trained for focal progression detection and forecasting with varying numbers of input VFs, were tested on a held-out test set. RESULTS: The MTNs demonstrated excellent macro-average AUCs above 0.884 in detecting focal VF progression given five or more VFs. With a minimum of six VFs, the model showed superior and more stable overall and per-cluster performance compared with five VFs. Given six VFs, the MTN achieved a macro-average AUC of 0.848 for forecasting progression across eight VF tests, and excellent performance (AUC ≥ 0.86, sensitivity 1.0, specificity ≥ 0.70) in four of six clusters for eyes that already had severe VF loss (baseline MD ≤ -12 dB). CONCLUSION: The high prediction accuracy suggests that multi-label DL networks trained on longitudinal VF results may assist in identifying and forecasting progression in VF regions.
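Progression labels of this kind are typically derived by regressing MD on time per region and thresholding the slope. The sketch below shows that derivation; the -0.5 dB/year cutoff and the toy series are assumptions for illustration, not the paper's definition.

```python
# Label per-cluster VF progression from the mean deviation (MD) slope.
import numpy as np

def progression_labels(md_series, years, slope_cutoff=-0.5):
    """md_series: (n_visits, n_clusters) MD values in dB; years: (n_visits,)."""
    labels = []
    for c in range(md_series.shape[1]):
        slope = np.polyfit(years, md_series[:, c], deg=1)[0]  # dB/year
        labels.append(int(slope < slope_cutoff))              # 1 = progressing
    return labels

years = np.array([0.0, 0.5, 1.1, 1.6, 2.2, 2.9])
md = np.column_stack([-1 - 0.8 * years,          # steadily worsening cluster
                      -2 + 0.0 * years + 0.1])   # stable cluster
print(progression_labels(md, years))             # -> [1, 0]
```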
Affiliations
- Ling Chen: Institute of Hospital and Health Care Administration, National Yang Ming Chiao Tung University, Taipei, Taiwan.
- Vincent S Tseng: Department of Computer Science, National Yang Ming Chiao Tung University, Hsinchu, Taiwan.
- Ta-Hsin Tsung and Da-Wen Lu: Department of Ophthalmology, Tri-Service General Hospital, National Defense Medical Center, No.325, Sec.2, Chenggong Rd., Neihu District, Taipei, Taiwan.
11. Tan W, Wei Q, Xing Z, Fu H, Kong H, Lu Y, Yan B, Zhao C. Fairer AI in ophthalmology via implicit fairness learning for mitigating sexism and ageism. Nat Commun 2024; 15:4750. [PMID: 38834557] [DOI: 10.1038/s41467-024-48972-0]
Abstract
The transformative role of artificial intelligence (AI) in various fields highlights the need for it to be both accurate and fair. Biased medical AI systems pose significant risks to achieving fair and equitable healthcare. Here, we present an implicit fairness learning approach to build a fairer ophthalmology AI (called FairerOPTH) that mitigates sex (biological attribute) and age biases in AI diagnosis of eye diseases. Specifically, FairerOPTH incorporates the causal relationship between fundus features and eye diseases, which is relatively independent of sensitive attributes such as race, sex, and age. On a large and diverse collected dataset, we demonstrate that FairerOPTH significantly outperforms several state-of-the-art approaches in diagnostic accuracy and fairness for 38 eye diseases in ultra-widefield imaging and 16 eye diseases in narrow-angle imaging. This work demonstrates the significant potential of implicit fairness learning to promote equitable treatment for patients regardless of their sex or age.
Affiliations
- Weimin Tan, Zhen Xing, Hao Fu, and Bo Yan: School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai, China.
- Qiaoling Wei, Hongyu Kong, Yi Lu, and Chen Zhao: Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China.
12. Swaminathan U, Daigavane S. Unveiling the Potential: A Comprehensive Review of Artificial Intelligence Applications in Ophthalmology and Future Prospects. Cureus 2024; 16:e61826. [PMID: 38975538] [PMCID: PMC11227442] [DOI: 10.7759/cureus.61826]
Abstract
Artificial intelligence (AI) has emerged as a transformative force in healthcare, particularly in the field of ophthalmology. This comprehensive review examines the current applications of AI in ophthalmology, highlighting its significant contributions to diagnostic accuracy, treatment efficacy, and patient care. AI technologies, such as deep learning algorithms, have demonstrated exceptional performance in the early detection and diagnosis of various eye conditions, including diabetic retinopathy (DR), age-related macular degeneration (AMD), and glaucoma. Additionally, AI has enhanced the analysis of ophthalmic imaging techniques like optical coherence tomography (OCT) and fundus photography, facilitating more precise disease monitoring and management. The review also explores AI's role in surgical assistance, predictive analytics, and personalized treatment plans, showcasing its potential to revolutionize clinical practice and improve patient outcomes. Despite these advancements, challenges such as data privacy, regulatory hurdles, and ethical considerations remain. The review underscores the need for continued research and collaboration among clinicians, researchers, technology developers, and policymakers to address these challenges and fully harness the potential of AI in improving eye health worldwide. By integrating AI with teleophthalmology and developing AI-driven wearable devices, the future of ophthalmic care promises enhanced accessibility, efficiency, and efficacy, ultimately reducing the global burden of visual impairment and blindness.
Affiliations
- Uma Swaminathan and Sachin Daigavane: Ophthalmology, Jawaharlal Nehru Medical College, Datta Meghe Institute of Higher Education and Research, Wardha, India.
13. Usmani E, Bacchi S, Zhang H, Guymer C, Kraczkowska A, Qinfeng Shi J, Gilhotra J, Chan WO. Prediction of vitreomacular traction syndrome outcomes with deep learning: A pilot study. Eur J Ophthalmol 2024:11206721241258253. [PMID: 38809664] [DOI: 10.1177/11206721241258253]
Abstract
PURPOSE: To investigate the potential of an optical coherence tomography (OCT)-based deep learning (DL) model in predicting vitreomacular traction (VMT) syndrome outcomes. DESIGN: A single-centre retrospective review. METHODS: Records of consecutive adult patients attending the Royal Adelaide Hospital vitreoretinal clinic with evidence of spontaneous VMT were reviewed from January 2019 until May 2022. Patients with causes of cystoid macular oedema or secondary causes of VMT were excluded. OCT scans and outcome data obtained from patient records were used to train, test, and validate the models. RESULTS: Ninety-five patient files were identified from the OCT (SPECTRALIS system; Heidelberg Engineering, Heidelberg, Germany) records. Approximately 25% of the patients improved spontaneously, 48% remained stable, and 27% progressed. The final longitudinal model predicted 'improved' or 'stable' disease with positive predictive values of 0.72 and 0.79, respectively, and overall accuracy greater than 50%. CONCLUSIONS: Deep learning models may be utilised in real-world settings to predict outcomes of VMT. This approach warrants further investigation, as it may improve patient outcomes by helping ophthalmologists cross-check management decisions and reducing unnecessary interventions or delays.
Affiliations
- Eiman Usmani, Chelsea Guymer, Jagjit Gilhotra, and Weng Onn Chan: Discipline of Ophthalmology and Visual Science, University of Adelaide, Adelaide, Australia; Department of Ophthalmology, Royal Adelaide Hospital and South Australian Institute of Ophthalmology, Adelaide, Australia.
- Stephen Bacchi: Department of Ophthalmology, Royal Adelaide Hospital and South Australian Institute of Ophthalmology, Adelaide, Australia.
- Hao Zhang: AMI Fusion Technology, University of Adelaide, Adelaide, Australia.
- Amber Kraczkowska: Discipline of Ophthalmology and Visual Science, University of Adelaide, Adelaide, Australia.
- Javen Qinfeng Shi: Institute of Machine Learning, University of Adelaide, Adelaide, Australia.
14. Wang Y, Wei R, Yang D, Song K, Shen Y, Niu L, Li M, Zhou X. Development and validation of a deep learning model to predict axial length from ultra-wide field images. Eye (Lond) 2024; 38:1296-1300. [PMID: 38102471] [PMCID: PMC11076502] [DOI: 10.1038/s41433-023-02885-2]
Abstract
BACKGROUND: To validate the feasibility of building a deep learning model to predict axial length (AL) in moderately to highly myopic patients from ultra-wide field (UWF) images. METHODS: This study included 6174 UWF images from 3134 myopic patients seen between 2014 and 2020 at the Eye and ENT Hospital of Fudan University. Of the 6174 images, 4939 were used for training, 617 for validation, and 618 for testing. The coefficient of determination (R2), mean absolute error (MAE), and mean squared error (MSE) were used to evaluate model performance. RESULTS: The model predicted AL with high accuracy: R2, MSE, and MAE were 0.579, 1.419, and 0.904, respectively. The prediction error was under 1 mm in 64.88% of tests, within 5% in 76.90%, and within 10% in 97.57%. The prediction bias correlated strongly and negatively with true AL values and differed significantly between males and females (P < 0.001). Heatmaps showed that the model focused on posterior atrophic changes in pathological fundi and on the peri-optic zone in normal fundi. In sex-specific models, R2, MSE, and MAE of the female AL model were 0.411, 1.357, and 0.911 on the female dataset and 0.343, 2.428, and 1.264 on the male dataset; the corresponding metrics of the male AL model were 0.216, 2.900, and 1.352 on the male dataset and 0.083, 2.112, and 1.154 on the female dataset. CONCLUSIONS: It is feasible to use deep learning models to predict AL in moderately to highly myopic patients from UWF images.
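For reference, the reported regression metrics and the within-1-mm error band can be computed as below; the synthetic axial lengths and error level are assumptions for illustration only, not the study's data.

```python
# Regression metrics (R^2, MSE, MAE) plus the fraction within a 1-mm band.
import numpy as np
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

rng = np.random.default_rng(0)
al_true = rng.normal(26.5, 1.5, 500)           # axial lengths in mm (synthetic)
al_pred = al_true + rng.normal(0, 0.9, 500)    # simulated model error

print("R2 :", r2_score(al_true, al_pred))
print("MSE:", mean_squared_error(al_true, al_pred))
print("MAE:", mean_absolute_error(al_true, al_pred))
print("within 1 mm:", np.mean(np.abs(al_pred - al_true) <= 1.0))
```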
Affiliations
- Yunzhe Wang, Danjuan Yang, Yang Shen, Lingling Niu, Meiyan Li, and Xingtao Zhou: Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University, Shanghai, China; NHC Key Laboratory of Myopia (Fudan University) and Key Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China; Shanghai Research Center of Ophthalmology and Optometry, Shanghai, China; Shanghai Engineering Research Center of Laser and Autostereoscopic 3D for Vision Care, Shanghai, China.
- Ruoyan Wei: the same four affiliations, plus Shanghai Medical College and Zhongshan Hospital Immunotherapy Translational Research Center, Shanghai, China.
- Kaimin Song: Beijing Airdoc Technology Co., Ltd, Beijing, China.
15. Musleh AM, AlRyalat SA, Abid MN, Salem Y, Hamila HM, Sallam AB. Diagnostic accuracy of artificial intelligence in detecting retinitis pigmentosa: A systematic review and meta-analysis. Surv Ophthalmol 2024; 69:411-417. [PMID: 38042377] [DOI: 10.1016/j.survophthal.2023.11.010]
Abstract
Retinitis pigmentosa (RP) is often undetected in its early stages, and artificial intelligence (AI) has emerged as a promising tool in medical diagnostics. We therefore conducted a systematic review and meta-analysis to evaluate the diagnostic accuracy of AI in detecting RP from ophthalmic images. We systematically searched the PubMed, Scopus, and Web of Science databases on December 31, 2022. We included English-language studies that used any ophthalmic imaging modality (such as OCT or fundus photography), used any AI technology, had at least an expert ophthalmologist as the reference standard, and proposed an AI algorithm able to distinguish between images with and without retinitis pigmentosa features. Sensitivity, specificity, and area under the curve (AUC) were the main accuracy measures. Fourteen studies entered the qualitative analysis and ten the quantitative analysis; in total, the meta-analysed studies covered 920,162 images. Overall, AI showed excellent performance in detecting RP, with pooled sensitivity of 0.985 (95% CI: 0.948-0.996) and pooled specificity of 0.993 (95% CI: 0.982-0.997). The area under the receiver operating characteristic curve (AUROC), using a random-effects model, was 0.999 (95% CI: 0.998-1.000; P < 0.001). The Zhou and Dendukuri I² test revealed a low level of heterogeneity between studies (I² = 19.94% for sensitivity and 21.07% for specificity), and the bivariate I² (20.33%) likewise suggested a low degree of heterogeneity. We found evidence supporting the accuracy of AI in the detection of RP, with a low level of heterogeneity between studies.
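Pooled estimates of this kind are commonly obtained by random-effects meta-analysis on the logit scale. The sketch below implements DerSimonian-Laird pooling of sensitivity with the I² statistic; the TP/FN counts are invented for illustration, and the paper itself used a bivariate model, a related but more elaborate approach.

```python
# DerSimonian-Laird random-effects pooling of sensitivity on the logit scale.
import numpy as np

tp = np.array([90, 180, 45, 300])      # true positives per study (synthetic)
fn = np.array([3, 6, 2, 8])            # false negatives per study (synthetic)
y = np.log(tp / fn)                    # logit(sensitivity) = log(TP/FN)
v = 1 / tp + 1 / fn                    # approximate variance on logit scale

w = 1 / v                                              # fixed-effect weights
q = np.sum(w * (y - np.sum(w * y) / w.sum()) ** 2)     # Cochran's Q
df = len(y) - 1
i2 = max(0.0, (q - df) / q) * 100
tau2 = max(0.0, (q - df) / (w.sum() - (w ** 2).sum() / w.sum()))

w_re = 1 / (v + tau2)                                  # random-effects weights
pooled_sens = 1 / (1 + np.exp(-np.sum(w_re * y) / w_re.sum()))
print(f"pooled sensitivity={pooled_sens:.3f}, I^2={i2:.1f}%")
```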
Affiliations
- Saif Aldeen AlRyalat: Department of Ophthalmology, The University of Jordan, Amman, Jordan; Department of Ophthalmology, Houston Methodist Hospital, Houston, TX, USA.
- Mohammad Naim Abid: Marka Specialty Hospital, Amman, Jordan; Valley Retina Institute, P.A., McAllen, TX, USA.
- Yahia Salem: Faculty of Medicine, The University of Jordan, Amman, Jordan.
- Ahmed B Sallam: Harvey and Bernice Jones Eye Institute at the University of Arkansas for Medical Sciences (UAMS), Little Rock, AR, USA.
16. Wang X, Li H, Zheng H, Sun G, Wang W, Yi Z, Xu A, He L, Wang H, Jia W, Li Z, Li C, Ye M, Du B, Chen C. Automatic Detection of 30 Fundus Diseases Using Ultra-Widefield Fluorescein Angiography with Deep Experts Aggregation. Ophthalmol Ther 2024; 13:1125-1144. [PMID: 38416330] [DOI: 10.1007/s40123-024-00900-7]
Abstract
INTRODUCTION: Inaccurate and untimely diagnosis of fundus diseases leads to vision-threatening complications and even blindness. We built a deep learning platform (DLP) for automatic detection of 30 fundus diseases using ultra-widefield fluorescein angiography (UWFFA) with deep experts aggregation. METHODS: This retrospective, cross-sectional database study included 61,609 UWFFA images dated from 2016 to 2021, involving more than 3364 subjects across multiple centers in China. All subjects were divided into 30 disease groups. ConvNeXt, a state-of-the-art convolutional neural network architecture, was chosen as the backbone, and the system's receiver operating characteristic (ROC) performance was assessed on the test data and an external test dataset. We compared the classification performance of the proposed system with that of ophthalmologists, including two retinal specialists. RESULTS: The DLP detected up to 30 fundus diseases, with a frequency-weighted average area under the receiver operating characteristic curve (AUC) of 0.940 on the primary test dataset and 0.954 on the external multi-hospital test dataset. The tool showed accuracy comparable to retina specialists in diagnosis and evaluation. CONCLUSIONS: This is the first study of multi-retinal-disease classification on a large-scale UWFFA dataset. We believe that our UWFFA DLP advances artificial intelligence (AI) diagnosis of various retinal diseases and will contribute to labor savings and precision medicine, especially in remote areas.
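A frequency-weighted average AUC weights each class's one-vs-rest AUC by its prevalence. A minimal sketch with scikit-learn follows; the 30 synthetic classes mirror the disease groups above, but the data and score model are assumptions for illustration.

```python
# Frequency-weighted one-vs-rest AUC over 30 synthetic classes.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, k = 3000, 30
y = rng.integers(0, k, n)
scores = rng.random((n, k))
scores[np.arange(n), y] += 1.0                # make scores informative
scores /= scores.sum(axis=1, keepdims=True)   # row-normalise to probabilities

auc = roc_auc_score(y, scores, multi_class="ovr", average="weighted")
print(f"frequency-weighted AUC: {auc:.3f}")
```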
Affiliations
- Xiaoling Wang, Hongmei Zheng, Gongpeng Sun, Wenyu Wang, Zuohuizi Yi, A'min Xu, Lu He, and Changzheng Chen: Eye Center, Renmin Hospital of Wuhan University, No. 9 ZhangZhiDong Street, Wuhan, 430060, Hubei, China.
- He Li, Mang Ye, and Bo Du: National Engineering Research Center for Multimedia Software, School of Computer Science, Wuhan University, Wuhan, 430072, Hubei, China.
- Haiyan Wang and Wei Jia: Shaanxi Eye Hospital, Xi'an People's Hospital (Xi'an Fourth Hospital), No. 21, Jiefang Road, Xi'an, 710004, Shaanxi, China.
- Zhiqing Li and Chang Li: Tianjin Medical University Eye Hospital, No. 251, Fukang Road, Nankai District, Tianjin, 300384, China.
17. Zhang J, Lin S, Cheng T, Xu Y, Lu L, He J, Yu T, Peng Y, Zhang Y, Zou H, Ma Y. RETFound-enhanced community-based fundus disease screening: real-world evidence and decision curve analysis. NPJ Digit Med 2024; 7:108. [PMID: 38693205] [PMCID: PMC11063045] [DOI: 10.1038/s41746-024-01109-5]
Abstract
Visual impairments and blindness are major public health concerns globally. Effective eye disease screening aided by artificial intelligence (AI) is a promising countermeasure, although it is challenged by practical constraints such as poor image quality in community screening. The recently developed ophthalmic foundation model RETFound has shown higher accuracy in retinal image recognition tasks. This study developed an RETFound-enhanced deep learning (DL) model for multiple-eye disease screening using real-world images from community screenings. Our results revealed that our DL model improved the sensitivity and specificity by over 15% compared with commercial models. Our model also shows better generalisation ability than AI models developed using traditional processes. Additionally, decision curve analysis underscores the higher net benefit of employing our model in both urban and rural settings in China. These findings indicate that the RETFound-enhanced DL model can achieve a higher net benefit in community-based screening, advocating its adoption in low- and middle-income countries to address global eye health challenges.
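Decision curve analysis compares a model's net benefit, the true-positive rate minus the harm-weighted false-positive rate, against treat-all and treat-none strategies across threshold probabilities. Below is a minimal sketch of that calculation on synthetic risks; it illustrates the method only, not the study's data or models.

```python
# Net benefit across threshold probabilities (decision curve analysis).
import numpy as np

def net_benefit(y, p, pt):
    tp = np.sum((p >= pt) & (y == 1))
    fp = np.sum((p >= pt) & (y == 0))
    n = len(y)
    return tp / n - fp / n * pt / (1 - pt)   # FP discounted by odds of pt

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 2000)                                  # true disease
p = np.clip(0.6 * y + rng.normal(0.2, 0.15, 2000), 0.01, 0.99)  # model risks

for pt in (0.1, 0.2, 0.3):
    nb_model = net_benefit(y, p, pt)
    nb_all = y.mean() - (1 - y.mean()) * pt / (1 - pt)        # treat-all
    print(f"pt={pt:.1f}: model={nb_model:.3f}, treat-all={nb_all:.3f}")
```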
Affiliations
- Juzhao Zhang, Haidong Zou, and Yingyan Ma: Shanghai Eye Disease Prevention & Treatment Center/Shanghai Eye Hospital, School of Medicine, Tongji University, Shanghai, China; National Clinical Research Center for Eye Disease, Shanghai, China; Shanghai Engineering Center of Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China; Department of Ophthalmology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China.
- Senlin Lin, Yi Xu, and Lina Lu: Shanghai Eye Disease Prevention & Treatment Center/Shanghai Eye Hospital, School of Medicine, Tongji University, Shanghai, China; National Clinical Research Center for Eye Disease, Shanghai, China; Shanghai Engineering Center of Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China.
- Jiangnan He, Tao Yu, and Yajun Peng: Shanghai Eye Disease Prevention & Treatment Center/Shanghai Eye Hospital, School of Medicine, Tongji University, Shanghai, China.
- Tianhao Cheng and Yuejie Zhang: School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai, China.
18
Driban M, Yan A, Selvam A, Ong J, Vupparaboina KK, Chhablani J. Artificial intelligence in chorioretinal pathology through fundoscopy: a comprehensive review. Int J Retina Vitreous 2024; 10:36. PMID: 38654344; PMCID: PMC11036694; DOI: 10.1186/s40942-024-00554-4.
Abstract
BACKGROUND Applications for artificial intelligence (AI) in ophthalmology are continually evolving. Fundoscopy is one of the oldest ocular imaging techniques but remains a mainstay in posterior segment imaging due to its prevalence, ease of use, and ongoing technological advancement. AI has been leveraged for fundoscopy to accomplish core tasks including segmentation, classification, and prediction. MAIN BODY In this article we provide a review of AI in fundoscopy applied to representative chorioretinal pathologies, including diabetic retinopathy and age-related macular degeneration, among others. We conclude with a discussion of future directions and current limitations. SHORT CONCLUSION As AI evolves, it will become increasingly essential for the modern ophthalmologist to understand its applications and limitations to improve patient outcomes and continue to innovate.
Affiliation(s)
- Matthew Driban: Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Audrey Yan: Department of Medicine, West Virginia School of Osteopathic Medicine, Lewisburg, WV, USA
- Amrish Selvam: Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Joshua Ong: Michigan Medicine, University of Michigan, Ann Arbor, USA
- Jay Chhablani: Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
19
Wang J, Wang SZ, Qin XL, Chen M, Zhang HM, Liu X, Xiang MJ, Hu JB, Huang HY, Lan CJ. Algorithm of automatic identification of diabetic retinopathy foci based on ultra-widefield scanning laser ophthalmoscopy. Int J Ophthalmol 2024; 17:610-615. PMID: 38638262; PMCID: PMC10988084; DOI: 10.18240/ijo.2024.04.02.
Abstract
AIM To propose an algorithm for the automatic detection of diabetic retinopathy (DR) lesions in ultra-widefield scanning laser ophthalmoscopy (SLO) images. METHODS The algorithm combined Faster R-CNN (Faster Region-based Convolutional Neural Network), ResNet50 (Residual Network 50) and FPN (Feature Pyramid Network) to detect hemorrhagic spots, cotton wool spots, exudates, and microaneurysms in DR ultra-widefield SLO. Sub-image segmentation combined with the deeper residual network Faster R-CNN+ResNet50 was employed for feature extraction to improve learning efficiency. Feature fusion was carried out by the feature pyramid network FPN, which significantly improved lesion detection rates in SLO fundus images. RESULTS In an analysis of 1076 ultra-widefield SLO images provided by our hospital, at a resolution of 2600×2048 pixels, the detection accuracies for hemorrhagic spots, cotton wool spots, exudates, and microaneurysms were 87.23%, 83.57%, 86.75%, and 54.94%, respectively. CONCLUSION The proposed algorithm enables intelligent detection of DR lesions in ultra-widefield SLO and offers significant advantages over traditional intelligent diagnosis algorithms based on color fundus imaging.
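For readers unfamiliar with the Faster R-CNN+ResNet50+FPN stack named above, the sketch below shows how such a lesion detector could be assembled with torchvision; the class count and input size are placeholders, and this is a generic reconstruction rather than the paper's implementation.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Four DR lesion classes plus background, mirroring the paper's categories.
NUM_CLASSES = 5  # background, hemorrhagic spot, cotton wool spot, exudate, microaneurysm

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")  # ResNet-50 backbone with an FPN neck
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

model.eval()
with torch.no_grad():
    # A placeholder image; real ultra-widefield SLO inputs would be tiled sub-images.
    image = torch.rand(3, 1024, 1300)
    detections = model([image])[0]   # dict with 'boxes', 'labels', 'scores'
```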
Affiliation(s)
- Jie Wang: Aier Eye Hospital (East of Chengdu), Chengdu 610051, Sichuan Province, China
- Su-Zhen Wang: Department of Ophthalmology, Chengdu First People's Hospital, Chengdu 610095, Sichuan Province, China
- Xiao-Lin Qin: Aier Eye Hospital (East of Chengdu), Chengdu 610051, Sichuan Province, China
- Meng Chen: Aier Eye Hospital (East of Chengdu), Chengdu 610051, Sichuan Province, China
- Heng-Ming Zhang: School of Computer and Artificial Intelligence, Southwest Jiaotong University, Chengdu 610097, Sichuan Province, China
- Xin Liu: Aier Eye Hospital (East of Chengdu), Chengdu 610051, Sichuan Province, China
- Meng-Jun Xiang: Aier Eye Hospital (East of Chengdu), Chengdu 610051, Sichuan Province, China
- Jian-Bin Hu: Chengdu Aier Eye Hospital, Chengdu 610041, Sichuan Province, China
- Hai-Yu Huang: School of Computer and Artificial Intelligence, Southwest Jiaotong University, Chengdu 610097, Sichuan Province, China
- Chang-Jun Lan: Aier Eye Hospital (East of Chengdu), Chengdu 610051, Sichuan Province, China
20
Lin S, Ma Y, Jiang Y, Li W, Peng Y, Yu T, Xu Y, Zhu J, Lu L, Zou H. Service Quality and Residents' Preferences for Facilitated Self-Service Fundus Disease Screening: Cross-Sectional Study. J Med Internet Res 2024; 26:e45545. PMID: 38630535; PMCID: PMC11063888; DOI: 10.2196/45545.
Abstract
BACKGROUND Fundus photography is the most important examination in eye disease screening. A facilitated self-service eye screening pattern based on a fully automatic fundus camera was developed in 2022 in Shanghai, China; it may help solve the problem of insufficient human resources in primary health care institutions. However, the service quality of this new pattern and residents' preferences regarding it are unclear. OBJECTIVE This study aimed to compare service quality and residents' preferences between facilitated self-service eye screening and traditional manual screening, and to explore the relationships between screening service quality and residents' preferences. METHODS We conducted a cross-sectional study in Shanghai, China. Residents who underwent facilitated self-service fundus disease screening at one of the screening sites were assigned to the exposure group; those who were screened with a traditional fundus camera operated by an optometrist at an adjacent site comprised the control group. The primary outcome was screening service quality, including effectiveness (image quality and screening efficiency), physiological discomfort, safety, convenience, and trustworthiness. The secondary outcome was the participants' preferences. Differences in service quality and participants' preferences between the 2 groups were compared separately using chi-square tests. Subgroup analyses exploring the relationships between screening service quality and residents' preferences were conducted using generalized logit models. RESULTS A total of 358 residents were enrolled; among them, 176 (49.16%) were included in the exposure group and the remaining 182 (50.84%) in the control group. Residents' basic characteristics were balanced between the 2 groups. There was no significant difference in service quality between the 2 groups (image quality pass rate: P=.79; average screening time: P=.57; no physiological discomfort rate: P=.92; safety rate: P=.78; convenience rate: P=.95; trustworthiness rate: P=.20). However, the proportion of participants who were willing to use the same technology for their next screening was significantly lower in the exposure group than in the control group (P<.001). Subgroup analyses suggest that distrust in the facilitated self-service eye screening might increase the probability of refusal to undergo screening (P=.02). CONCLUSIONS This study confirms that the facilitated self-service fundus disease screening pattern can achieve good service quality. However, it is difficult to reverse residents' preference for manual screening in a short period, especially when the original manual service is already excellent. Therefore, the digital transformation of health care must proceed cautiously, and attention should be paid to residents' individual needs. More efficient human-machine collaboration and personalized health management solutions based on large language models are both needed.
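The between-group comparisons described above reduce to chi-square tests on contingency tables. A small illustrative sketch follows; the counts are invented placeholders standing in for the study's actual data.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: image-quality pass/fail counts in the
# self-service (exposure) and manual (control) groups.
table = [[160, 16],   # exposure: pass, fail
         [168, 14]]   # control: pass, fail

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f}, dof={dof}")
```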
Affiliation(s)
- Senlin Lin: Shanghai Eye Diseases Prevention & Treatment Center/Shanghai Eye Hospital, School of Medicine, Tongji University, Shanghai, China; National Clinical Research Center for Eye Diseases, Shanghai, China; Shanghai Engineering Research Center of Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China
- Yingyan Ma: Shanghai Eye Diseases Prevention & Treatment Center/Shanghai Eye Hospital, School of Medicine, Tongji University, Shanghai, China; National Clinical Research Center for Eye Diseases, Shanghai, China; Shanghai Engineering Research Center of Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China; Shanghai General Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Yanwei Jiang: Shanghai Hongkou Center for Disease Control and Prevention, Shanghai, China
- Wenwen Li: School of Management, Fudan University, Shanghai, China
- Yajun Peng: Shanghai Eye Diseases Prevention & Treatment Center/Shanghai Eye Hospital, School of Medicine, Tongji University, Shanghai, China; National Clinical Research Center for Eye Diseases, Shanghai, China; Shanghai Engineering Research Center of Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China
- Tao Yu: Shanghai Eye Diseases Prevention & Treatment Center/Shanghai Eye Hospital, School of Medicine, Tongji University, Shanghai, China; National Clinical Research Center for Eye Diseases, Shanghai, China; Shanghai Engineering Research Center of Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China
- Yi Xu: Shanghai Eye Diseases Prevention & Treatment Center/Shanghai Eye Hospital, School of Medicine, Tongji University, Shanghai, China; National Clinical Research Center for Eye Diseases, Shanghai, China; Shanghai Engineering Research Center of Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China
- Jianfeng Zhu: Shanghai Eye Diseases Prevention & Treatment Center/Shanghai Eye Hospital, School of Medicine, Tongji University, Shanghai, China; National Clinical Research Center for Eye Diseases, Shanghai, China; Shanghai Engineering Research Center of Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China
- Lina Lu: Shanghai Eye Diseases Prevention & Treatment Center/Shanghai Eye Hospital, School of Medicine, Tongji University, Shanghai, China; National Clinical Research Center for Eye Diseases, Shanghai, China; Shanghai Engineering Research Center of Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China
- Haidong Zou: Shanghai Eye Diseases Prevention & Treatment Center/Shanghai Eye Hospital, School of Medicine, Tongji University, Shanghai, China; National Clinical Research Center for Eye Diseases, Shanghai, China; Shanghai Engineering Research Center of Precise Diagnosis and Treatment of Eye Diseases, Shanghai, China; Shanghai General Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
21
Hasan MM, Phu J, Sowmya A, Meijering E, Kalloniatis M. Artificial intelligence in the diagnosis of glaucoma and neurodegenerative diseases. Clin Exp Optom 2024; 107:130-146. PMID: 37674264; DOI: 10.1080/08164622.2023.2235346.
Abstract
Artificial intelligence is a rapidly expanding field within computer science that encompasses the emulation of human intelligence by machines. Machine learning and deep learning, the two primary data-driven pattern analysis approaches under the umbrella of artificial intelligence, have created considerable interest in the last few decades. The evolution of technology has resulted in a substantial amount of artificial intelligence research on ophthalmic and neurodegenerative disease diagnosis using retinal images. Various artificial intelligence-based techniques have been used for diagnostic purposes, including traditional machine learning, deep learning, and their combinations. Presented here is a review of the literature from the last 10 years on this topic, discussing the use of artificial intelligence in analysing data from different modalities, and their combinations, for the diagnosis of glaucoma and neurodegenerative diseases. The performance of published artificial intelligence methods varies due to several factors, yet the results suggest that such methods can potentially facilitate clinical diagnosis. Generally, the accuracy of artificial intelligence-assisted diagnosis ranges from 67% to 98%, and the area under the receiver operating characteristic curve (AUC) ranges from 0.71 to 0.98, which outperforms typical human performance of 71.5% accuracy and 0.86 AUC. This indicates that artificial intelligence-based tools can provide clinicians with useful information that would assist in improving diagnosis. The review also suggests that there is room for improvement of existing artificial intelligence-based models using retinal imaging modalities before they are incorporated into clinical practice.
Affiliation(s)
- Md Mahmudul Hasan: School of Computer Science and Engineering, University of New South Wales, Kensington, New South Wales, Australia
- Jack Phu: School of Optometry and Vision Science, University of New South Wales, Kensington, Australia; Centre for Eye Health, University of New South Wales, Sydney, New South Wales, Australia; School of Medicine (Optometry), Deakin University, Waurn Ponds, Victoria, Australia
- Arcot Sowmya: School of Computer Science and Engineering, University of New South Wales, Kensington, New South Wales, Australia
- Erik Meijering: School of Computer Science and Engineering, University of New South Wales, Kensington, New South Wales, Australia
- Michael Kalloniatis: School of Optometry and Vision Science, University of New South Wales, Kensington, Australia; School of Medicine (Optometry), Deakin University, Waurn Ponds, Victoria, Australia
22
Liu Y, Xie H, Zhao X, Tang J, Yu Z, Wu Z, Tian R, Chen Y, Chen M, Ntentakis DP, Du Y, Chen T, Hu Y, Zhang S, Lei B, Zhang G. Automated detection of nine infantile fundus diseases and conditions in retinal images using a deep learning system. EPMA J 2024; 15:39-51. PMID: 38463622; PMCID: PMC10923762; DOI: 10.1007/s13167-024-00350-y.
Abstract
Purpose We developed the Infant Retinal Intelligent Diagnosis System (IRIDS), an automated system to aid early diagnosis and monitoring of infantile fundus diseases and health conditions and to address the urgent needs of ophthalmologists. Methods We developed IRIDS by combining convolutional neural networks and transformer structures, using a dataset of 7697 retinal images (1089 infants) from four hospitals. It identifies nine fundus diseases and conditions, namely, retinopathy of prematurity (ROP) (mild ROP, moderate ROP, and severe ROP), retinoblastoma (RB), retinitis pigmentosa (RP), Coats disease, coloboma of the choroid, congenital retinal fold (CRF), and normal. IRIDS also includes depth attention modules, ResNet-18 (Res-18), and Multi-Axis Vision Transformer (MaxViT). Performance was compared to that of ophthalmologists using 450 retinal images. IRIDS employed a five-fold cross-validation approach to generate the classification results. Results Several baseline models achieved the following metrics: accuracy, precision, recall, F1-score (F1), kappa, and area under the receiver operating characteristic curve (AUC) with best values of 94.62% (95% CI, 94.34%-94.90%), 94.07% (95% CI, 93.32%-94.82%), 90.56% (95% CI, 88.64%-92.48%), 92.34% (95% CI, 91.87%-92.81%), 91.15% (95% CI, 90.37%-91.93%), and 99.08% (95% CI, 99.07%-99.09%), respectively. In comparison, IRIDS showed promising results compared to ophthalmologists, demonstrating an average accuracy, precision, recall, F1, kappa, and AUC of 96.45% (95% CI, 96.37%-96.53%), 95.86% (95% CI, 94.56%-97.16%), 94.37% (95% CI, 93.95%-94.79%), 95.03% (95% CI, 94.45%-95.61%), 94.43% (95% CI, 93.96%-94.90%), and 99.51% (95% CI, 99.51%-99.51%), respectively, in multi-label classification on the test dataset, utilizing the Res-18 and MaxViT models. These results suggest that, particularly in terms of AUC, IRIDS achieved performance that warrants further investigation for the detection of retinal abnormalities. Conclusions IRIDS identifies nine infantile fundus diseases and conditions accurately. It may aid non-ophthalmologist personnel in underserved areas in infantile fundus disease screening, thus preventing severe complications. IRIDS serves as an example of artificial intelligence integration into ophthalmology to achieve better outcomes in predictive, preventive, and personalized medicine (PPPM/3PM) in the treatment of infantile fundus diseases. Supplementary Information The online version contains supplementary material available at 10.1007/s13167-024-00350-y.
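As a rough sketch of how a CNN branch (ResNet-18) and a transformer branch (MaxViT) might be combined, the fragment below averages the two branches' logits. The real system is richer (depth attention modules, five-fold cross-validation, trained weights), so treat this only as an assumption-laden illustration, not the IRIDS implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18, maxvit_t

NUM_CLASSES = 9  # the nine fundus diseases and conditions

class DualBranchEnsemble(nn.Module):
    """Averages class logits from a CNN (ResNet-18) branch and a
    transformer (MaxViT) branch; a simplified stand-in for IRIDS."""
    def __init__(self):
        super().__init__()
        self.cnn = resnet18(num_classes=NUM_CLASSES)
        self.vit = maxvit_t(num_classes=NUM_CLASSES)

    def forward(self, x):
        return (self.cnn(x) + self.vit(x)) / 2

model = DualBranchEnsemble()
logits = model(torch.rand(2, 3, 224, 224))  # MaxViT-T expects 224x224 inputs
```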
Affiliation(s)
- Yaling Liu: Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040, China
- Hai Xie: National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Xinyu Zhao: Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040, China
- Jiannan Tang: Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040, China
- Zhen Yu: Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040, China
- Zhenquan Wu: Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040, China
- Ruyin Tian: Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040, China
- Yi Chen: Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040, China; Guizhou Medical University, Guiyang, Guizhou, China
- Miaohong Chen: Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040, China; Guizhou Medical University, Guiyang, Guizhou, China
- Dimitrios P. Ntentakis: Retina Service, Ines and Fred Yeatts Retina Research Laboratory, Angiogenesis Laboratory, Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
- Yueshanyi Du: Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040, China
- Tingyi Chen: Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040, China; Guizhou Medical University, Guiyang, Guizhou, China
- Yarou Hu: Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040, China
- Sifan Zhang: Guizhou Medical University, Guiyang, Guizhou, China; Southern University of Science and Technology School of Medicine, Shenzhen, China
- Baiying Lei: National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Guoming Zhang: Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen, 518040, China; Guizhou Medical University, Guiyang, Guizhou, China
23
Pandey PU, Ballios BG, Christakis PG, Kaplan AJ, Mathew DJ, Ong Tone S, Wan MJ, Micieli JA, Wong JCY. Ensemble of deep convolutional neural networks is more accurate and reliable than board-certified ophthalmologists at detecting multiple diseases in retinal fundus photographs. Br J Ophthalmol 2024; 108:417-423. PMID: 36720585; PMCID: PMC10894841; DOI: 10.1136/bjo-2022-322183.
Abstract
AIMS To develop an algorithm to classify multiple retinal pathologies accurately and reliably from fundus photographs and to validate its performance against human experts. METHODS We trained a deep convolutional ensemble (DCE), an ensemble of five convolutional neural networks (CNNs), to classify retinal fundus photographs into diabetic retinopathy (DR), glaucoma, age-related macular degeneration (AMD) and normal eyes. The CNN architecture was based on the InceptionV3 model, and initial weights were pretrained on the ImageNet dataset. We used 43 055 fundus images from 12 public datasets. Five trained ensembles were then tested on an 'unseen' set of 100 images. Seven board-certified ophthalmologists were asked to classify these test images. RESULTS Board-certified ophthalmologists achieved a mean accuracy of 72.7% over all classes, while the DCE achieved a mean accuracy of 79.2% (p=0.03). The DCE had a significantly higher mean F1-score for DR classification compared with the ophthalmologists (76.8% vs 57.5%; p=0.01) and greater but statistically non-significant mean F1-scores for glaucoma (83.9% vs 75.7%; p=0.10), AMD (85.9% vs 85.2%; p=0.69) and normal eyes (73.0% vs 70.5%; p=0.39). The DCE also had greater mean agreement between accuracy and confidence (81.6% vs 70.3%; p<0.001). DISCUSSION We developed a deep learning model and found that it could classify four categories of fundus images more accurately and reliably than board-certified ophthalmologists. This work provides proof-of-principle that an algorithm is capable of accurate and reliable recognition of multiple retinal diseases using only fundus photographs.
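A deep convolutional ensemble of the kind described, five InceptionV3 networks whose softmax outputs are averaged, can be sketched in a few lines of torchvision code. Weight files, the class mapping, and the training loop are omitted and assumed; this is a structural illustration, not the authors' code.

```python
import torch
import torch.nn as nn
from torchvision.models import inception_v3

NUM_CLASSES = 4  # DR, glaucoma, AMD, normal

def make_member():
    # ImageNet-pretrained InceptionV3 with a re-sized final layer.
    m = inception_v3(weights="IMAGENET1K_V1")
    m.fc = nn.Linear(m.fc.in_features, NUM_CLASSES)
    return m

members = [make_member() for _ in range(5)]   # five ensemble members

@torch.no_grad()
def ensemble_predict(x):
    """Average the softmax outputs of all members (eval mode skips the aux head)."""
    for m in members:
        m.eval()
    probs = torch.stack([torch.softmax(m(x), dim=1) for m in members])
    return probs.mean(dim=0)

preds = ensemble_predict(torch.rand(1, 3, 299, 299)).argmax(dim=1)  # 299x299 inputs
```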
Affiliation(s)
- Prashant U Pandey: School of Biomedical Engineering, The University of British Columbia, Vancouver, British Columbia, Canada
- Brian G Ballios: Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada; Krembil Research Institute, University Health Network, Toronto, Ontario, Canada; Kensington Vision and Research Centre and Kensington Research Institute, Toronto, Ontario, Canada
- Panos G Christakis: Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada; Kensington Vision and Research Centre and Kensington Research Institute, Toronto, Ontario, Canada
- Alexander J Kaplan: Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- David J Mathew: Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada; Krembil Research Institute, University Health Network, Toronto, Ontario, Canada; Kensington Vision and Research Centre and Kensington Research Institute, Toronto, Ontario, Canada
- Stephan Ong Tone: Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada; Sunnybrook Research Institute, Toronto, Ontario, Canada
- Michael J Wan: Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Jonathan A Micieli: Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada; Kensington Vision and Research Centre and Kensington Research Institute, Toronto, Ontario, Canada; Department of Ophthalmology, St. Michael's Hospital, Unity Health, Toronto, Ontario, Canada
- Jovi C Y Wong: Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
24
Thiemann N, Sonntag SR, Kreikenbohm M, Böhmerle G, Stagge J, Grisanti S, Martinetz T, Miura Y. Artificial Intelligence in Fluorescence Lifetime Imaging Ophthalmoscopy (FLIO) Data Analysis-Toward Retinal Metabolic Diagnostics. Diagnostics (Basel) 2024; 14:431. PMID: 38396470; PMCID: PMC10888399; DOI: 10.3390/diagnostics14040431.
Abstract
The purpose of this study was to investigate the possibility of implementing an artificial intelligence (AI) approach for the analysis of fluorescence lifetime imaging ophthalmoscopy (FLIO) data even with small datasets. FLIO data, including the fluorescence intensity and mean fluorescence lifetime (τm) of two spectral channels, as well as OCT-A data from 26 non-smokers and 28 smokers without systemic and ocular diseases, were used. The analysis was performed with support vector machines (SVMs), a well-known AI method for small datasets, and compared with the results of convolutional neural networks (CNNs) and autoencoder networks. The SVM was the only tested AI method able to distinguish τm between non-smokers and heavy smokers, with an accuracy of about 80%. OCT-A data did not show significant differences. The feasibility and usefulness of AI in analyzing FLIO and OCT-A data from eyes without any apparent retinal disease were demonstrated. Although further studies with larger datasets are necessary to validate the results, these findings strongly suggest that AI could be useful in analyzing FLIO data even from healthy subjects without retinal disease and even with small datasets. AI-assisted FLIO is expected to greatly advance early retinal diagnosis.
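For a dataset of 54 subjects such as this, an SVM pipeline with cross-validation is the standard small-data recipe. The sketch below uses random placeholder features in place of the real τm measurements; feature count and kernel settings are assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical feature matrix: per-eye tau_m summaries from the two FLIO
# spectral channels; labels 0 = non-smoker, 1 = smoker.
rng = np.random.default_rng(0)
X = rng.random((54, 8))                 # 54 subjects, 8 illustrative tau_m features
y = np.array([0] * 26 + [1] * 28)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)   # small-data-friendly evaluation
print(f"accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```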
Affiliation(s)
- Natalie Thiemann: Institute for Neuro- and Bioinformatics, University of Lübeck, 23538 Lübeck, Germany
- Svenja Rebecca Sonntag: Department of Ophthalmology, University of Lübeck, University Hospital Schleswig-Holstein, Campus Lübeck, 23538 Lübeck, Germany
- Marie Kreikenbohm: Department of Ophthalmology, University of Lübeck, University Hospital Schleswig-Holstein, Campus Lübeck, 23538 Lübeck, Germany
- Giulia Böhmerle: Department of Ophthalmology, University of Lübeck, University Hospital Schleswig-Holstein, Campus Lübeck, 23538 Lübeck, Germany
- Jessica Stagge: Department of Ophthalmology, University of Lübeck, University Hospital Schleswig-Holstein, Campus Lübeck, 23538 Lübeck, Germany
- Salvatore Grisanti: Department of Ophthalmology, University of Lübeck, University Hospital Schleswig-Holstein, Campus Lübeck, 23538 Lübeck, Germany
- Thomas Martinetz: Institute for Neuro- and Bioinformatics, University of Lübeck, 23538 Lübeck, Germany
- Yoko Miura: Department of Ophthalmology, University of Lübeck, University Hospital Schleswig-Holstein, Campus Lübeck, 23538 Lübeck, Germany; Institute of Biomedical Optics, University of Lübeck, 23538 Lübeck, Germany
25
Prashar J, Tay N. Performance of artificial intelligence for the detection of pathological myopia from colour fundus images: a systematic review and meta-analysis. Eye (Lond) 2024; 38:303-314. PMID: 37550366; PMCID: PMC10810874; DOI: 10.1038/s41433-023-02680-z.
Abstract
BACKGROUND Pathological myopia (PM) is a major cause of worldwide blindness and represents a serious threat to eye health globally. Artificial intelligence (AI)-based methods are gaining traction in ophthalmology as highly sensitive and specific tools for screening and diagnosis of many eye diseases. However, there is currently a lack of high-quality evidence for their use in the diagnosis of PM. METHODS A systematic review and meta-analysis of studies evaluating the diagnostic performance of AI-based tools in PM was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidance. Five electronic databases were searched, results were assessed against the inclusion criteria and a quality assessment was conducted for included studies. Model sensitivity and specificity were pooled using the DerSimonian and Laird (random-effects) model. Subgroup analysis and meta-regression were performed. RESULTS Of 1021 citations identified, 17 studies were included in the systematic review and 11 studies, evaluating 165,787 eyes, were included in the meta-analysis. The area under the summary receiver operator curve (SROC) was 0.9905. The pooled sensitivity was 95.9% [95.5%-96.2%], and the overall pooled specificity was 96.5% [96.3%-96.6%]. The pooled diagnostic odds ratio (DOR) for detection of PM was 841.26 [418.37-1691.61]. CONCLUSIONS This systematic review and meta-analysis provides robust early evidence that AI-based, particularly deep-learning based, diagnostic tools are a highly specific and sensitive modality for the detection of PM. There is potential for such tools to be incorporated into ophthalmic public health screening programmes, particularly in resource-poor areas with a substantial prevalence of high myopia.
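The DerSimonian and Laird random-effects pooling used in this meta-analysis has a closed form. A compact Python sketch follows; the inputs are hypothetical per-study effects and variances (for example, on the logit scale), not data from the review.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool per-study effect estimates with the DerSimonian-Laird
    random-effects estimator; returns (pooled, ci_low, ci_high)."""
    effects, variances = np.asarray(effects), np.asarray(variances)
    w = 1.0 / variances                          # fixed-effect weights
    mu_fe = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - mu_fe) ** 2)       # Cochran's Q statistic
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                # between-study variance
    w_re = 1.0 / (variances + tau2)              # random-effects weights
    mu_re = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return mu_re, mu_re - 1.96 * se, mu_re + 1.96 * se

# pooled, lo, hi = dersimonian_laird(study_effects, study_variances)  # hypothetical inputs
```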
Affiliation(s)
- Jai Prashar: University College London, London, UK; Moorfields Eye Hospital NHS Foundation Trust, London, UK
26
Choi JY, Ryu IH, Kim JK, Lee IS, Yoo TK. Development of a generative deep learning model to improve epiretinal membrane detection in fundus photography. BMC Med Inform Decis Mak 2024; 24:25. PMID: 38273286; PMCID: PMC10811871; DOI: 10.1186/s12911-024-02431-4.
Abstract
BACKGROUND The epiretinal membrane (ERM) is a common retinal disorder characterized by abnormal fibrocellular tissue at the vitreomacular interface. Most patients with ERM are asymptomatic at early stages. Therefore, screening for ERM will become increasingly important. Despite the high prevalence of ERM, few deep learning studies have investigated ERM detection in the color fundus photography (CFP) domain. In this study, we built a generative model to enhance ERM detection performance in the CFP. METHODS This deep learning study retrospectively collected 302 ERM and 1,250 healthy CFP data points from a healthcare center. The generative model using StyleGAN2 was trained using single-center data. EfficientNetB0 with StyleGAN2-based augmentation was validated using independent internal single-center data and external datasets. We randomly assigned healthcare center data to the development (80%) and internal validation (20%) datasets. Data from two publicly accessible sources were used as external validation datasets. RESULTS StyleGAN2 facilitated realistic CFP synthesis with the characteristic cellophane reflex features of the ERM. The proposed method with StyleGAN2-based augmentation outperformed the typical transfer learning without a generative adversarial network. The proposed model achieved an area under the receiver operating characteristic (AUC) curve of 0.926 for internal validation. AUCs of 0.951 and 0.914 were obtained for the two external validation datasets. Compared with the deep learning model without augmentation, StyleGAN2-based augmentation improved the detection performance and contributed to the focus on the location of the ERM. CONCLUSIONS We proposed an ERM detection model by synthesizing realistic CFP images with the pathological features of ERM through generative deep learning. We believe that our deep learning framework will help achieve a more accurate detection of ERM in a limited data setting.
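Conceptually, the augmentation step amounts to sampling synthetic ERM-positive images from a trained StyleGAN2 generator and appending them to the real training set before fine-tuning the classifier. The sketch below assumes a generator `G` following the stylegan2-ada-pytorch calling convention; it illustrates the idea only and is not the authors' pipeline.

```python
import torch
from torch.utils.data import ConcatDataset, TensorDataset

def synthesize_erm(G, n, z_dim=512, device="cpu"):
    """Draw n synthetic ERM-like fundus images from an assumed trained
    StyleGAN2 generator G (signature G(z, c), with c=None if unconditional)."""
    z = torch.randn(n, z_dim, device=device)
    with torch.no_grad():
        fake = G(z, None)
    labels = torch.ones(n, dtype=torch.long)   # all synthetic images labelled ERM-positive
    return TensorDataset(fake.cpu(), labels)

# Augment the real training set, then fine-tune a classifier
# (EfficientNet-B0 in the paper) on the combined data:
# train_set = ConcatDataset([real_train_set, synthesize_erm(G, 500)])
```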
Affiliation(s)
- Joon Yul Choi: Department of Biomedical Engineering, Yonsei University, Wonju, South Korea
- Ik Hee Ryu: Department of Refractive Surgery, B&VIIT Eye Center, B2 GT Tower, 1317-23 Seocho-Dong, Seocho-Gu, Seoul, South Korea; Research and Development Department, VISUWORKS, Seoul, South Korea
- Jin Kuk Kim: Department of Refractive Surgery, B&VIIT Eye Center, B2 GT Tower, 1317-23 Seocho-Dong, Seocho-Gu, Seoul, South Korea; Research and Development Department, VISUWORKS, Seoul, South Korea
- In Sik Lee: Department of Refractive Surgery, B&VIIT Eye Center, B2 GT Tower, 1317-23 Seocho-Dong, Seocho-Gu, Seoul, South Korea
- Tae Keun Yoo: Department of Refractive Surgery, B&VIIT Eye Center, B2 GT Tower, 1317-23 Seocho-Dong, Seocho-Gu, Seoul, South Korea; Research and Development Department, VISUWORKS, Seoul, South Korea
27
Al-Absi HRH, Pai A, Naeem U, Mohamed FK, Arya S, Sbeit RA, Bashir M, El Shafei MM, El Hajj N, Alam T. DiaNet v2 deep learning based method for diabetes diagnosis using retinal images. Sci Rep 2024; 14:1595. PMID: 38238377; PMCID: PMC10796402; DOI: 10.1038/s41598-023-49677-y.
Abstract
Diabetes mellitus (DM) is a prevalent chronic metabolic disorder linked to increased morbidity and mortality. With a significant portion of cases remaining undiagnosed, particularly in the Middle East and North Africa (MENA) region, more accurate and accessible diagnostic methods are essential. Current diagnostic tests such as fasting plasma glucose (FPG), the oral glucose tolerance test (OGTT), random plasma glucose (RPG), and hemoglobin A1c (HbA1c) have limitations, leading to misclassifications and discomfort for patients. The aim of this study is to enhance diabetes diagnosis accuracy by developing an improved predictive model using retinal images from the Qatari population, addressing the limitations of current diagnostic methods. This study explores an alternative approach involving retinal images, building upon DiaNet, the first deep learning model for diabetes detection based solely on retinal images. The newly proposed DiaNet v2 model was developed using a large dataset from Qatar Biobank (QBB) and Hamad Medical Corporation (HMC) covering a wide range of pathologies in the retinal images. Utilizing this extensive collection of retinal images from 5545 participants (2540 diabetic patients and 3005 controls), DiaNet v2 was developed for diabetes diagnosis. DiaNet v2 achieves over 92% accuracy, 93% sensitivity, and 91% specificity in distinguishing diabetic patients from the control group. Given the high prevalence of diabetes and the limitations of existing diagnostic methods in clinical settings, this study proposes an innovative solution. By leveraging a comprehensive retinal image dataset and applying advanced deep learning techniques, DiaNet v2 demonstrates remarkable accuracy in diabetes diagnosis. This approach has the potential to revolutionize diabetes detection, providing a more accessible, non-invasive, and accurate method for early intervention and treatment planning, particularly in regions with high diabetes rates such as MENA.
Affiliation(s)
- Hamada R H Al-Absi: College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Anant Pai: Ophthalmology Section, Department of Surgery, Hamad Medical Corporation, Doha, Qatar
- Usman Naeem: Ophthalmology Section, Department of Surgery, Hamad Medical Corporation, Doha, Qatar
- Fatma Kassem Mohamed: Ophthalmology Section, Department of Surgery, Hamad Medical Corporation, Doha, Qatar
- Saket Arya: Ophthalmology Section, Department of Surgery, Hamad Medical Corporation, Doha, Qatar
- Rami Abu Sbeit: Ophthalmology Section, Department of Surgery, Hamad Medical Corporation, Doha, Qatar
- Mohammed Bashir: Endocrine Section, Department of Medicine, Hamad Medical Corporation, Doha, Qatar; Qatar Metabolic Institute, Hamad Medical Corporation, Doha, Qatar
- Nady El Hajj: College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar; College of Health and Life Sciences, Hamad Bin Khalifa University, Doha, Qatar
- Tanvir Alam: College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
28
Li B, Chen H, Yu W, Zhang M, Lu F, Ma J, Hao Y, Li X, Hu B, Shen L, Mao J, He X, Wang H, Ding D, Li X, Chen Y. The performance of a deep learning system in assisting junior ophthalmologists in diagnosing 13 major fundus diseases: a prospective multi-center clinical trial. NPJ Digit Med 2024; 7:8. PMID: 38212607; PMCID: PMC10784504; DOI: 10.1038/s41746-023-00991-9.
Abstract
Artificial intelligence (AI)-based diagnostic systems have been reported to improve fundus disease screening in previous studies. This multicenter prospective self-controlled clinical trial aims to evaluate the diagnostic performance of a deep learning system (DLS) in assisting junior ophthalmologists in detecting 13 major fundus diseases. A total of 1493 fundus images from 748 patients were prospectively collected from five tertiary hospitals in China. Nine junior ophthalmologists were trained and annotated the images with or without the suggestions proposed by the DLS. The diagnostic performance was evaluated among three groups: the DLS-assisted junior ophthalmologist group (test group), the junior ophthalmologist group (control group) and the DLS group. The diagnostic consistency was 84.9% (95% CI, 83.0% ~ 86.9%), 72.9% (95% CI, 70.3% ~ 75.6%) and 85.5% (95% CI, 83.5% ~ 87.4%) in the test group, control group and DLS group, respectively. With the help of the proposed DLS, the diagnostic consistency of junior ophthalmologists improved by approximately 12% (95% CI, 9.1% ~ 14.9%), a statistically significant gain (P < 0.001). For the detection of the 13 diseases, the test group achieved significantly higher sensitivities (72.2% ~ 100.0%) and comparable specificities (90.8% ~ 98.7%) compared with the control group (sensitivities, 50% ~ 100%; specificities, 96.7% ~ 99.8%). The DLS group presented performance similar to the test group in the detection of any fundus abnormality (sensitivity, 95.7%; specificity, 87.2%) and each of the 13 diseases (sensitivity, 83.3% ~ 100.0%; specificity, 89.0% ~ 98.0%). The proposed DLS provides a novel approach for the automatic detection of 13 major fundus diseases with high diagnostic consistency and helped improve the performance of junior ophthalmologists, especially by reducing the risk of missed diagnoses. ClinicalTrials.gov NCT04723160.
Affiliation(s)
- Bing Li: Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
- Huan Chen: Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
- Weihong Yu: Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
- Ming Zhang: Department of Ophthalmology, West China Hospital, Sichuan University, Chengdu, China
- Fang Lu: Department of Ophthalmology, West China Hospital, Sichuan University, Chengdu, China
- Jingxue Ma: Department of Ophthalmology, Second Hospital of Hebei Medical University, Shijiazhuang, China
- Yuhua Hao: Department of Ophthalmology, Second Hospital of Hebei Medical University, Shijiazhuang, China
- Xiaorong Li: Department of Retina, Tianjin Medical University Eye Hospital, Tianjin, China
- Bojie Hu: Department of Retina, Tianjin Medical University Eye Hospital, Tianjin, China
- Lijun Shen: Department of Retina Center, Affiliated Eye Hospital of Wenzhou Medical University, Hangzhou, Zhejiang Province, China
- Jianbo Mao: Department of Retina Center, Affiliated Eye Hospital of Wenzhou Medical University, Hangzhou, Zhejiang Province, China
- Xixi He: School of Information Science and Technology, North China University of Technology, Beijing, China; Beijing Key Laboratory on Integration and Analysis of Large-scale Stream Data, Beijing, China
- Hao Wang: Visionary Intelligence Ltd., Beijing, China
- Xirong Li: MoE Key Lab of DEKE, Renmin University of China, Beijing, China
- Youxin Chen: Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
29
Naing SL, Aimmanee P. Automated optic disk segmentation for optic disk edema classification using factorized gradient vector flow. Sci Rep 2024; 14:371. PMID: 38172282; PMCID: PMC10764308; DOI: 10.1038/s41598-023-50908-5.
Abstract
One significant ocular sign of neuro-ophthalmic disorders of the optic disk (OD) is optic disk edema (ODE). The etiologies of ODE are broad, with various symptoms and effects. Early detection of ODE can prevent potential vision loss and other serious vision problems. The texture of an edematous OD differs significantly from that of a non-edematous OD in retinal images. As a result, techniques that usually work for non-edematous cases may not work well for edematous cases. We propose a fully automatic classification of edematous and non-edematous ODs for fundus image collections containing a mixture of both. The proposed algorithm involves localization, segmentation, and classification of edematous and non-edematous ODs. Factorized gradient vector flow (FGVF) was used to segment the ODs. The OD type was classified using a linear support vector machine (SVM) based on 27 features extracted from the vessels, the gray-level co-occurrence matrix (GLCM), color, and the intensity line profile. The proposed method was tested on 295 images, with 146 edematous cases and 149 non-edematous cases, from three datasets. The segmentation achieves an average precision of 88.41%, recall of 89.35%, and F1-score of 86.53%. The average classification accuracy is 99.40%, outperforming the state-of-the-art method by 3.43%.
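A few of the texture descriptors feeding the linear SVM, those derived from the GLCM, can be computed with scikit-image as sketched below. The distances, angles, and property list are assumptions; the paper's exact 27-feature set is not reproduced here.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import LinearSVC

def glcm_features(od_patch):
    """GLCM texture descriptors of a grayscale (uint8) optic-disk patch;
    an illustrative subset of the 27 features described in the paper."""
    glcm = graycomatrix(od_patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# X = np.vstack([glcm_features(p) for p in od_patches])   # hypothetical patches
# clf = LinearSVC().fit(X, labels)                        # 0 = non-edematous, 1 = ODE
```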
Affiliation(s)
- Seint Lei Naing: School of Information, Computer, and Communication Technology, Sirindhorn International Institute of Technology, Thammasat University, 131 Moo 5, Tiwanon Rd, Bangkadi, Meung, Patumthani, 12000, Thailand
- Pakinee Aimmanee: School of Information, Computer, and Communication Technology, Sirindhorn International Institute of Technology, Thammasat University, 131 Moo 5, Tiwanon Rd, Bangkadi, Meung, Patumthani, 12000, Thailand
30
Zhao T, Guan Y, Tu D, Yuan L, Lu G. Neighbored-attention U-net (NAU-net) for diabetic retinopathy image segmentation. Front Med (Lausanne) 2023; 10:1309795. PMID: 38131040; PMCID: PMC10733532; DOI: 10.3389/fmed.2023.1309795.
Abstract
Background Diabetic retinopathy-related (DR-related) diseases are posing an increasing threat to eye health as the number of young patients with diabetes mellitus increases significantly. The automatic diagnosis of DR-related diseases has benefited from the rapid development of image semantic segmentation and other deep learning technologies. Methods Inspired by the architecture of the U-Net family, a neighbored-attention U-Net (NAU-Net) is designed to balance identification performance and computational cost for DR fundus image segmentation. In the new network, only the neighboring high- and low-dimensional feature maps of the encoder and decoder are fused, using four attention gates. With this improvement, the common target features in the high-dimensional feature maps of the encoder are enhanced and fused with the low-dimensional feature maps of the decoder. Moreover, the network fuses only neighboring layers and omits the inner-layer connections commonly used in U-Net++. Consequently, it achieves better identification performance at a lower computational cost. Results The experimental results on three open DR fundus image datasets (DRIVE, HRF, and CHASEDB) indicate that NAU-Net outperforms FCN, SegNet, attention U-Net, and U-Net++ in terms of Dice score, IoU, accuracy, and precision, while its computational cost lies between those of attention U-Net and U-Net++. Conclusion The proposed NAU-Net exhibits better performance at a relatively low computational cost and provides an efficient novel approach to DR fundus image segmentation and a new automatic tool for diagnosing DR-related eye diseases.
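The attention gates that fuse neighboring encoder and decoder feature maps are typically additive gates in the style of Attention U-Net. The PyTorch module below is a plausible sketch of one such gate, not the NAU-Net source; channel sizes are placeholders.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate: a decoder gating signal g reweights the
    neighboring encoder skip-connection feature map x."""
    def __init__(self, ch_x, ch_g, ch_mid):
        super().__init__()
        self.wx = nn.Conv2d(ch_x, ch_mid, kernel_size=1)
        self.wg = nn.Conv2d(ch_g, ch_mid, kernel_size=1)
        self.psi = nn.Conv2d(ch_mid, 1, kernel_size=1)

    def forward(self, x, g):
        # x: encoder features; g: upsampled decoder features (same spatial size)
        attn = torch.sigmoid(self.psi(torch.relu(self.wx(x) + self.wg(g))))
        return x * attn   # suppress irrelevant regions, keep vessel-like targets

gate = AttentionGate(ch_x=64, ch_g=64, ch_mid=32)
out = gate(torch.rand(1, 64, 128, 128), torch.rand(1, 64, 128, 128))
```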
Affiliation(s)
- Tingting Zhao: The Second Department of Internal Medicine, Donghu Hospital of Wuhan, Wuhan, China
- Yawen Guan: The Second Department of Internal Medicine, Donghu Hospital of Wuhan, Wuhan, China
- Dan Tu: The Second Department of Internal Medicine, Donghu Hospital of Wuhan, Wuhan, China
- Lixia Yuan: Department of Ophthalmology, Donghu Hospital of Wuhan, Wuhan, China
- Guangtao Lu: Precision Manufacturing Institute, Wuhan University of Science and Technology, Wuhan, China
31
Kim JH, Hong J, Choi H, Kang HG, Yoon S, Hwang JY, Park YR, Cheon KA. Development of Deep Ensembles to Screen for Autism and Symptom Severity Using Retinal Photographs. JAMA Netw Open 2023; 6:e2347692. PMID: 38100107; PMCID: PMC10724768; DOI: 10.1001/jamanetworkopen.2023.47692.
Abstract
Importance Screening for autism spectrum disorder (ASD) is constrained by limited resources, particularly trained professionals to conduct evaluations. Individuals with ASD have structural retinal changes that potentially reflect brain alterations, including visual pathway abnormalities through embryonic and anatomic connections. Whether deep learning algorithms can aid in objective screening for ASD and symptom severity using retinal photographs is unknown. Objective To develop deep ensemble models to differentiate between retinal photographs of individuals with ASD vs typical development (TD) and between individuals with severe ASD vs mild to moderate ASD. Design, Setting, and Participants This diagnostic study was conducted at a single tertiary-care hospital (Severance Hospital, Yonsei University College of Medicine) in Seoul, Republic of Korea. Retinal photographs of individuals with ASD were prospectively collected between April and October 2022, and those of age- and sex-matched individuals with TD were retrospectively collected between December 2007 and February 2023. Deep ensembles of 5 models were built with 10-fold cross-validation using the pretrained ResNeXt-50 (32×4d) network. Score-weighted visual explanations for convolutional neural networks, with a progressive erasing technique, were used for model visualization and quantitative validation. Data analysis was performed between December 2022 and October 2023. Exposures Autism Diagnostic Observation Schedule-Second Edition calibrated severity scores (cutoff of 8) and Social Responsiveness Scale-Second Edition T scores (cutoff of 76) were used to assess symptom severity. Main Outcomes and Measures The main outcomes were participant-level area under the receiver operating characteristic curve (AUROC), sensitivity, and specificity. The 95% CI was estimated through the bootstrapping method with 1000 resamples. Results This study included 1890 eyes of 958 participants. The ASD and TD groups each included 479 participants (945 eyes), had a mean (SD) age of 7.8 (3.2) years, and comprised mostly boys (392 [81.8%]). For ASD screening, the models had a mean AUROC, sensitivity, and specificity of 1.00 (95% CI, 1.00-1.00) on the test set. These models retained a mean AUROC of 1.00 using only 10% of the image containing the optic disc. For symptom severity screening, the models had a mean AUROC of 0.74 (95% CI, 0.67-0.80), sensitivity of 0.58 (95% CI, 0.49-0.66), and specificity of 0.74 (95% CI, 0.67-0.82) on the test set. Conclusions and Relevance These findings suggest that retinal photographs may be a viable objective screening tool for ASD and possibly for symptom severity. Retinal photograph use may speed the ASD screening process, which may help improve accessibility to specialized child psychiatry assessments currently strained by limited resources.
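The 95% CIs reported here come from a 1000-resample bootstrap. The sketch below shows a percentile-bootstrap AUROC interval in scikit-learn terms, with hypothetical labels and scores standing in for the study's participant-level data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auroc_ci(y_true, y_score, n_boot=1000, seed=0):
    """Percentile bootstrap CI for AUROC, mirroring the 1000-resample
    procedure described in the abstract."""
    rng = np.random.default_rng(seed)
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))   # resample with replacement
        if len(np.unique(y_true[idx])) < 2:
            continue                      # a resample must contain both classes
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    return np.percentile(aucs, [2.5, 97.5])

# lo, hi = bootstrap_auroc_ci(labels, ensemble_scores)   # hypothetical inputs
```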
Affiliation(s)
- Jae Han Kim: Yonsei University College of Medicine, Severance Hospital, Yonsei University Health System, Seoul, Republic of Korea
- JaeSeong Hong: Department of Biomedical Systems Informatics, Yonsei University College of Medicine, Seoul, Republic of Korea
- Hangnyoung Choi: Department of Child and Adolescent Psychiatry, Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea; Institute of Behavioral Science in Medicine, Yonsei University College of Medicine, Yonsei University Health System, Seoul, Republic of Korea
- Hyun Goo Kang: Department of Ophthalmology, Institute of Vision Research, Severance Eye Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Sangchul Yoon: Department of Medical Humanities and Social Sciences, Yonsei University College of Medicine, Seoul, Republic of Korea
- Jung Yeon Hwang: Yonsei University College of Medicine, Severance Hospital, Yonsei University Health System, Seoul, Republic of Korea
- Yu Rang Park: Department of Biomedical Systems Informatics, Yonsei University College of Medicine, Seoul, Republic of Korea
- Keun-Ah Cheon: Department of Child and Adolescent Psychiatry, Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea; Institute of Behavioral Science in Medicine, Yonsei University College of Medicine, Yonsei University Health System, Seoul, Republic of Korea
32
Gao Z, Pan X, Shao J, Jiang X, Su Z, Jin K, Ye J. Automatic interpretation and clinical evaluation for fundus fluorescein angiography images of diabetic retinopathy patients by deep learning. Br J Ophthalmol 2023; 107:1852-1858. PMID: 36171054; DOI: 10.1136/bjo-2022-321472.
Abstract
BACKGROUND/AIMS Fundus fluorescein angiography (FFA) is an important technique for evaluating diabetic retinopathy (DR) and other retinal diseases. The interpretation of FFA images is complex and time-consuming, and diagnostic ability varies among ophthalmologists. The aim of this study was to develop a clinically usable multilevel classification deep learning model for FFA images, covering both prediagnosis assessment and lesion classification. METHODS A total of 15 599 FFA images of 1558 eyes from 845 patients diagnosed with DR were collected and annotated. Three convolutional neural network (CNN) models were trained to generate labels for image quality, location, laterality of eye, phase and five lesions. Performance of the models was evaluated by accuracy, F1-score, the area under the curve and human-machine comparison. The images with false positive and false negative results were analysed in detail. RESULTS Compared with LeNet-5 and VGG16, ResNet18 achieved the best results, with an accuracy of 80.79%-93.34% for prediagnosis assessment and an accuracy of 63.67%-88.88% for lesion detection. The human-machine comparison showed that the CNN had accuracy similar to that of junior ophthalmologists. The analysis of false positives and false negatives indicated directions for improvement. CONCLUSION This is the first study to perform automated standardised labelling of FFA images. Our model can be applied in clinical practice and should contribute substantially to the development of intelligent diagnosis of FFA images.
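One common way to realize the multilevel labelling described (image quality, laterality, phase, lesions) is a shared trunk with separate heads. The fragment below is a simplified, assumption-laden stand-in for the paper's three separate CNNs, with illustrative head sizes.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class FFAMultiHead(nn.Module):
    """Shared ResNet-18 trunk with separate heads for prediagnosis labels
    and lesion labels; head sizes here are illustrative placeholders."""
    def __init__(self):
        super().__init__()
        trunk = resnet18(weights="IMAGENET1K_V1")
        self.features = nn.Sequential(*list(trunk.children())[:-1])  # drop the fc layer
        self.quality = nn.Linear(512, 2)    # gradable vs ungradable
        self.phase = nn.Linear(512, 3)      # illustrative: early/mid/late phase
        self.lesions = nn.Linear(512, 5)    # five lesion types, multi-label

    def forward(self, x):
        f = torch.flatten(self.features(x), 1)
        return self.quality(f), self.phase(f), torch.sigmoid(self.lesions(f))

model = FFAMultiHead()
q, p, l = model(torch.rand(1, 3, 224, 224))
```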
Affiliation(s)
- Zhiyuan Gao: Department of Ophthalmology, Zhejiang University School of Medicine Second Affiliated Hospital, Hangzhou, Zhejiang, China
- Xiangji Pan: Department of Ophthalmology, Zhejiang University School of Medicine Second Affiliated Hospital, Hangzhou, Zhejiang, China
- Ji Shao: Department of Ophthalmology, Zhejiang University School of Medicine Second Affiliated Hospital, Hangzhou, Zhejiang, China
- Xiaoyu Jiang: College of Control Science and Engineering, Zhejiang University, Hangzhou, Zhejiang, China
- Zhaoan Su: Department of Ophthalmology, Zhejiang University School of Medicine Second Affiliated Hospital, Hangzhou, Zhejiang, China
- Kai Jin: Department of Ophthalmology, Zhejiang University School of Medicine Second Affiliated Hospital, Hangzhou, Zhejiang, China
- Juan Ye: Department of Ophthalmology, Zhejiang University School of Medicine Second Affiliated Hospital, Hangzhou, Zhejiang, China
33
|
Bo ZH, Guo Y, Lyu J, Liang H, He J, Deng S, Xu F, Lou X, Dai Q. Relay learning: a physically secure framework for clinical multi-site deep learning. NPJ Digit Med 2023; 6:204. [PMID: 37925578 PMCID: PMC10625523 DOI: 10.1038/s41746-023-00934-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2023] [Accepted: 09/25/2023] [Indexed: 11/06/2023] Open
Abstract
Big data serves as the cornerstone for constructing real-world deep learning systems across various domains. In medicine and healthcare, a single clinical site rarely has sufficient data, necessitating the involvement of multiple sites. Unfortunately, concerns regarding data security and privacy hinder the sharing and reuse of data across sites. Existing approaches to multi-site clinical learning depend heavily on the security of the network firewall and system implementation. To address this issue, we propose Relay Learning, a secure deep-learning framework that physically isolates clinical data from external intruders while still leveraging the benefits of multi-site big data. We demonstrate the efficacy of Relay Learning in three medical tasks involving different diseases and anatomical structures: retinal fundus structure segmentation, mediastinal tumor diagnosis, and brain midline localization. We evaluate Relay Learning against alternative solutions through multi-site validation and external validation. Incorporating a total of 41,038 medical images from 21 medical hosts, including 7 external hosts, with non-uniform distributions, we observe significant performance improvements with Relay Learning across all three tasks. Specifically, it achieves an average performance increase of 44.4%, 24.2%, and 36.7% for retinal fundus segmentation, mediastinal tumor diagnosis, and brain midline localization, respectively. Remarkably, Relay Learning even outperforms central learning on external test sets. Meanwhile, Relay Learning keeps data sovereignty local, without cross-site network connections. We anticipate that Relay Learning will revolutionize clinical multi-site collaboration and reshape the landscape of healthcare in the future.
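A hedged sketch of the core idea as described above: the model is trained at one site at a time and "relayed" onward, so raw data never leaves any site and no cross-site network connection is needed. The Site object with a .loader attribute is an illustrative placeholder; in the real framework the weight transfer is physical rather than a Python loop.

import copy
import torch

def relay_train(model, sites, epochs_per_site=1):
    # Sequentially fine-tune `model` at each site; only weights move between sites.
    for site in sites:                      # e.g., the paper's 21 medical hosts
        local = copy.deepcopy(model)        # weights delivered to the site
        opt = torch.optim.SGD(local.parameters(), lr=1e-3)
        for _ in range(epochs_per_site):
            for x, y in site.loader:        # the data stays inside the site
                opt.zero_grad()
                loss = torch.nn.functional.cross_entropy(local(x), y)
                loss.backward()
                opt.step()
        model = local                       # relay the updated weights onward
    return model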
Collapse
Affiliation(s)
- Zi-Hao Bo
- School of Software, Tsinghua University, Beijing, China
- BNRist, Tsinghua University, Beijing, China
| | - Yuchen Guo
- BNRist, Tsinghua University, Beijing, China.
| | - Jinhao Lyu
- Department of Radiology, Chinese PLA General Hospital / Chinese PLA Medical School, Beijing, China
| | - Hengrui Liang
- Department of Thoracic Oncology and Surgery, China State Key Laboratory of Respiratory Disease & National Clinical Research Center for Respiratory Disease, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
| | - Jianxing He
- Department of Thoracic Oncology and Surgery, China State Key Laboratory of Respiratory Disease & National Clinical Research Center for Respiratory Disease, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
| | - Shijie Deng
- Department of Radiology, The 921st Hospital of Chinese PLA, Changsha, China
| | - Feng Xu
- School of Software, Tsinghua University, Beijing, China.
- BNRist, Tsinghua University, Beijing, China.
| | - Xin Lou
- Department of Radiology, Chinese PLA General Hospital / Chinese PLA Medical School, Beijing, China.
| | - Qionghai Dai
- BNRist, Tsinghua University, Beijing, China.
- Department of Automation, Tsinghua University, Beijing, China.
| |
Collapse
|
34
|
Wang M, Lin T, Wang L, Lin A, Zou K, Xu X, Zhou Y, Peng Y, Meng Q, Qian Y, Deng G, Wu Z, Chen J, Lin J, Zhang M, Zhu W, Zhang C, Zhang D, Goh RSM, Liu Y, Pang CP, Chen X, Chen H, Fu H. Uncertainty-inspired open set learning for retinal anomaly identification. Nat Commun 2023; 14:6757. [PMID: 37875484 PMCID: PMC10598011 DOI: 10.1038/s41467-023-42444-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2023] [Accepted: 10/11/2023] [Indexed: 10/26/2023] Open
Abstract
Failure to recognize samples from classes unseen during training is a major limitation of artificial intelligence in real-world recognition and classification of retinal anomalies. We establish an uncertainty-inspired open set (UIOS) model, trained with fundus images of 9 retinal conditions. Besides assessing the probability of each category, UIOS also calculates an uncertainty score to express its confidence. With a thresholding strategy, our UIOS model achieves F1 scores of 99.55%, 97.01% and 91.91% on the internal testing set, the external target categories (TC)-JSIEC dataset and the TC-unseen testing set, respectively, compared with 92.20%, 80.69% and 64.74% for the standard AI model. Furthermore, UIOS correctly assigns high uncertainty scores, which would prompt a manual check, to datasets of non-target-category retinal diseases, low-quality fundus images and non-fundus images. UIOS provides a robust method for real-world screening of retinal anomalies.
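One common way to produce such an uncertainty score is an evidential (Dirichlet) output head, sketched below with a thresholded referral rule in the spirit of UIOS; the parameterisation and the threshold value are assumptions for illustration, and the published model's exact formulation may differ.

import torch
import torch.nn.functional as F

def predict_with_uncertainty(logits, threshold=0.5):
    # logits: (B, K) for K = 9 target retinal conditions
    evidence = F.softplus(logits)                    # non-negative evidence per class
    alpha = evidence + 1.0                           # Dirichlet concentration parameters
    K = logits.shape[-1]
    prob = alpha / alpha.sum(-1, keepdim=True)       # expected class probabilities
    uncertainty = K / alpha.sum(-1)                  # total uncertainty in (0, 1]
    needs_manual_check = uncertainty > threshold     # route uncertain cases to a human
    return prob, uncertainty, needs_manual_check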
Collapse
Affiliation(s)
- Meng Wang
- Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), 1 Fusionopolis Way, #16-16 Connexis, Singapore, 138632, Republic of Singapore
| | - Tian Lin
- Joint Shantou International Eye Center, Shantou University and the Chinese University of Hong Kong, 515041, Shantou, Guangdong, China
| | - Lianyu Wang
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, 211100, Nanjing, Jiangsu, China
- Laboratory of Brain-Machine Intelligence Technology, Ministry of Education Nanjing University of Aeronautics and Astronautics, 211106, Nanjing, Jiangsu, China
| | - Aidi Lin
- Joint Shantou International Eye Center, Shantou University and the Chinese University of Hong Kong, 515041, Shantou, Guangdong, China
| | - Ke Zou
- National Key Laboratory of Fundamental Science on Synthetic Vision and the College of Computer Science, Sichuan University, 610065, Chengdu, Sichuan, China
| | - Xinxing Xu
- Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), 1 Fusionopolis Way, #16-16 Connexis, Singapore, 138632, Republic of Singapore
| | - Yi Zhou
- School of Electronics and Information Engineering, Soochow University, 215006, Suzhou, Jiangsu, China
| | - Yuanyuan Peng
- School of Biomedical Engineering, Anhui Medical University, 230032, Hefei, Anhui, China
| | - Qingquan Meng
- School of Electronics and Information Engineering, Soochow University, 215006, Suzhou, Jiangsu, China
| | - Yiming Qian
- Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), 1 Fusionopolis Way, #16-16 Connexis, Singapore, 138632, Republic of Singapore
| | - Guoyao Deng
- National Key Laboratory of Fundamental Science on Synthetic Vision and the College of Computer Science, Sichuan University, 610065, Chengdu, Sichuan, China
| | - Zhiqun Wu
- Longchuan People's Hospital, 517300, Heyuan, Guangdong, China
| | - Junhong Chen
- Puning People's Hospital, 515300, Jieyang, Guangdong, China
| | - Jianhong Lin
- Haifeng PengPai Memorial Hospital, 516400, Shanwei, Guangdong, China
| | - Mingzhi Zhang
- Joint Shantou International Eye Center, Shantou University and the Chinese University of Hong Kong, 515041, Shantou, Guangdong, China
| | - Weifang Zhu
- School of Electronics and Information Engineering, Soochow University, 215006, Suzhou, Jiangsu, China
| | - Changqing Zhang
- College of Intelligence and Computing, Tianjin University, 300350, Tianjin, China
| | - Daoqiang Zhang
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, 211100, Nanjing, Jiangsu, China
- Laboratory of Brain-Machine Intelligence Technology, Ministry of Education Nanjing University of Aeronautics and Astronautics, 211106, Nanjing, Jiangsu, China
| | - Rick Siow Mong Goh
- Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), 1 Fusionopolis Way, #16-16 Connexis, Singapore, 138632, Republic of Singapore
| | - Yong Liu
- Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), 1 Fusionopolis Way, #16-16 Connexis, Singapore, 138632, Republic of Singapore
| | - Chi Pui Pang
- Joint Shantou International Eye Center, Shantou University and the Chinese University of Hong Kong, 515041, Shantou, Guangdong, China
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, 999077, Hong Kong, China
| | - Xinjian Chen
- School of Electronics and Information Engineering, Soochow University, 215006, Suzhou, Jiangsu, China.
- State Key Laboratory of Radiation Medicine and Protection, Soochow University, 215006, Suzhou, China.
| | - Haoyu Chen
- Joint Shantou International Eye Center, Shantou University and the Chinese University of Hong Kong, 515041, Shantou, Guangdong, China.
| | - Huazhu Fu
- Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), 1 Fusionopolis Way, #16-16 Connexis, Singapore, 138632, Republic of Singapore.
| |
Collapse
|
35
|
Zhao X, Lin Z, Yu S, Xiao J, Xie L, Xu Y, Tsui CK, Cui K, Zhao L, Zhang G, Zhang S, Lu Y, Lin H, Liang X, Lin D. An artificial intelligence system for the whole process from diagnosis to treatment suggestion of ischemic retinal diseases. Cell Rep Med 2023; 4:101197. [PMID: 37734379 PMCID: PMC10591037 DOI: 10.1016/j.xcrm.2023.101197] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2023] [Revised: 05/29/2023] [Accepted: 08/23/2023] [Indexed: 09/23/2023]
Abstract
Ischemic retinal diseases (IRDs) are a series of common blinding diseases whose diagnosis and treatment depend on accurate interpretation of fundus fluorescein angiography (FFA) images. An artificial intelligence system (Ai-Doctor) was developed to interpret FFA images. Ai-Doctor performed well in image phase identification (area under the curve [AUC] range, 0.991-0.999), diabetic retinopathy (DR) and branch retinal vein occlusion (BRVO) diagnosis (AUC, 0.979-0.992), and non-perfusion area segmentation (Dice similarity coefficient [DSC], 89.7%-90.1%) and quantification. The segmentation model was extended to previously unencountered IRDs (central RVO and retinal vasculitis), with DSCs of 89.2% and 83.6%, respectively. A clinically applicable ischemia index (CAII) was proposed to evaluate the degree of ischemia; patients with CAII values exceeding 0.17 in BRVO and 0.08 in DR may be more likely to require laser therapy. Ai-Doctor is expected to achieve accurate FFA image interpretation for IRDs, potentially reducing reliance on retinal specialists.
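An illustrative computation of such an index from a non-perfusion segmentation mask, under the simplifying assumption that the index is the non-perfused fraction of the analysed retinal area (the paper's exact CAII definition may differ); the decision thresholds are those quoted above.

import numpy as np

def ischemia_index(nonperfusion_mask: np.ndarray, retina_mask: np.ndarray) -> float:
    # Both masks are boolean arrays of the same shape from the segmentation model.
    return float(nonperfusion_mask.sum()) / max(int(retina_mask.sum()), 1)

def may_need_laser(caii: float, disease: str) -> bool:
    thresholds = {"BRVO": 0.17, "DR": 0.08}   # cut-offs reported in the abstract
    return caii > thresholds[disease]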
Collapse
Affiliation(s)
- Xinyu Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China; Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen 518040, China
| | - Zhenzhe Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
| | - Shanshan Yu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
| | - Jun Xiao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
| | - Liqiong Xie
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
| | - Yue Xu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
| | - Ching-Kit Tsui
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
| | - Kaixuan Cui
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
| | - Lanqin Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China
| | - Guoming Zhang
- Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen 518040, China
| | - Shaochong Zhang
- Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen 518040, China
| | - Yan Lu
- Foshan Second People's Hospital, Foshan 528001, China
| | - Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China; Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou 570311, China; Center for Precision Medicine and Department of Genetics and Biomedical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou 510080, China.
| | - Xiaoling Liang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
| | - Duoru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou 510060, China.
| |
Collapse
|
36
|
Li L, Lin D, Lin Z, Li M, Lian Z, Zhao L, Wu X, Liu L, Liu J, Wei X, Luo M, Zeng D, Yan A, Iao WC, Shang Y, Xu F, Xiang W, He M, Fu Z, Wang X, Deng Y, Fan X, Ye Z, Wei M, Zhang J, Liu B, Li J, Ding X, Lin H. DeepQuality improves infant retinopathy screening. NPJ Digit Med 2023; 6:192. [PMID: 37845275 PMCID: PMC10579317 DOI: 10.1038/s41746-023-00943-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2023] [Accepted: 10/05/2023] [Indexed: 10/18/2023] Open
Abstract
Image quality variation is a prominent cause of performance degradation for intelligent disease diagnostic models in clinical applications. Quality issues are particularly pronounced in infantile fundus photography because of poor patient cooperation, which poses a high risk of misdiagnosis. Here, we developed a deep learning-based image quality assessment and enhancement system (DeepQuality) for infantile fundus images to improve infant retinopathy screening. DeepQuality can accurately detect various quality defects concerning integrity, illumination, and clarity, with area under the curve (AUC) values ranging from 0.933 to 0.995, and can comprehensively score the overall quality of each fundus photograph. By analyzing 2,015,758 infantile fundus photographs from real-world settings using DeepQuality, we found that 58.3% of them had quality defects of varying degrees, with large variations among different regions and categories of hospitals. Additionally, DeepQuality provides quality enhancement based on the results of quality assessment. After quality enhancement, clinicians' performance in diagnosing retinopathy of prematurity (ROP) improved significantly. Moreover, integrating DeepQuality with AI diagnostic models effectively improves model performance for detecting ROP. This study may serve as an important reference for the future development of other image-based intelligent disease screening systems.
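A schematic sketch of how a quality model can gate and enhance images before diagnosis, mirroring the DeepQuality workflow at a high level; the three model objects and the 0.5 quality cut-off are placeholders, not the published components.

def screen(image, quality_model, enhance_model, rop_model, min_quality=0.5):
    q = quality_model(image)              # overall quality score in [0, 1]
    if q < min_quality:
        image = enhance_model(image)      # quality enhancement step
        q = quality_model(image)          # re-assess the enhanced image
    if q < min_quality:
        return {"decision": "retake", "quality": q}
    return {"decision": rop_model(image), "quality": q}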
Collapse
Affiliation(s)
- Longhui Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Duoru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China.
| | - Zhenzhe Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Mingyuan Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Zhangkai Lian
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Lanqin Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Xiaohang Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Lixue Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Jiali Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Xiaoyue Wei
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Mingjie Luo
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Danqi Zeng
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Anqi Yan
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Wai Cheng Iao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Yuanjun Shang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Fabao Xu
- Department of Ophthalmology, Qilu Hospital, Shandong University, Jinan, Shandong, China
| | - Wei Xiang
- Department of Clinical Laboratory Medicine, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
| | - Muchen He
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Zhe Fu
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Xueyu Wang
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Yaru Deng
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Xinyan Fan
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Zhijun Ye
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Meirong Wei
- Department of Ophthalmology, Maternal and Children's Hospital, Liuzhou, Guangxi, China
| | - Jianping Zhang
- Department of Ophthalmology, Maternal and Children's Hospital, Liuzhou, Guangxi, China
| | - Baohai Liu
- Department of Ophthalmology, Maternal and Children's Hospital, Linyi, Shandong, China
| | - Jianqiao Li
- Department of Ophthalmology, Qilu Hospital, Shandong University, Jinan, Shandong, China
| | - Xiaoyan Ding
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China.
- Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou, Hainan, China.
- Center for Precision Medicine and Department of Genetics and Biomedical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China.
| |
Collapse
|
37
|
Gao M, Jiang H, Zhu L, Jiang Z, Geng M, Ren Q, Lu Y. Discriminative ensemble meta-learning with co-regularization for rare fundus diseases diagnosis. Med Image Anal 2023; 89:102884. [PMID: 37459674 DOI: 10.1016/j.media.2023.102884] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2022] [Revised: 05/26/2023] [Accepted: 06/28/2023] [Indexed: 09/08/2023]
Abstract
Deep neural networks (DNNs) have been widely applied in the medical imaging community, contributing to automatic ophthalmic screening systems for some common diseases. However, the incidence of fundus disease patterns exhibits a typical long-tailed distribution: in the clinic, a small number of common fundus diseases have sufficient observed cases for large-scale analysis, while most fundus diseases are infrequent. For these rare diseases with extremely low-data regimes, it is challenging to train DNNs for automatic diagnosis. In this work, we develop an automatic diagnosis system for rare fundus diseases based on the meta-learning framework. The system incorporates a co-regularization loss and an ensemble-learning strategy into the meta-learning framework, fully leveraging multi-scale hierarchical feature embedding. We first conduct comparative experiments on our newly constructed lightweight multi-disease fundus image dataset for the few-shot recognition task (FundusData-FS). Moreover, we verify cross-domain transferability from miniImageNet to FundusData-FS and further confirm our method's repeatability. Rigorous experiments demonstrate that our method can detect rare fundus diseases and is superior to state-of-the-art methods. These investigations demonstrate the promising potential of our method for real clinical practice.
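To make the few-shot meta-learning setup concrete, here is a minimal sketch of one N-way K-shot episode in the prototypical style; the paper's actual method adds co-regularization and ensembling over multi-scale features, which are omitted here, and the encoder is a placeholder.

import torch
import torch.nn.functional as F

def episode_loss(encoder, support_x, support_y, query_x, query_y, n_way):
    # support/query tensors form one N-way K-shot episode of fundus images
    z_s, z_q = encoder(support_x), encoder(query_x)          # embeddings (B, D)
    protos = torch.stack([z_s[support_y == c].mean(0)        # one prototype per class
                          for c in range(n_way)])
    dists = torch.cdist(z_q, protos)                         # query-to-prototype distances
    return F.cross_entropy(-dists, query_y)                  # nearest prototype wins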
Collapse
Affiliation(s)
- Mengdi Gao
- Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing 100191, China; Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing 100871, China; National Biomedical Imaging Center, Peking University, Beijing 100871, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen 518055, China; Institute of Biomedical Engineering, Shenzhen Bay Laboratory 5F, Shenzhen 518071, China
| | - Hongyang Jiang
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China
| | - Lei Zhu
- Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing 100191, China; Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing 100871, China; National Biomedical Imaging Center, Peking University, Beijing 100871, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen 518055, China; Institute of Biomedical Engineering, Shenzhen Bay Laboratory 5F, Shenzhen 518071, China
| | - Zhe Jiang
- Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing 100191, China; Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing 100871, China; National Biomedical Imaging Center, Peking University, Beijing 100871, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen 518055, China; Institute of Biomedical Engineering, Shenzhen Bay Laboratory 5F, Shenzhen 518071, China
| | - Mufeng Geng
- Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing 100191, China; Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing 100871, China; National Biomedical Imaging Center, Peking University, Beijing 100871, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen 518055, China; Institute of Biomedical Engineering, Shenzhen Bay Laboratory 5F, Shenzhen 518071, China
| | - Qiushi Ren
- Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing 100191, China; Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing 100871, China; National Biomedical Imaging Center, Peking University, Beijing 100871, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen 518055, China; Institute of Biomedical Engineering, Shenzhen Bay Laboratory 5F, Shenzhen 518071, China
| | - Yanye Lu
- Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing 100191, China; National Biomedical Imaging Center, Peking University, Beijing 100871, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen 518055, China.
| |
Collapse
|
38
|
Zhou Y, Chia MA, Wagner SK, Ayhan MS, Williamson DJ, Struyven RR, Liu T, Xu M, Lozano MG, Woodward-Court P, Kihara Y, Altmann A, Lee AY, Topol EJ, Denniston AK, Alexander DC, Keane PA. A foundation model for generalizable disease detection from retinal images. Nature 2023; 622:156-163. [PMID: 37704728 PMCID: PMC10550819 DOI: 10.1038/s41586-023-06555-x] [Citation(s) in RCA: 102] [Impact Index Per Article: 102.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2022] [Accepted: 08/18/2023] [Indexed: 09/15/2023]
Abstract
Medical artificial intelligence (AI) offers great potential for recognizing signs of health conditions in retinal images and expediting the diagnosis of eye diseases and systemic disorders. However, the development of AI models requires substantial annotation, and models are usually task-specific with limited generalizability to different clinical applications. Here, we present RETFound, a foundation model for retinal images that learns generalizable representations from unlabelled retinal images and provides a basis for label-efficient model adaptation in several applications. Specifically, RETFound is trained on 1.6 million unlabelled retinal images by means of self-supervised learning and then adapted to disease detection tasks with explicit labels. We show that adapted RETFound consistently outperforms several comparison models in the diagnosis and prognosis of sight-threatening eye diseases, as well as in incident prediction of complex systemic disorders such as heart failure and myocardial infarction, with less labelled data. RETFound provides a generalizable solution to improve model performance and alleviate the annotation workload of experts, enabling broad clinical AI applications from retinal imaging.
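A sketch of the label-efficient adaptation step in the spirit of RETFound: take a pretrained vision-transformer encoder and attach a small task head that is fine-tuned with a few labels. The timm model name, checkpoint path and two-class head are assumptions; RETFound's own release may differ in architecture and loading details.

import timm
import torch.nn as nn

encoder = timm.create_model("vit_large_patch16_224", pretrained=False, num_classes=0)
# encoder.load_state_dict(torch.load("retfound_weights.pth"), strict=False)  # hypothetical checkpoint

head = nn.Linear(encoder.num_features, 2)   # e.g., disease vs. no disease

def forward(x):
    # fine-tune the head (and optionally unfreeze the encoder) on labelled data
    return head(encoder(x))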
Collapse
Affiliation(s)
- Yukun Zhou
- Centre for Medical Image Computing, University College London, London, UK.
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust, London, UK.
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK.
| | - Mark A Chia
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Institute of Ophthalmology, University College London, London, UK
| | - Siegfried K Wagner
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Institute of Ophthalmology, University College London, London, UK
| | - Murat S Ayhan
- Centre for Medical Image Computing, University College London, London, UK
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Institute of Ophthalmology, University College London, London, UK
| | - Dominic J Williamson
- Centre for Medical Image Computing, University College London, London, UK
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Institute of Ophthalmology, University College London, London, UK
| | - Robbert R Struyven
- Centre for Medical Image Computing, University College London, London, UK
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Institute of Ophthalmology, University College London, London, UK
| | - Timing Liu
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust, London, UK
| | - Moucheng Xu
- Centre for Medical Image Computing, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
| | - Mateo G Lozano
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Department of Computer Science, University of Coruña, A Coruña, Spain
| | - Peter Woodward-Court
- Centre for Medical Image Computing, University College London, London, UK
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Institute of Health Informatics, University College London, London, UK
| | - Yuka Kihara
- Department of Ophthalmology, University of Washington, Seattle, WA, USA
- Roger and Angie Karalis Johnson Retina Center, University of Washington, Seattle, WA, USA
| | - Andre Altmann
- Centre for Medical Image Computing, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
| | - Aaron Y Lee
- Department of Ophthalmology, University of Washington, Seattle, WA, USA
- Roger and Angie Karalis Johnson Retina Center, University of Washington, Seattle, WA, USA
| | - Eric J Topol
- Department of Molecular Medicine, Scripps Research, La Jolla, CA, USA
| | - Alastair K Denniston
- Academic Unit of Ophthalmology, University of Birmingham, Birmingham, UK
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
| | - Daniel C Alexander
- Centre for Medical Image Computing, University College London, London, UK
- Department of Computer Science, University College London, London, UK
| | - Pearse A Keane
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust, London, UK.
- Institute of Ophthalmology, University College London, London, UK.
| |
Collapse
|
39
|
Suman S, Tiwari AK, Singh K. Computer-aided diagnostic system for hypertensive retinopathy: A review. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 240:107627. [PMID: 37320942 DOI: 10.1016/j.cmpb.2023.107627] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/06/2022] [Revised: 05/03/2023] [Accepted: 05/27/2023] [Indexed: 06/17/2023]
Abstract
Hypertensive Retinopathy (HR) is a retinal disease caused by prolonged elevated blood pressure. High blood pressure shows no obvious signs in its early stages but affects various body parts over time, including the eyes. HR is a biomarker for several illnesses, including retinal diseases, atherosclerosis, strokes, kidney disease, and cardiovascular risks. Early microcirculation abnormalities in chronic diseases can be diagnosed through retinal examination before the onset of major clinical consequences. Computer-aided diagnosis (CAD) plays a vital role in the early identification of HR, offering improved diagnostic accuracy while being time-efficient and demanding fewer resources. Recently, numerous studies have reported on the automatic identification of HR. This paper provides a comprehensive review of the automated tasks of Artery-Vein (A/V) classification, Arteriovenous Ratio (AVR) computation, HR detection (binary classification), and HR severity grading. The review was conducted using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol. The paper discusses the clinical features of HR, the availability of datasets, existing methods for A/V classification, AVR computation, HR detection and severity grading, and performance evaluation metrics. The reviewed articles are summarized with details of classifiers, adopted methodologies, performance comparisons, datasets with their pros and cons, and computational platforms. For each task, a summary and a critical in-depth analysis are provided, along with common research issues and challenges in the existing studies. Finally, the paper proposes future research directions to overcome the challenges associated with dataset availability, HR detection, and severity grading.
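For readers unfamiliar with the AVR computation the review covers, here is a worked sketch using the revised Knudtson formulas as commonly described (iteratively pair the widest with the narrowest vessel, carrying the middle one over when the count is odd, then take CRAE/CRVE); the coefficients 0.88 and 0.95 are the published arteriolar and venular constants, and the inputs are assumed to be the six largest vessel widths per class.

def combined_caliber(widths, k):
    # widths: measured calibers of the six largest vessels of one type
    w = sorted(widths)
    while len(w) > 1:
        nxt = []
        while len(w) > 1:
            a, b = w.pop(0), w.pop(-1)              # narrowest paired with widest
            nxt.append(k * (a**2 + b**2) ** 0.5)    # combine into one virtual vessel
        if w:                                       # odd count: middle vessel carries over
            nxt.append(w.pop())
        w = sorted(nxt)
    return w[0]

def avr(arteriole_widths, venule_widths):
    crae = combined_caliber(arteriole_widths, 0.88)  # central retinal artery equivalent
    crve = combined_caliber(venule_widths, 0.95)     # central retinal vein equivalent
    return crae / crve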
Collapse
Affiliation(s)
- Supriya Suman
- Interdisciplinary Research Platform (IDRP): Smart Healthcare, Indian Institute of Technology, N.H. 62, Nagaur Road, Karwar, Jodhpur, Rajasthan 342030, India.
| | - Anil Kumar Tiwari
- Department of Electrical Engineering, Indian Institute of Technology, N.H. 62, Nagaur Road, Karwar, Jodhpur, Rajasthan 342030, India
| | - Kuldeep Singh
- Department of Pediatrics, All India Institute of Medical Sciences, Basni Industrial Area Phase-2, Jodhpur, Rajasthan 342005, India
| |
Collapse
|
40
|
Ji J, Zhang W, Dong Y, Lin R, Geng Y, Hong L. Automated cervical cell segmentation using deep ensemble learning. BMC Med Imaging 2023; 23:137. [PMID: 37735354 PMCID: PMC10514950 DOI: 10.1186/s12880-023-01096-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2023] [Accepted: 09/04/2023] [Indexed: 09/23/2023] Open
Abstract
BACKGROUND Cervical cell segmentation is a fundamental step in automated cervical cancer cytology screening. The aim of this study was to develop and evaluate a deep ensemble model for cervical cell segmentation, including both cytoplasm and nucleus segmentation. METHODS The Cx22 dataset was used to develop the automated cervical cell segmentation algorithm. U-Net, U-Net++, DeepLabV3, DeepLabV3Plus, TransUNet, and SegFormer were used as candidate model architectures, and each of the first four architectures was paired with two different encoders chosen from ResNet34, ResNet50 and DenseNet121. Models were trained under two settings: trained from scratch, or with encoders initialized from ImageNet pre-trained models and all layers then fine-tuned. For each segmentation task, four models were chosen as base models, and an unweighted average was adopted as the ensemble method. RESULTS U-Net and U-Net++ with ResNet34 and DenseNet121 encoders trained using transfer learning consistently performed better than the other models, so they were chosen as base models. The ensemble model obtained a Dice similarity coefficient, sensitivity and specificity of 0.9535 (95% CI: 0.9534-0.9536), 0.9621 (0.9619-0.9622) and 0.9835 (0.9834-0.9836) for cytoplasm segmentation, and 0.7863 (0.7851-0.7876), 0.9581 (0.9573-0.959) and 0.9961 (0.9961-0.9962) for nucleus segmentation. The corresponding values for the best baseline models were 0.948, 0.954 and 0.9823 for cytoplasm segmentation, and 0.750, 0.713 and 0.9988 for nucleus segmentation. Except for the specificity of cytoplasm segmentation, all metrics outperformed the best baseline models (P < 0.05) by a moderate margin. CONCLUSIONS The proposed algorithm achieved better performance on cervical cell segmentation than the baseline models. It can potentially be used in an automated cervical cancer cytology screening system.
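The unweighted-average ensembling step described above is simple enough to show directly: the base models' probability maps are averaged and thresholded into a final mask. The 0.5 threshold and the model list are illustrative.

import torch

def ensemble_predict(models, image, threshold=0.5):
    # models: the four trained base segmentation networks
    probs = [torch.sigmoid(m(image)) for m in models]   # per-model probability maps
    mean = torch.stack(probs).mean(dim=0)               # unweighted average
    return (mean > threshold).float()                   # binary segmentation mask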
Collapse
Affiliation(s)
- Jie Ji
- Network & Information Center, Shantou University, Shantou, 515041, Guangdong, China
| | - Weifeng Zhang
- Guangdong Provincial International Collaborative Center of Molecular Medicine, Laboratory of Molecular Pathology, Shantou University Medical College, Shantou, 515041, China
| | - Yuejiao Dong
- Department of Pathology, the First Affiliated Hospital of Shantou University Medical College, Shantou, 515041, Guangdong, China
| | - Ruilin Lin
- Department of Pathology, the First Affiliated Hospital of Shantou University Medical College, Shantou, 515041, Guangdong, China
| | - Yiqun Geng
- Guangdong Provincial International Collaborative Center of Molecular Medicine, Laboratory of Molecular Pathology, Shantou University Medical College, Shantou, 515041, China.
| | - Liangli Hong
- Department of Pathology, the First Affiliated Hospital of Shantou University Medical College, Shantou, 515041, Guangdong, China.
| |
Collapse
|
41
|
Liu YF, Ji YK, Fei FQ, Chen NM, Zhu ZT, Fei XZ. Research progress in artificial intelligence assisted diabetic retinopathy diagnosis. Int J Ophthalmol 2023; 16:1395-1405. [PMID: 37724288 PMCID: PMC10475636 DOI: 10.18240/ijo.2023.09.05] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2023] [Accepted: 06/14/2023] [Indexed: 09/20/2023] Open
Abstract
Diabetic retinopathy (DR) is one of the most common retinal vascular diseases and one of the main causes of blindness worldwide. Early detection and treatment can effectively delay vision decline and even blindness in patients with DR. In recent years, artificial intelligence (AI) models constructed with machine learning and deep learning (DL) algorithms have been widely used in ophthalmology research, especially in diagnosing and treating ophthalmic diseases, particularly DR. For DR, AI has mainly been applied to diagnosis, grading, and lesion recognition and segmentation, with good research and application results. This study summarizes the research progress in AI models based on machine learning and DL algorithms for DR diagnosis and discusses some limitations and challenges of AI research.
Collapse
Affiliation(s)
- Yun-Fang Liu
- Department of Ophthalmology, First People's Hospital of Huzhou, Huzhou University, Huzhou 313000, Zhejiang Province, China
| | - Yu-Ke Ji
- Eye Hospital, Nanjing Medical University, Nanjing 210000, Jiangsu Province, China
| | - Fang-Qin Fei
- Department of Endocrinology, First People's Hospital of Huzhou, Huzhou University, Huzhou 313000, Zhejiang Province, China
| | - Nai-Mei Chen
- Department of Ophthalmology, Huai'an Hospital of Huai'an City, Huai'an 223000, Jiangsu Province, China
| | - Zhen-Tao Zhu
- Department of Ophthalmology, Huai'an Hospital of Huai'an City, Huai'an 223000, Jiangsu Province, China
| | - Xing-Zhen Fei
- Department of Endocrinology, First People's Hospital of Huzhou, Huzhou University, Huzhou 313000, Zhejiang Province, China
| |
Collapse
|
42
|
Wan C, Hua R, Li K, Hong X, Fang D, Yang W. Automatic Diagnosis of Different Types of Retinal Vein Occlusion Based on Fundus Images. INT J INTELL SYST 2023; 2023:1-13. [DOI: 10.1155/2023/1587410] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/22/2024]
Abstract
Retinal vein occlusion (RVO) is the second most common cause of blindness after diabetic retinopathy. Manual screening of fundus images to detect RVO is time-consuming. Deep-learning techniques have been used for RVO screening because of their outstanding performance in many applications. However, unlike natural images, medical images have smaller lesions, which require a more elaborate approach. To provide patients with an accurate diagnosis, followed by timely and effective treatment, we developed an intelligent method for automatic RVO screening on fundus images. Like a convolutional neural network, the Swin Transformer learns a hierarchy of low- to high-level features; however, it extracts features from fundus images through attention modules, which pay more attention to the interrelationships among features. The model is more general, does not rely entirely on the data itself, and attends not only to local information but also propagates information from local to global scales. To suppress overfitting, we adopt a regularization strategy, label smoothing, which adds noise to the one-hot labels to reduce the weight of the true class when computing the loss function. Comparison of different models using 5-fold cross-validation on our own datasets indicates that the Swin Transformer performs better. The accuracy of classifying all datasets is 98.75 ± 0.000, and the accuracies of identifying MRVO, CRVO, BRVO, and normal fundi with the proposed method are 94.49 ± 0.094, 99.98 ± 0.015, 98.88 ± 0.08, and 99.42 ± 0.012, respectively. The method will be useful for diagnosing RVO and determining its grade from fundus images, and has the potential to guide patients toward further diagnosis and treatment.
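The label-smoothing loss mentioned above is a standard construction and can be written compactly: a fraction eps of the target mass is spread uniformly over all classes, shrinking the weight of the true label in the cross-entropy. eps = 0.1 is an illustrative choice, not necessarily the paper's setting.

import torch
import torch.nn.functional as F

def label_smoothing_ce(logits, target, eps=0.1):
    # logits: (B, C); target: (B,) integer class indices
    n = logits.size(-1)
    log_p = F.log_softmax(logits, dim=-1)
    smooth = torch.full_like(log_p, eps / n)                    # uniform noise term
    smooth.scatter_(-1, target.unsqueeze(-1), 1 - eps + eps / n)  # down-weighted true label
    return -(smooth * log_p).sum(-1).mean()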
Collapse
Affiliation(s)
- Cheng Wan
- College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 211100, China
| | - Rongrong Hua
- College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 211100, China
| | - Kunke Li
- Shenzhen Eye Hospital, Jinan University, Shenzhen 518040, China
| | - Xiangqian Hong
- Shenzhen Eye Hospital, Jinan University, Shenzhen 518040, China
| | - Dong Fang
- Shenzhen Eye Hospital, Jinan University, Shenzhen 518040, China
| | - Weihua Yang
- Shenzhen Eye Hospital, Jinan University, Shenzhen 518040, China
| |
Collapse
|
43
|
Zhelev Z, Peters J, Rogers M, Allen M, Kijauskaite G, Seedat F, Wilkinson E, Hyde C. Test accuracy of artificial intelligence-based grading of fundus images in diabetic retinopathy screening: A systematic review. J Med Screen 2023; 30:97-112. [PMID: 36617971 PMCID: PMC10399100 DOI: 10.1177/09691413221144382] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2022] [Revised: 11/14/2022] [Accepted: 11/18/2022] [Indexed: 01/10/2023]
Abstract
OBJECTIVES To systematically review the accuracy of artificial intelligence (AI)-based systems for grading of fundus images in diabetic retinopathy (DR) screening. METHODS We searched MEDLINE, EMBASE, the Cochrane Library and ClinicalTrials.gov from 1 January 2000 to 27 August 2021. Accuracy studies published in English were included if they met the pre-specified inclusion criteria. Study selection, data extraction and quality assessment were conducted by one author, with a second reviewer independently screening and checking 20% of titles. Results were analysed narratively. RESULTS Forty-three studies evaluating 15 deep learning (DL) and 4 machine learning (ML) systems were included. Nine systems were each evaluated in a single study. Most studies were judged to be at high or unclear risk of bias in at least one QUADAS-2 domain. Sensitivity for referable DR and higher grades was ≥85%, while specificity varied and was <80% for all ML systems and in 6/31 studies evaluating DL systems. Studies reported high accuracy for detection of ungradable images, but these were analysed and reported inconsistently. Seven studies reported that AI was more sensitive but less specific than human graders. CONCLUSIONS AI-based systems are more sensitive than human graders and could be safe to use in clinical practice, but have variable specificity. However, for many systems the evidence is limited, at high risk of bias, and may not generalise across settings. Therefore, pre-implementation assessment in the target clinical pathway is essential to obtain reliable and applicable accuracy estimates.
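For reference, the two accuracy measures this review is built around, computed from raw screening counts; the values in the usage line are purely illustrative of the ≥85% sensitivity bar discussed above.

def sensitivity(tp, fn):
    # proportion of eyes with referable DR correctly flagged by the system
    return tp / (tp + fn)

def specificity(tn, fp):
    # proportion of non-referable eyes correctly passed by the system
    return tn / (tn + fp)

assert round(sensitivity(85, 15), 2) == 0.85   # e.g., 85 true positives, 15 false negatives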
Collapse
Affiliation(s)
- Zhivko Zhelev
- Exeter Test Group, University of Exeter Medical School, University of Exeter, Exeter, UK
| | - Jaime Peters
- Exeter Test Group, University of Exeter Medical School, University of Exeter, Exeter, UK
| | - Morwenna Rogers
- NIHR ARC South West Peninsula (PenARC), University of Exeter Medical School, University of Exeter, Exeter, UK
| | - Michael Allen
- University of Exeter Medical School, University of Exeter, Exeter, UK
| | | | | | | | - Christopher Hyde
- Exeter Test Group, University of Exeter Medical School, University of Exeter, Exeter, UK
| |
Collapse
|
44
|
Yang Z, Zhang Y, Xu K, Sun J, Wu Y, Zhou M. DeepDrRVO: A GAN-auxiliary two-step masked transformer framework benefits early recognition and differential diagnosis of retinal vascular occlusion from color fundus photographs. Comput Biol Med 2023; 163:107148. [PMID: 37329618 DOI: 10.1016/j.compbiomed.2023.107148] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2023] [Revised: 05/25/2023] [Accepted: 06/07/2023] [Indexed: 06/19/2023]
Abstract
Retinal vascular occlusion (RVO) is a common cause of visual impairment. Accurate recognition and differential diagnosis of RVO are unmet medical needs for determining appropriate treatment and care to manage the ocular condition properly and minimize its damaging effects. To leverage deep learning as a potential solution for reliable RVO detection, we developed a deep learning model on color fundus photographs (CFPs) using a two-step masked SwinTransformer with a Few-Sample Generator (FSG)-auxiliary training framework (DeepDrRVO) for early and differential RVO diagnosis. DeepDrRVO was trained on the training set from the in-house cohort and achieved consistently high performance in early recognition and differential diagnosis of RVO, with an accuracy of 86.3% on the in-house validation set and accuracies of 92.6%, 90.8%, and 100% on three independent multi-center cohorts. Further comparative analysis showed that DeepDrRVO outperforms conventional state-of-the-art classification models such as ResNet18, ResNet50d, MobileNetv3, and EfficientNetb1. These results highlight the potential benefits of the deep learning model for automatic early RVO detection and differential diagnosis, improving clinical outcomes and providing insights into diagnosing other ocular diseases that pose a few-shot learning challenge. DeepDrRVO is publicly available at https://github.com/ZhouSunLab-Workshops/DeepDrRVO.
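A very schematic sketch of the generator-auxiliary idea: a few-sample generator synthesises extra CFPs for scarce RVO subtypes, and real plus synthetic images jointly train the classifier. Every component here (the generator's sample() interface, batch sizes, loss) is a placeholder standing in for the published two-step framework.

import torch

def train_with_fsg(classifier, generator, real_loader, opt, n_synth=16):
    for x, y in real_loader:
        x_s, y_s = generator.sample(n_synth)    # synthetic minority-class CFPs (hypothetical API)
        xb = torch.cat([x, x_s])
        yb = torch.cat([y, y_s])
        opt.zero_grad()
        loss = torch.nn.functional.cross_entropy(classifier(xb), yb)
        loss.backward()
        opt.step()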
Collapse
Affiliation(s)
- Zijian Yang
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, PR China
| | - Yibo Zhang
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, PR China
| | - Ke Xu
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, PR China
| | - Jie Sun
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, PR China.
| | - Yue Wu
- The Affiliated Ningbo Eye Hospital of Wenzhou Medical University, Ningbo, 315042, PR China.
| | - Meng Zhou
- National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, PR China; Institute of PSI Genomics, Wenzhou, 325027, PR China.
| |
Collapse
|
45
|
Casey AE, Ansari S, Nakisa B, Kelly B, Brown P, Cooper P, Muhammad I, Livingstone S, Reddy S, Makinen VP. Application of a Comprehensive Evaluation Framework to COVID-19 Studies: Systematic Review of Translational Aspects of Artificial Intelligence in Health Care. JMIR AI 2023; 2:e42313. [PMID: 37457747 PMCID: PMC10337329 DOI: 10.2196/42313] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/31/2022] [Revised: 11/23/2022] [Accepted: 03/22/2023] [Indexed: 07/18/2023]
Abstract
BACKGROUND Despite immense progress in artificial intelligence (AI) models, there has been limited deployment in health care environments. The gap between potential and actual AI applications is likely due to the lack of translatability between the controlled research environments where these models are developed and the clinical environments for which the AI tools are ultimately intended. OBJECTIVE We previously developed the Translational Evaluation of Healthcare AI (TEHAI) framework to assess the translational value of AI models and to support successful transition to health care environments. In this study, we applied the TEHAI framework to the COVID-19 literature in order to assess how well translational topics are covered. METHODS A systematic literature search for COVID-19 AI studies published between December 2019 and December 2020 resulted in 3830 records. A subset of 102 (2.7%) papers that passed the inclusion criteria was sampled for full review. The papers were assessed for translational value, and descriptive data were collected, by 9 reviewers (each study was assessed by 2 reviewers). Evaluation scores and extracted data were compared by a third reviewer for resolution of discrepancies. The review process was conducted on the Covidence software platform. RESULTS We observed a significant trend for studies to attain high scores for technical capability but low scores for the areas essential for clinical translatability. Specific questions regarding external model validation, safety, nonmaleficence, and service adoption received failing scores in most studies. CONCLUSIONS Using TEHAI, we identified notable gaps in how well translational topics of AI models are covered in the COVID-19 clinical sphere. These gaps in areas crucial for clinical translatability could, and should, be considered at the model development stage to increase translatability into real COVID-19 health care environments.
Collapse
Affiliation(s)
- Aaron Edward Casey
- South Australian Health and Medical Research Institute Adelaide Australia
- Australian Centre for Precision Health Cancer Research Institute University of South Australia Adelaide Australia
| | - Saba Ansari
- School of Medicine Deakin University Geelong Australia
| | - Bahareh Nakisa
- School of Information Technology Deakin University Geelong Australia
| | | | | | - Paul Cooper
- School of Medicine Deakin University Geelong Australia
| | | | | | - Sandeep Reddy
- School of Medicine Deakin University Geelong Australia
| | - Ville-Petteri Makinen
- South Australian Health and Medical Research Institute Adelaide Australia
- Australian Centre for Precision Health Cancer Research Institute University of South Australia Adelaide Australia
- Computational Medicine Faculty of Medicine University of Oulu Oulu Finland
- Centre for Life Course Health Research Faculty of Medicine University of Oulu Oulu Finland
| |
Collapse
|
46
|
Chen RJ, Wang JJ, Williamson DFK, Chen TY, Lipkova J, Lu MY, Sahai S, Mahmood F. Algorithmic fairness in artificial intelligence for medicine and healthcare. Nat Biomed Eng 2023; 7:719-742. [PMID: 37380750 PMCID: PMC10632090 DOI: 10.1038/s41551-023-01056-8] [Citation(s) in RCA: 59] [Impact Index Per Article: 59.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2021] [Accepted: 04/13/2023] [Indexed: 06/30/2023]
Abstract
In healthcare, the development and deployment of insufficiently fair systems of artificial intelligence (AI) can undermine the delivery of equitable care. Assessments of AI models stratified across subpopulations have revealed inequalities in how patients are diagnosed, treated and billed. In this Perspective, we outline fairness in machine learning through the lens of healthcare, and discuss how algorithmic biases (in data acquisition, genetic variation and intra-observer labelling variability, in particular) arise in clinical workflows and the resulting healthcare disparities. We also review emerging technology for mitigating biases via disentanglement, federated learning and model explainability, and their role in the development of AI-based software as a medical device.
Affiliation(s)
- Richard J Chen: Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and Massachusetts Institute of Technology, Cambridge, MA, USA; Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Judy J Wang: Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Boston University School of Medicine, Boston, MA, USA
- Drew F K Williamson: Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and Massachusetts Institute of Technology, Cambridge, MA, USA
- Tiffany Y Chen: Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and Massachusetts Institute of Technology, Cambridge, MA, USA
- Jana Lipkova: Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and Massachusetts Institute of Technology, Cambridge, MA, USA
- Ming Y Lu: Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and Massachusetts Institute of Technology, Cambridge, MA, USA; Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA; Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- Sharifa Sahai: Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and Massachusetts Institute of Technology, Cambridge, MA, USA; Department of Systems Biology, Harvard Medical School, Boston, MA, USA
- Faisal Mahmood: Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and Massachusetts Institute of Technology, Cambridge, MA, USA; Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Harvard Data Science Initiative, Harvard University, Cambridge, MA, USA
47
Krzywicki T, Brona P, Zbrzezny AM, Grzybowski AE. A Global Review of Publicly Available Datasets Containing Fundus Images: Characteristics, Barriers to Access, Usability, and Generalizability. J Clin Med 2023; 12:3587. [PMID: 37240693] [DOI: 10.3390/jcm12103587]
Abstract
This article provides a comprehensive and up-to-date overview of the repositories that contain color fundus images. We analyzed them with regard to availability and legal status, presented the datasets' characteristics, and identified labeled and unlabeled image sets. The aim of this study was to compile all publicly available color fundus image datasets into a single central catalog.
Affiliation(s)
- Tomasz Krzywicki: Faculty of Mathematics and Computer Science, University of Warmia and Mazury, 10-710 Olsztyn, Poland
- Piotr Brona: Department of Ophthalmology, Poznan City Hospital, 61-285 Poznań, Poland
- Agnieszka M Zbrzezny: Faculty of Mathematics and Computer Science, University of Warmia and Mazury, 10-710 Olsztyn, Poland; Faculty of Design, SWPS University of Social Sciences and Humanities, Chodakowska 19/31, 03-815 Warsaw, Poland
- Andrzej E Grzybowski: Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, 60-836 Poznań, Poland
48
Zbrzezny AM, Grzybowski AE. Deceptive Tricks in Artificial Intelligence: Adversarial Attacks in Ophthalmology. J Clin Med 2023; 12:3266. [PMID: 37176706] [PMCID: PMC10179065] [DOI: 10.3390/jcm12093266]
Abstract
The artificial intelligence (AI) systems used for diagnosing ophthalmic diseases have progressed significantly in recent years. The diagnosis of difficult eye conditions, such as cataracts, diabetic retinopathy, age-related macular degeneration, glaucoma, and retinopathy of prematurity, has become considerably less complicated as a result of the development of AI algorithms, which now rival ophthalmologists in effectiveness. However, when building AI systems for medical applications such as identifying eye diseases, addressing the challenges of safety and trustworthiness is paramount, including the emerging threat of adversarial attacks. Research has increasingly focused on understanding and mitigating these attacks, and numerous articles have discussed the topic in recent years. As a starting point for our discussion, we used the paper by Ma et al., "Understanding Adversarial Attacks on Deep Learning Based Medical Image Analysis Systems". A literature review was performed for this study, including a thorough search of open-access research papers using online sources (PubMed and Google). The research provides examples of attack strategies specific to medical images. Unfortunately, attack algorithms tailored to the various ophthalmic image types have yet to be developed; this remains an open task. Consequently, it is necessary to build algorithms that validate the computations and explain the findings of AI models. In this article, we focus on adversarial attacks, one of the best-known attack methods, which yield evidence (i.e., adversarial examples) of the lack of resilience of decision models that do not include provable guarantees. Adversarial attacks can induce inaccurate outputs in deep learning systems and can have catastrophic effects in the healthcare industry, such as health care financing fraud and misdiagnosis.
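To make the attack mechanics concrete: the fast gradient sign method (FGSM) is one of the best-known adversarial attacks in this literature, perturbing an input by a single signed-gradient step that increases the classifier's loss. The PyTorch sketch below follows that standard formulation and is not code from the cited article; `model`, `image`, `label`, and the `epsilon` value are placeholder assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    """Return an FGSM adversarial example for an image classifier.

    image: float tensor in [0, 1] with shape (N, C, H, W); label: (N,)
    class indices. The perturbation is epsilon * sign(dLoss/dInput),
    the canonical one-step attack of Goodfellow et al. (2015).
    """
    model.eval()
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    # Clamp back to the valid pixel range so the result is still an image.
    return adversarial.clamp(0.0, 1.0).detach()
```

Even a perturbation this simple can be imperceptible at small epsilon yet flip a fundus-image classifier's output, which is why the abstract calls for decision models with verifiable guarantees.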
Affiliation(s)
- Agnieszka M Zbrzezny: Faculty of Mathematics and Computer Science, University of Warmia and Mazury, 10-710 Olsztyn, Poland; Faculty of Design, SWPS University of Social Sciences and Humanities, Chodakowska 19/31, 03-815 Warsaw, Poland
- Andrzej E Grzybowski: Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, 60-836 Poznań, Poland
49
Tang YW, Ji J, Lin JW, Wang J, Wang Y, Liu Z, Hu Z, Yang JF, Ng TK, Zhang M, Pang CP, Cen LP. Automatic Detection of Peripheral Retinal Lesions From Ultrawide-Field Fundus Images Using Deep Learning. Asia Pac J Ophthalmol (Phila) 2023; 12:284-292. [PMID: 36912572] [DOI: 10.1097/apo.0000000000000599]
Abstract
PURPOSE To establish a multilabel-based deep learning (DL) algorithm for automatic detection and categorization of clinically significant peripheral retinal lesions using ultrawide-field fundus images. METHODS A total of 5958 ultrawide-field fundus images from 3740 patients were randomly split into a training set, validation set, and test set. A multilabel classifier was developed to detect rhegmatogenous retinal detachment, cystic retinal tuft, lattice degeneration, and retinal breaks. A referral decision was automatically generated based on the results for each disease class. t-distributed stochastic neighbor embedding heatmaps were used to visualize the features extracted by the neural networks. Gradient-weighted class activation mapping and guided backpropagation heatmaps were generated to investigate the image locations used for decision-making by the DL models. The performance of the classifier(s) was evaluated by sensitivity, specificity, accuracy, F1 score, area under the receiver operating characteristic curve (AUROC) with 95% CI, and area under the precision-recall curve. RESULTS In the test set, all categories achieved a sensitivity of 0.836-0.918, a specificity of 0.858-0.989, an accuracy of 0.854-0.977, an F1 score of 0.400-0.931, an AUROC of 0.9205-0.9882, and an area under the precision-recall curve of 0.6723-0.9745. The referral decisions achieved an AUROC of 0.9758 (95% CI: 0.9648-0.9869). The multilabel classifier had significantly better performance in cystic retinal tuft detection than the binary classifier (AUROC: 0.9781 vs 0.6112, P < 0.001). The model showed performance comparable with that of human experts. CONCLUSIONS This new multilabel DL classifier is capable of automatic, accurate, and early detection of clinically significant peripheral retinal lesions with various sample sizes. It can be applied in peripheral retinal screening in clinics.
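The per-class metrics reported above can be reproduced for any multilabel classifier along the following lines. This is an illustrative sketch rather than the authors' code; the label list, the 0.5 decision threshold, and the array shapes are assumptions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

LABELS = ["rhegmatogenous retinal detachment", "cystic retinal tuft",
          "lattice degeneration", "retinal breaks"]

def evaluate_multilabel(y_true, y_prob, threshold=0.5):
    """Sensitivity, specificity, AUROC, and AUPRC for each label.

    y_true, y_prob: arrays of shape (n_samples, n_labels); each label
    column must contain both classes for AUROC/AUPRC to be defined.
    """
    y_true = np.asarray(y_true)
    y_prob = np.asarray(y_prob)
    y_pred = (y_prob >= threshold).astype(int)  # independent per-label decisions
    results = {}
    for i, name in enumerate(LABELS):
        t, p = y_true[:, i], y_pred[:, i]
        tp = int(((t == 1) & (p == 1)).sum())
        tn = int(((t == 0) & (p == 0)).sum())
        fp = int(((t == 0) & (p == 1)).sum())
        fn = int(((t == 1) & (p == 0)).sum())
        results[name] = {
            "sensitivity": tp / (tp + fn),  # recall on true lesions
            "specificity": tn / (tn + fp),
            "auroc": roc_auc_score(t, y_prob[:, i]),
            "auprc": average_precision_score(t, y_prob[:, i]),
        }
    return results
```

A referral flag of the kind the abstract describes could then be derived by OR-ing the per-label decisions, though the authors' exact referral rule is not specified here.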
Affiliation(s)
- Yi-Wen Tang: Joint Shantou International Eye Center of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Jie Ji: Network and Information Center, Shantou University, Shantou, Guangdong, China
- Jian-Wei Lin: Joint Shantou International Eye Center of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Ji Wang: Joint Shantou International Eye Center of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Yun Wang: Joint Shantou International Eye Center of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Zibo Liu: Joint Shantou International Eye Center of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Zhanchi Hu: Joint Shantou International Eye Center of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Jian-Feng Yang: Joint Shantou International Eye Center of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Tsz Kin Ng: Joint Shantou International Eye Center of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China; Shantou University Medical College, Shantou, Guangdong, China; Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Mingzhi Zhang: Joint Shantou International Eye Center of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Chi Pui Pang: Joint Shantou International Eye Center of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China; Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
- Ling-Ping Cen: Joint Shantou International Eye Center of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
50
Tan Y, Zhao SX, Yang KF, Li YJ. A lightweight network guided with differential matched filtering for retinal vessel segmentation. Comput Biol Med 2023; 160:106924. [PMID: 37146492] [DOI: 10.1016/j.compbiomed.2023.106924]
Abstract
The geometric morphology of retinal vessels reflects the state of cardiovascular health, and fundus images are important reference materials for ophthalmologists. Great progress has been made in automated vessel segmentation, but few studies have focused on thin-vessel breakage and false positives in areas with lesions or low contrast. In this work, we propose a new network, differential matched filtering guided attention UNet (DMF-AU), to address these issues, incorporating a differential matched filtering layer, feature anisotropic attention, and a multiscale consistency constrained backbone to perform thin vessel segmentation. The differential matched filtering is used for the early identification of locally linear vessels, and the resulting rough vessel map guides the backbone to learn vascular details. Feature anisotropic attention reinforces the vessel features of spatial linearity at each stage of the model. Multiscale constraints reduce the loss of vessel information while pooling within large receptive fields. In tests on multiple classical datasets, the proposed model performed well compared with other algorithms on several criteria specially designed for vessel segmentation. DMF-AU is a high-performance, lightweight vessel segmentation model. The source code is at https://github.com/tyb311/DMF-AU.
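For background on the matched-filtering idea the network builds on: the classical retinal matched filter correlates the image with zero-mean, oriented kernels whose cross-section is a Gaussian (vessels appear as dark, locally linear structures) and keeps the maximum response over orientations. The sketch below follows that classical formulation only; it is not the paper's learnable differential matched-filtering layer, and all parameter values are illustrative.

```python
import numpy as np
from scipy.ndimage import convolve

def matched_filter_bank(sigma=1.5, length=9, n_orient=12, size=15):
    """Bank of oriented matched-filter kernels for dark, line-like vessels.

    Each kernel is a negative Gaussian cross-profile swept along a line
    segment, made zero-mean so that flat image regions give no response.
    """
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    kernels = []
    for k in range(n_orient):
        theta = k * np.pi / n_orient
        u = xs * np.cos(theta) + ys * np.sin(theta)   # along the vessel
        v = -xs * np.sin(theta) + ys * np.cos(theta)  # across the vessel
        kern = np.where(np.abs(u) <= length / 2,
                        -np.exp(-(v ** 2) / (2 * sigma ** 2)), 0.0)
        kern -= kern.mean()  # zero-mean: ignore uniform brightness offsets
        kernels.append(kern)
    return kernels

def vessel_response(image, **kwargs):
    """Maximum matched-filter response over all orientations.

    image: 2D float array, e.g., the green channel of a fundus
    photograph, where vessel contrast is typically strongest.
    """
    return np.max([convolve(image, k) for k in matched_filter_bank(**kwargs)],
                  axis=0)
```

Thresholding this response map gives the kind of rough vessel map that, in the paper's design, guides the backbone toward thin-vessel details.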
Affiliation(s)
- Yubo Tan: The MOE Key Laboratory for Neuroinformation, Radiation Oncology Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, China
- Shi-Xuan Zhao: The MOE Key Laboratory for Neuroinformation, Radiation Oncology Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, China
- Kai-Fu Yang: The MOE Key Laboratory for Neuroinformation, Radiation Oncology Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, China
- Yong-Jie Li: The MOE Key Laboratory for Neuroinformation, Radiation Oncology Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, China