1. Lim ZW, Li J, Wong D, Chung J, Toh A, Lee JL, Lam C, Balakrishnan M, Chia A, Chua J, Girard M, Hoang QV, Chong R, Wong CW, Saw SM, Schmetterer L, Brennan N, Ang M. Comparison of manual and artificial intelligence-automated choroidal thickness segmentation of optical coherence tomography imaging in myopic adults. Eye Vis (Lond) 2024; 11:21. PMID: 38831465; PMCID: PMC11145894; DOI: 10.1186/s40662-024-00385-2.
Abstract
BACKGROUND Myopia affects 1.4 billion individuals worldwide. Notably, there is increasing evidence that choroidal thickness plays an important role in myopia and the risk of developing myopia-related complications. With advances in artificial intelligence (AI), choroidal thickness segmentation can now be automated, offering inherent advantages such as better repeatability, reduced grader variability, and less reliance on manpower. We therefore aimed to evaluate the agreement between AI-automated and manually segmented measurements of subfoveal choroidal thickness (SFCT) using two swept-source optical coherence tomography (OCT) systems. METHODS Subjects aged ≥ 16 years, with myopia of ≥ 0.50 diopters in both eyes, were recruited from the Prospective Myopia Cohort Study in Singapore (PROMYSE). OCT scans were acquired using Triton DRI-OCT and PLEX Elite 9000. OCT images were segmented both automatically, with an established SA-Net architecture, and manually, using a standard technique with adjudication by two independent graders. SFCT was subsequently determined from the segmentations. Bland-Altman plots and the intraclass correlation coefficient (ICC) were used to evaluate agreement. RESULTS A total of 229 subjects (456 eyes) with a mean ± standard deviation (SD) age of 34.1 ± 10.4 years were included. The overall SFCT (mean ± SD) based on manual segmentation was 216.9 ± 82.7 µm with Triton DRI-OCT and 239.3 ± 84.3 µm with PLEX Elite 9000. ICC values demonstrated excellent agreement between AI-automated and manually segmented SFCT measurements (PLEX Elite 9000: ICC = 0.937, 95% CI: 0.922 to 0.949, P < 0.001; Triton DRI-OCT: ICC = 0.887, 95% CI: 0.608 to 0.950, P < 0.001). For PLEX Elite 9000, manually segmented measurements were generally thicker than AI-automated measurements, with a fixed bias of 6.3 µm (95% CI: 3.8 to 8.9, P < 0.001) and a proportional bias of 0.120 (P < 0.001). Conversely, manually segmented measurements were thinner than AI-automated measurements for Triton DRI-OCT, with a fixed bias of -26.7 µm (95% CI: -29.7 to -23.7, P < 0.001) and a proportional bias of -0.090 (P < 0.001). CONCLUSION We observed excellent agreement between manual and AI-automated choroidal segmentation measurements using images from two SS-OCT systems. Given its advantages over manual segmentation, automated segmentation may emerge as the primary method of choroidal thickness measurement in the future.
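The fixed and proportional biases reported in this abstract come from a Bland-Altman analysis: the fixed bias is the mean of the pairwise differences, and the proportional bias is the slope of the differences regressed on the pairwise means. A minimal illustrative sketch (not the authors' code; the function name and toy data are ours):

```python
import numpy as np

def bland_altman_bias(manual, auto):
    """Fixed bias (mean of the pairwise differences) and proportional bias
    (least-squares slope of the differences regressed on the pairwise means),
    as in a Bland-Altman analysis of two measurement methods."""
    manual = np.asarray(manual, dtype=float)
    auto = np.asarray(auto, dtype=float)
    diff = manual - auto            # manual minus automated SFCT, in µm
    mean = (manual + auto) / 2.0
    fixed_bias = diff.mean()
    proportional_bias = np.polyfit(mean, diff, 1)[0]
    return fixed_bias, proportional_bias

# Toy data: automated values offset from manual by a constant 5 µm,
# so the fixed bias is 5 µm and the proportional bias is ~0.
manual = np.array([200.0, 220.0, 240.0, 260.0])
auto = manual - 5.0
bias, slope = bland_altman_bias(manual, auto)
```

The ICC reported alongside would typically be computed from a two-way ANOVA; libraries such as `pingouin` (`intraclass_corr`) implement the standard forms.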
Affiliation(s)
- Zhi Wei Lim
  - Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Jonathan Li
  - Department of Ophthalmology, University of California, San Francisco, CA, USA
- Damon Wong
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
  - Ophthalmology and Visual Sciences Department, Duke-NUS Medical School, Singapore
  - SERI-NTU Advanced Ocular Engineering (STANCE), Singapore Eye Research Institute and Nanyang Technological University, Singapore
  - Center for Medical Physics and Biomedical Engineering, Medical University Vienna, Vienna, Austria
- Joey Chung
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Angeline Toh
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Jia Ling Lee
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Crystal Lam
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Maithily Balakrishnan
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Audrey Chia
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
  - Duke-NUS Medical School, National University of Singapore, Singapore
- Jacqueline Chua
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
  - Ophthalmology and Visual Sciences Department, Duke-NUS Medical School, Singapore
  - SERI-NTU Advanced Ocular Engineering (STANCE), Singapore Eye Research Institute and Nanyang Technological University, Singapore
- Michael Girard
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
  - Duke-NUS Medical School, National University of Singapore, Singapore
  - Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
- Quan V Hoang
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
  - Duke-NUS Medical School, National University of Singapore, Singapore
  - Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
  - Department of Ophthalmology, Edward S. Harkness Eye Institute, Columbia University Vagelos College of Physicians and Surgeons, New York, NY, USA
- Rachel Chong
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
  - Duke-NUS Medical School, National University of Singapore, Singapore
- Chee Wai Wong
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Seang Mei Saw
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
  - Duke-NUS Medical School, National University of Singapore, Singapore
  - Saw Swee Hock School of Public Health, National University of Singapore, Singapore
- Leopold Schmetterer
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
  - SERI-NTU Advanced Ocular Engineering (STANCE), Singapore Eye Research Institute and Nanyang Technological University, Singapore
  - Center for Medical Physics and Biomedical Engineering, Medical University Vienna, Vienna, Austria
  - Duke-NUS Medical School, National University of Singapore, Singapore
  - Institute of Molecular and Clinical Ophthalmology, Basel, Switzerland
  - School of Chemistry, Chemical Engineering and Biotechnology, Nanyang Technological University, Singapore
  - Department of Clinical Pharmacology, Medical University Vienna, Vienna, Austria
- Marcus Ang
  - Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
  - Ophthalmology and Visual Sciences Department, Duke-NUS Medical School, Singapore
2. Martin E, Cook AG, Frost SM, Turner AW, Chen FK, McAllister IL, Nolde JM, Schlaich MP. Ocular biomarkers: useful incidental findings by deep learning algorithms in fundus photographs. Eye (Lond) 2024. PMID: 38734746; DOI: 10.1038/s41433-024-03085-2.
Abstract
BACKGROUND/OBJECTIVES Artificial intelligence can assist with ocular image analysis for screening and diagnosis, but it is not yet capable of autonomous full-spectrum screening. Hypothetically, false-positive results may hold unrealized screening potential, arising from signals that persist despite training and/or from ambiguous signals such as biomarker overlap or high comorbidity. The study aimed to explore the potential to detect clinically useful incidental ocular biomarkers by screening fundus photographs of hypertensive adults using diabetic deep learning algorithms. SUBJECTS/METHODS Patients referred for treatment-resistant hypertension were imaged at a hospital unit in Perth, Australia, between 2016 and 2022. A single 45° colour fundus photograph selected for each of the 433 imaged participants was processed by three deep learning algorithms. Two expert retinal specialists graded all false-positive results for diabetic retinopathy in non-diabetic participants. RESULTS Of the 29 non-diabetic participants misclassified as positive for diabetic retinopathy, 28 (97%) had clinically useful retinal biomarkers. The models designed to screen for fewer diseases captured more incidental disease. All three algorithms showed a positive correlation between the severity of hypertensive retinopathy and misclassified diabetic retinopathy. CONCLUSIONS The results suggest that diabetic deep learning models may be responsive to hypertensive and other clinically useful retinal biomarkers within an at-risk, hypertensive cohort. The observation that models trained for fewer diseases captured more incidental pathology supports the hypothesis that self-supervised learning could be used to develop autonomous comprehensive screening. Meanwhile, non-referable and false-positive outputs of other deep learning screening models could be explored for immediate clinical use in other populations.
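The positive correlation reported between hypertensive retinopathy severity and diabetic-model false positives is the kind of association a rank correlation captures. A minimal sketch with invented illustrative data (the arrays below are not from the study):

```python
import numpy as np
from scipy import stats

# Hypothetical data: hypertensive retinopathy grade (0-4) per participant,
# and whether the diabetic model flagged the image as a DR false positive (0/1).
htn_grade = np.array([0, 0, 1, 1, 2, 2, 3, 3, 4, 4])
dr_false_positive = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1])

# Spearman rank correlation: positive rho means higher hypertensive grades
# co-occur with more misclassifications.
rho, p = stats.spearmanr(htn_grade, dr_false_positive)
```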
Affiliation(s)
- Eve Martin
  - Commonwealth Scientific and Industrial Research Organisation (CSIRO), Kensington, WA, Australia
  - School of Population and Global Health, The University of Western Australia, Crawley, Australia
  - Dobney Hypertension Centre - Royal Perth Hospital Unit, Medical School, The University of Western Australia, Perth, Australia
  - Australian e-Health Research Centre, Floreat, WA, Australia
- Angus G Cook
  - School of Population and Global Health, The University of Western Australia, Crawley, Australia
- Shaun M Frost
  - Commonwealth Scientific and Industrial Research Organisation (CSIRO), Kensington, WA, Australia
  - Australian e-Health Research Centre, Floreat, WA, Australia
- Angus W Turner
  - Lions Eye Institute, Nedlands, WA, Australia
  - Centre for Ophthalmology and Visual Science, The University of Western Australia, Perth, Australia
- Fred K Chen
  - Lions Eye Institute, Nedlands, WA, Australia
  - Centre for Ophthalmology and Visual Science, The University of Western Australia, Perth, Australia
  - Centre for Eye Research Australia, The Royal Victorian Eye and Ear Hospital, East Melbourne, VIC, Australia
  - Ophthalmology, Department of Surgery, The University of Melbourne, East Melbourne, VIC, Australia
  - Ophthalmology Department, Royal Perth Hospital, Perth, Australia
- Ian L McAllister
  - Lions Eye Institute, Nedlands, WA, Australia
  - Centre for Ophthalmology and Visual Science, The University of Western Australia, Perth, Australia
- Janis M Nolde
  - Dobney Hypertension Centre - Royal Perth Hospital Unit, Medical School, The University of Western Australia, Perth, Australia
  - Departments of Cardiology and Nephrology, Royal Perth Hospital, Perth, Australia
- Markus P Schlaich
  - Dobney Hypertension Centre - Royal Perth Hospital Unit, Medical School, The University of Western Australia, Perth, Australia
  - Departments of Cardiology and Nephrology, Royal Perth Hospital, Perth, Australia
3. Driban M, Yan A, Selvam A, Ong J, Vupparaboina KK, Chhablani J. Artificial intelligence in chorioretinal pathology through fundoscopy: a comprehensive review. Int J Retina Vitreous 2024; 10:36. PMID: 38654344; PMCID: PMC11036694; DOI: 10.1186/s40942-024-00554-4.
Abstract
BACKGROUND Applications for artificial intelligence (AI) in ophthalmology are continually evolving. Fundoscopy is one of the oldest ocular imaging techniques but remains a mainstay in posterior segment imaging due to its prevalence, ease of use, and ongoing technological advancement. AI has been leveraged for fundoscopy to accomplish core tasks including segmentation, classification, and prediction. MAIN BODY In this article we provide a review of AI in fundoscopy applied to representative chorioretinal pathologies, including diabetic retinopathy and age-related macular degeneration, among others. We conclude with a discussion of future directions and current limitations. SHORT CONCLUSION As AI evolves, it will become increasingly essential for the modern ophthalmologist to understand its applications and limitations to improve patient outcomes and continue to innovate.
Affiliation(s)
- Matthew Driban
  - Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Audrey Yan
  - Department of Medicine, West Virginia School of Osteopathic Medicine, Lewisburg, WV, USA
- Amrish Selvam
  - Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Joshua Ong
  - Michigan Medicine, University of Michigan, Ann Arbor, USA
- Jay Chhablani
  - Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
4. Chen R, Zhang W, Song F, Yu H, Cao D, Zheng Y, He M, Shi D. Translating color fundus photography to indocyanine green angiography using deep-learning for age-related macular degeneration screening. NPJ Digit Med 2024; 7:34. PMID: 38347098; PMCID: PMC10861476; DOI: 10.1038/s41746-024-01018-7.
Abstract
Age-related macular degeneration (AMD) is the leading cause of central vision impairment among the elderly. Effective and accurate AMD screening tools are urgently needed. Indocyanine green angiography (ICGA) is a well-established technique for detecting chorioretinal diseases, but its invasive nature and potential risks impede its routine clinical application. Here, we developed a deep-learning model capable of generating realistic ICGA images from color fundus photography (CF) using generative adversarial networks (GANs) and evaluated its performance in AMD classification. The model was developed with 99,002 CF-ICGA pairs from a tertiary center. The quality of the generated ICGA images underwent objective evaluation using the mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM), among other metrics, and subjective evaluation by two experienced ophthalmologists. The model generated realistic early, mid and late-phase ICGA images, with SSIM ranging from 0.57 to 0.65. Subjective quality scores ranged from 1.46 to 2.74 on a five-point scale (where 1 denotes real ICGA image quality; Kappa 0.79-0.84). Moreover, we assessed the application of the translated ICGA images in AMD screening on an external dataset (n = 13,887) by calculating the area under the ROC curve (AUC) for AMD classification. Combining generated ICGA with real CF images improved the accuracy of AMD classification, with the AUC increasing from 0.93 to 0.97 (P < 0.001). These results suggest that CF-to-ICGA translation can serve as a cross-modal data augmentation method to address the data hunger often encountered in deep-learning research, and as a promising add-on for population-based AMD screening. Real-world validation is warranted before clinical use.
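The objective image-quality metrics named in this abstract reduce to short NumPy expressions; SSIM is usually taken from a library such as `skimage.metrics.structural_similarity` rather than hand-rolled. A minimal sketch of MAE and PSNR with toy data (not the authors' evaluation code):

```python
import numpy as np

def mae(a, b):
    """Mean absolute error between two images of the same shape."""
    return np.mean(np.abs(a.astype(float) - b.astype(float)))

def psnr(a, b, data_range=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

# Toy 8x8 "images": the generated image is uniformly 10 intensity levels off,
# so MAE is exactly 10 and PSNR is 10*log10(255^2/100) ≈ 28.13 dB.
ref = np.full((8, 8), 100.0)
gen = ref + 10.0
```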
Affiliation(s)
- Ruoyu Chen
  - Experimental Ophthalmology, School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
  - Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Weiyi Zhang
  - Experimental Ophthalmology, School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
  - Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Fan Song
  - Experimental Ophthalmology, School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
  - Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Honghua Yu
  - Department of Ophthalmology, Guangdong Academy of Medical Sciences, Guangdong Provincial People's Hospital, Southern Medical University, Guangzhou, China
- Dan Cao
  - Department of Ophthalmology, Guangdong Academy of Medical Sciences, Guangdong Provincial People's Hospital, Southern Medical University, Guangzhou, China
- Yingfeng Zheng
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Mingguang He
  - Experimental Ophthalmology, School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
  - Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
  - Centre for Eye and Vision Research (CEVR), 17W Hong Kong Science Park, Hong Kong SAR, China
- Danli Shi
  - Experimental Ophthalmology, School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
  - Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
5. Interlenghi M, Sborgia G, Venturi A, Sardone R, Pastore V, Boscia G, Landini L, Scotti G, Niro A, Moscara F, Bandi L, Salvatore C, Castiglioni I. A radiomic-based machine learning system to diagnose age-related macular degeneration from ultra-widefield fundus retinography. Diagnostics (Basel) 2023; 13:2965. PMID: 37761333; PMCID: PMC10528426; DOI: 10.3390/diagnostics13182965.
Abstract
The present study investigated the potential of radiomics to develop an explainable AI-based system for ultra-widefield fundus retinographies (UWF-FRTs), with the objective of predicting the presence of early signs of Age-related Macular Degeneration (AMD) and stratifying subjects with low versus high risk of AMD. The ultimate aim was to provide clinicians with an automatic classifier and a signature of objective quantitative image biomarkers of AMD. The Machine Learning (ML) and radiomics approach was based on intensity and texture analysis in the macular region, detected by a Deep Learning (DL)-based macular detector. Two hundred and twenty-six UWF-FRTs were retrospectively collected from two centres and manually annotated to train and test the algorithms. Notably, the combination of the ML-based radiomics model and the DL-based macular detector achieved 93% sensitivity and 74% specificity when applied to data from the centre used for external testing, capturing explainable features associated with drusen or pigmentary abnormalities. Compared with the human operator's annotations, the system yielded a Cohen's κ of 0.79, demonstrating substantial concordance. To our knowledge, these are the first results provided by a radiomic approach for AMD, supporting the suitability of an explainable feature-extraction method combined with ML for UWF-FRT.
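Radiomic intensity analysis of a macular region of interest starts from first-order statistics of the kind sketched below; this is a hypothetical illustration, not the study's pipeline (real radiomics work typically uses a package such as pyradiomics, and texture features come from grey-level co-occurrence matrices):

```python
import numpy as np

def first_order_features(region, bins=16):
    """First-order radiomic intensity statistics over a region of interest:
    mean, standard deviation, and histogram entropy of the pixel intensities."""
    r = region.astype(float).ravel()
    hist, _ = np.histogram(r, bins=bins)
    p = hist / r.size
    p = p[p > 0]                       # drop empty bins before the log
    return {
        "mean": float(r.mean()),
        "std": float(r.std()),
        "entropy": float(-(p * np.log2(p)).sum()),
    }

# Toy ROI: half dark (10) and half bright (200) pixels, so the histogram mass
# splits evenly into two bins and the entropy is exactly 1 bit.
roi = np.array([[10, 10, 200], [10, 200, 200]])
feats = first_order_features(roi)
```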
Affiliation(s)
- Matteo Interlenghi
  - DeepTrace Technologies S.R.L., 20122 Milan, Italy
- Giancarlo Sborgia
  - Department of Medical Science, Neuroscience and Sense Organs, Eye Clinic, University of Bari Aldo Moro, 70121 Bari, Italy
- Alessandro Venturi
  - DeepTrace Technologies S.R.L., 20122 Milan, Italy
- Rodolfo Sardone
  - National Institute of Gastroenterology, IRCCS “Saverio de Bellis”, 70013 Castellana Grotte, Italy
  - Unit of Statistics and Epidemiology, Local Healthcare Authority of Taranto, 74121 Taranto, Italy
- Valentina Pastore
  - Department of Medical Science, Neuroscience and Sense Organs, Eye Clinic, University of Bari Aldo Moro, 70121 Bari, Italy
- Giacomo Boscia
  - Department of Medical Science, Neuroscience and Sense Organs, Eye Clinic, University of Bari Aldo Moro, 70121 Bari, Italy
- Luca Landini
  - Department of Medical Science, Neuroscience and Sense Organs, Eye Clinic, University of Bari Aldo Moro, 70121 Bari, Italy
- Giacomo Scotti
  - Department of Medical Science, Neuroscience and Sense Organs, Eye Clinic, University of Bari Aldo Moro, 70121 Bari, Italy
- Alfredo Niro
  - Eye Clinic, Hospital “SS. Annunziata”, ASL Taranto, 74121 Taranto, Italy
- Federico Moscara
  - Department of Medical Science, Neuroscience and Sense Organs, Eye Clinic, University of Bari Aldo Moro, 70121 Bari, Italy
- Luca Bandi
  - DeepTrace Technologies S.R.L., 20122 Milan, Italy
- Christian Salvatore
  - DeepTrace Technologies S.R.L., 20122 Milan, Italy
  - Department of Science, Technology and Society, University School for Advanced Studies IUSS Pavia, 27100 Pavia, Italy
- Isabella Castiglioni
  - Department of Physics “Giuseppe Occhialini”, University of Milan-Bicocca, 20126 Milan, Italy
6. He S, Bulloch G, Zhang L, Xie Y, Wu W, He Y, Meng W, Shi D, He M. Cross-camera performance of deep learning algorithms to diagnose common ophthalmic diseases: a comparative study highlighting feasibility to portable fundus camera use. Curr Eye Res 2023; 48:857-863. PMID: 37246918; DOI: 10.1080/02713683.2023.2215984.
Abstract
PURPOSE To compare the inter-camera performance and consistency of various deep learning (DL) diagnostic algorithms applied to fundus images taken with a desktop Topcon camera and a portable Optain camera. METHODS Participants over 18 years of age were enrolled between November 2021 and April 2022. Pair-wise fundus photographs were collected from each patient in a single visit: once with the Topcon (the reference camera) and once with the portable Optain camera (the new target camera). These were analyzed by three previously validated DL models for the detection of diabetic retinopathy (DR), age-related macular degeneration (AMD), and glaucomatous optic neuropathy (GON). Ophthalmologists manually analyzed all fundus photos for the presence of DR, and these gradings served as the ground truth. Sensitivity, specificity, area under the curve (AUC), and agreement between cameras (estimated by Cohen's weighted kappa, κ) were the primary outcomes of this study. RESULTS A total of 504 patients were recruited. After excluding 12 photographs with matching errors and 59 photographs of low quality, 906 pairs of Topcon-Optain fundus photos were available for algorithm assessment. The Topcon and Optain cameras had excellent consistency (κ = 0.80) for the referable DR algorithm, while consistency was moderate for AMD (κ = 0.41) and poor for GON (κ = 0.32). For the DR model, Topcon and Optain achieved sensitivities of 97.70% and 97.67% and specificities of 97.92% and 97.93%, respectively, with no significant difference between the two cameras (McNemar's test: χ² = 0.08, p = .78). CONCLUSION The Topcon and Optain cameras had excellent consistency for detecting referable DR, although the performance of the AMD and GON models was unsatisfactory. This study demonstrates a method of using pair-wise images to evaluate DL models across reference and new fundus cameras.
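The two agreement statistics quoted here, Cohen's kappa and McNemar's chi-square, are easy to verify from first principles. This sketch uses an unweighted kappa on binary outputs (the paper reports a weighted kappa) and invented camera outputs for illustration:

```python
import numpy as np

def cohens_kappa(a, b):
    """Unweighted Cohen's kappa between two binary raters, e.g. the
    referable-DR calls made on the same eyes from two cameras' images."""
    a, b = np.asarray(a), np.asarray(b)
    po = np.mean(a == b)                                   # observed agreement
    pe = sum(np.mean(a == k) * np.mean(b == k) for k in np.union1d(a, b))
    return (po - pe) / (1 - pe)                            # chance-corrected

def mcnemar_chi2(a, b):
    """McNemar chi-square statistic, computed from the discordant-pair counts."""
    a, b = np.asarray(a), np.asarray(b)
    n01 = np.sum((a == 0) & (b == 1))
    n10 = np.sum((a == 1) & (b == 0))
    return (n10 - n01) ** 2 / (n10 + n01)

# Hypothetical paired calls: the cameras disagree on the last two images,
# once in each direction, so McNemar's statistic is 0.
topcon = np.array([1, 1, 0, 0, 1, 0])
optain = np.array([1, 1, 0, 0, 0, 1])
```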
Affiliation(s)
- Shuang He
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
  - Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Gabriella Bulloch
  - University of Melbourne, Melbourne, Victoria, Australia
  - Centre for Eye Research Australia, Melbourne, Victoria, Australia
- Liangxin Zhang
  - Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
- Yiyu Xie
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
- Weiyu Wu
  - Department of Ophthalmology, Guangdong Academy of Medical Sciences, Guangdong Provincial People's Hospital, Guangzhou, China
- Yahong He
  - Department of Ophthalmology, Guangdong Academy of Medical Sciences, Guangdong Provincial People's Hospital, Guangzhou, China
- Wei Meng
  - Eyetelligence Ltd, Melbourne, Victoria, Australia
- Danli Shi
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
  - Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Mingguang He
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangzhou, China
  - Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
  - University of Melbourne, Melbourne, Victoria, Australia
  - Centre for Eye Research Australia, Melbourne, Victoria, Australia
  - Eyetelligence Ltd, Melbourne, Victoria, Australia
7. Crincoli E, Sacconi R, Querques G. Reshaping the use of artificial intelligence in ophthalmology: sometimes you need to go backwards. Retina 2023; 43:1429-1432. PMID: 37343295; DOI: 10.1097/iae.0000000000003878.
Affiliation(s)
- Emanuele Crincoli
  - Department of Ophthalmology, University Vita-Salute, IRCCS San Raffaele Scientific Institute, Milan, Italy
8. Freiberg J, Welikala RA, Rovelt J, Owen CG, Rudnicka AR, Kolko M, Barman SA. Automated analysis of vessel morphometry in retinal images from a Danish high street optician setting. PLoS One 2023; 18:e0290278. PMID: 37616264; PMCID: PMC10449151; DOI: 10.1371/journal.pone.0290278.
Abstract
PURPOSE To evaluate the test performance of the QUARTZ (QUantitative Analysis of Retinal vessel Topology and siZe) software in detecting retinal features from retinal images captured by healthcare professionals in a Danish high street optician chain, compared with its performance on other large population studies (e.g., UK Biobank) where retinal images were captured by non-experts. METHOD The FOREVERP dataset (Finding Ophthalmic Risk and Evaluating the Value of Eye exams and their predictive Reliability, Pilot) contains retinal images obtained from a Danish high street optician chain. The QUARTZ algorithm uses both image processing and machine learning to determine retinal image quality, vessel segmentation, vessel width, vessel classification (arterioles or venules), and optic disc localization. Outcomes were evaluated with metrics including sensitivity, specificity, and accuracy, against human-expert ground truths. RESULTS QUARTZ's performance was evaluated on a subset of 3,682 images from the FOREVERP database. 80.55% of the FOREVERP images were labelled as of adequate quality, compared to 71.53% of UK Biobank images, with a vessel segmentation sensitivity of 74.64% and specificity of 98.41% (FOREVERP) versus a sensitivity of 69.12% and specificity of 98.88% (UK Biobank). The mean (± standard deviation) ground-truth vessel width was 16.21 (4.73) pixels, compared to 17.01 (4.49) pixels predicted by QUARTZ, a difference of -0.80 (1.96) pixels. The differences were stable across a range of vessels. The detection rate for optic disc localisation was similar for the two datasets. CONCLUSION QUARTZ showed high performance when evaluated on the FOREVERP dataset and demonstrated robustness across datasets, providing validity to direct comparisons and pooling of retinal feature measures across data sources.
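The vessel-segmentation sensitivity and specificity figures above are pixel-wise confusion-matrix quantities. A minimal sketch with hypothetical tiny masks (illustrative only, not the QUARTZ evaluation code):

```python
import numpy as np

def seg_metrics(pred, truth):
    """Pixel-wise sensitivity, specificity and accuracy of a binary
    vessel-segmentation mask against a human-graded ground truth."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)          # vessel pixels found
    tn = np.sum(~pred & ~truth)        # background pixels correctly left out
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / pred.size,
    }

# Toy masks: 2 vessel pixels in truth, of which the prediction finds 1,
# and no false positives.
truth = np.array([[1, 1, 0, 0]])
pred = np.array([[1, 0, 0, 0]])
m = seg_metrics(pred, truth)
```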
Affiliation(s)
- Josefine Freiberg
  - Department of Drug Design and Pharmacology, University of Copenhagen, Copenhagen, Denmark
- Roshan A. Welikala
  - School of Computer Science and Mathematics, Kingston University, Surrey, United Kingdom
- Jens Rovelt
  - Department of Drug Design and Pharmacology, University of Copenhagen, Copenhagen, Denmark
- Christopher G. Owen
  - Population Health Research Institute, St. George’s, University of London, London, United Kingdom
- Alicja R. Rudnicka
  - Population Health Research Institute, St. George’s, University of London, London, United Kingdom
- Miriam Kolko
  - Department of Drug Design and Pharmacology, University of Copenhagen, Copenhagen, Denmark
  - Department of Ophthalmology, Copenhagen University Hospital, Rigshospitalet, Glostrup, Copenhagen, Denmark
- Sarah A. Barman
  - School of Computer Science and Mathematics, Kingston University, Surrey, United Kingdom
9. Matta S, Lamard M, Conze PH, Le Guilcher A, Lecat C, Carette R, Basset F, Massin P, Rottier JB, Cochener B, Quellec G. Towards population-independent, multi-disease detection in fundus photographs. Sci Rep 2023; 13:11493. PMID: 37460629; DOI: 10.1038/s41598-023-38610-y.
Abstract
Independent validation studies of automatic diabetic retinopathy screening systems have recently shown a drop in screening performance on external data. Beyond diabetic retinopathy, this study investigates the generalizability of deep learning (DL) algorithms for screening various ocular anomalies in fundus photographs across heterogeneous populations and imaging protocols. The following datasets are considered: OPHDIAT (France, diabetic population), OphtaMaine (France, general population), RIADD (India, general population) and ODIR (China, general population). Two multi-disease DL algorithms were developed: a Single-Dataset (SD) network, trained on the largest dataset (OPHDIAT), and a Multiple-Dataset (MD) network, trained on multiple datasets simultaneously. To assess their generalizability, both algorithms were evaluated when training and test data originated from overlapping datasets and when they originated from disjoint datasets. The SD network achieved a mean per-disease area under the receiver operating characteristic curve (mAUC) of 0.9571 on OPHDIAT. However, it generalized poorly to the other three datasets (mAUC < 0.9). When all four datasets were involved in training, the MD network significantly outperformed the SD network (p = 0.0058), indicating improved generalizability. However, in leave-one-dataset-out experiments, the performance of the MD network was significantly lower on populations unseen during training than on populations involved in training (p < 0.0001), indicating imperfect generalizability.
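The mAUC used to compare the SD and MD networks is simply the per-disease ROC AUC averaged over diseases, and each AUC can be computed with the Mann-Whitney identity: the probability that a randomly chosen positive scores higher than a randomly chosen negative. A hypothetical sketch (a real pipeline would more likely call `sklearn.metrics.roc_auc_score`):

```python
import numpy as np

def auc(scores, labels):
    """ROC AUC via the Mann-Whitney identity: the fraction of
    (positive, negative) pairs the positive outranks, with ties counting half."""
    scores, labels = np.asarray(scores, dtype=float), np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def mauc(per_disease):
    """Mean per-disease AUC over (scores, labels) pairs, one per disease."""
    return float(np.mean([auc(s, y) for s, y in per_disease]))
```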
Affiliation(s)
- Sarah Matta
  - Université de Bretagne Occidentale, Brest, Bretagne, France
  - INSERM, UMR 1101, Brest, F-29200, France
- Mathieu Lamard
  - Université de Bretagne Occidentale, Brest, Bretagne, France
  - INSERM, UMR 1101, Brest, F-29200, France
- Pierre-Henri Conze
  - INSERM, UMR 1101, Brest, F-29200, France
  - IMT Atlantique, Brest, F-29200, France
- Clément Lecat
  - Evolucare Technologies, Villers-Bretonneux, F-80800, France
- Fabien Basset
  - Evolucare Technologies, Villers-Bretonneux, F-80800, France
- Pascale Massin
  - Service d'Ophtalmologie, Hôpital Lariboisière, APHP, Paris, F-75475, France
- Jean-Bernard Rottier
  - Bâtiment de consultation porte 14 Pôle Santé Sud CMCM, 28 Rue de Guetteloup, Le Mans, F-72100, France
- Béatrice Cochener
  - Université de Bretagne Occidentale, Brest, Bretagne, France
  - INSERM, UMR 1101, Brest, F-29200, France
  - Service d'Ophtalmologie, CHRU Brest, Brest, F-29200, France
10
Li Z, Wang L, Wu X, Jiang J, Qiang W, Xie H, Zhou H, Wu S, Shao Y, Chen W. Artificial intelligence in ophthalmology: The path to the real-world clinic. Cell Rep Med 2023:101095. [PMID: 37385253] [PMCID: PMC10394169] [DOI: 10.1016/j.xcrm.2023.101095]
Abstract
Artificial intelligence (AI) has great potential to transform healthcare by enhancing the workflow and productivity of clinicians, enabling existing staff to serve more patients, improving patient outcomes, and reducing health disparities. In ophthalmology, AI systems have shown performance comparable with, or even better than, experienced ophthalmologists in tasks such as diabetic retinopathy detection and grading. Despite these promising results, however, very few AI systems have been deployed in real-world clinical settings, calling into question their true clinical value. This review provides an overview of the main current AI applications in ophthalmology, describes the challenges that must be overcome before clinical implementation of AI systems, and discusses strategies that may pave the way to their clinical translation.
Affiliation(s)
- Zhongwen Li
  - Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
  - School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
- Lei Wang
  - School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
- Xuefang Wu
  - Guizhou Provincial People's Hospital, Guizhou University, Guiyang 550002, China
- Jiewei Jiang
  - School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an 710121, China
- Wei Qiang
  - Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
- He Xie
  - School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
- Hongjian Zhou
  - Department of Computer Science, University of Oxford, Oxford, Oxfordshire OX1 2JD, UK
- Shanjun Wu
  - Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
- Yi Shao
  - Department of Ophthalmology, the First Affiliated Hospital of Nanchang University, Nanchang 330006, China
- Wei Chen
  - Ningbo Eye Hospital, Wenzhou Medical University, Ningbo 315000, China
  - School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, China
11
Gomes RFT, Schuch LF, Martins MD, Honório EF, de Figueiredo RM, Schmith J, Machado GN, Carrard VC. Use of Deep Neural Networks in the Detection and Automated Classification of Lesions Using Clinical Images in Ophthalmology, Dermatology, and Oral Medicine-A Systematic Review. J Digit Imaging 2023; 36:1060-1070. [PMID: 36650299] [PMCID: PMC10287602] [DOI: 10.1007/s10278-023-00775-3]
Abstract
Artificial neural networks (ANNs) are artificial intelligence (AI) techniques used for the automated recognition and classification of pathological changes in clinical images in areas such as ophthalmology, dermatology, and oral medicine. The combination of enterprise imaging and AI is gaining attention for its potential benefits in healthcare areas such as cardiology, dermatology, ophthalmology, pathology, physiatry, radiation oncology, radiology, and endoscopy. The present study aimed to analyze, through a systematic literature review, the performance of ANNs and deep learning in the automated recognition and classification of lesions in clinical images compared with human performance. Following the PRISMA 2020 approach (Preferred Reporting Items for Systematic Reviews and Meta-Analyses), four databases were searched for studies that used AI to diagnose lesions in ophthalmology, dermatology, and oral medicine. Quantitative and qualitative analyses of the articles that met the inclusion criteria were performed. The search yielded 60 included studies. Interest in the topic has increased, especially in the last 3 years. The performance of AI models is promising, with high accuracy, sensitivity, and specificity; most achieved outcomes equivalent to human comparators. The reproducibility of model performance in real-life practice has been reported as a critical point. Study designs and results have progressively improved. AI resources have the potential to contribute to several areas of health and are likely to be incorporated into everyday practice in the coming years, improving precision and reducing the time required for the diagnostic process.
Affiliation(s)
- Rita Fabiane Teixeira Gomes
  - Graduate Program in Dentistry, School of Dentistry, Federal University of Rio Grande Do Sul, Barcelos 2492/503, Bairro Santana, Porto Alegre, RS, CEP 90035-003, Brazil
- Lauren Frenzel Schuch
  - Department of Oral Diagnosis, Piracicaba Dental School, University of Campinas, Piracicaba, Brazil
- Manoela Domingues Martins
  - Graduate Program in Dentistry, School of Dentistry, Federal University of Rio Grande Do Sul, Barcelos 2492/503, Bairro Santana, Porto Alegre, RS, CEP 90035-003, Brazil
  - Department of Oral Diagnosis, Piracicaba Dental School, University of Campinas, Piracicaba, Brazil
- Rodrigo Marques de Figueiredo
  - Technology in Automation and Electronics Laboratory - TECAE Lab, University of Vale Do Rio Dos Sinos - UNISINOS, São Leopoldo, Brazil
- Jean Schmith
  - Technology in Automation and Electronics Laboratory - TECAE Lab, University of Vale Do Rio Dos Sinos - UNISINOS, São Leopoldo, Brazil
- Giovanna Nunes Machado
  - Technology in Automation and Electronics Laboratory - TECAE Lab, University of Vale Do Rio Dos Sinos - UNISINOS, São Leopoldo, Brazil
- Vinicius Coelho Carrard
  - Graduate Program in Dentistry, School of Dentistry, Federal University of Rio Grande Do Sul, Barcelos 2492/503, Bairro Santana, Porto Alegre, RS, CEP 90035-003, Brazil
  - Department of Epidemiology, School of Medicine, TelessaúdeRS-UFRGS, Federal University of Rio Grande Do Sul, Porto Alegre, RS, Brazil
  - Department of Oral Medicine, Otorhinolaryngology Service, Hospital de Clínicas de Porto Alegre (HCPA), Porto Alegre, RS, Brazil
12
Poly TN, Islam MM, Walther BA, Lin MC, Jack Li YC. Artificial intelligence in diabetic retinopathy: Bibliometric analysis. Comput Methods Programs Biomed 2023; 231:107358. [PMID: 36731310] [DOI: 10.1016/j.cmpb.2023.107358]
Abstract
BACKGROUND The use of artificial intelligence in diabetic retinopathy has become a popular research focus in the past decade. However, no scientometric report has provided a systematic overview of this scientific area. AIMS We used a bibliometric approach to identify and analyse the academic literature on artificial intelligence in diabetic retinopathy and to explore emerging research trends, key authors, co-authorship networks, institutions, countries, and journals. We further captured the diabetic retinopathy conditions and technologies commonly studied within this area. METHODS Web of Science was used to collect relevant articles on artificial intelligence in diabetic retinopathy published between January 1, 2012, and December 31, 2022. All retrieved titles were screened for eligibility, with one criterion being that they must be in English. All bibliographic information was extracted and used to perform a descriptive analysis. Bibliometrix (R tool) and VOSviewer (Leiden University) were used to construct and visualize the annual numbers of publications, journals, authors, countries, institutions, collaboration networks, keywords, and references. RESULTS In total, 931 articles met the criteria. The number of annual publications showed an increasing trend over the last ten years. Investigative Ophthalmology & Visual Science (58/931), IEEE Access (54/931), and Computers in Biology and Medicine (23/931) were the journals with the most publications. China (211/931), India (143/931), USA (133/931), and South Korea (44/931) were the most productive countries of origin. The National University of Singapore (40/931), Singapore Eye Research Institute (35/931), and Johns Hopkins University (34/931) were the most productive institutions. Ting D. (34/931), Wong T. (28/931), and Tan G. (17/931) were the most productive researchers.
CONCLUSION This study summarizes recent advances in artificial intelligence technology in diabetic retinopathy research and sheds light on emerging trends, sources, leading institutions, and hot topics through bibliometric analysis and network visualization. Although this field has already shown great potential in health care, our findings provide valuable clues to future research directions and clinical practice.
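The per-country and per-journal counts reported above (e.g. "China (211/931)") come down to tallying records by a field value. A toy sketch of that core bibliometric step, with invented records rather than the study's Web of Science export:

```python
# Hedged sketch of the basic bibliometric tally: count publications per
# field value. The records below are fabricated examples; the study used
# Web of Science exports analysed with Bibliometrix and VOSviewer.
from collections import Counter

records = [
    {"country": "China", "journal": "IEEE Access"},
    {"country": "China", "journal": "Invest Ophthalmol Vis Sci"},
    {"country": "India", "journal": "IEEE Access"},
]

by_country = Counter(r["country"] for r in records)
by_journal = Counter(r["journal"] for r in records)
print(by_country.most_common(1))   # -> [('China', 2)]
```

Co-authorship and keyword co-occurrence networks extend the same idea by counting pairs of values per record instead of single values.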
Affiliation(s)
- Tahmina Nasrin Poly
  - Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan
  - International Center for Health Information Technology (ICHIT), Taipei Medical University, Taipei 110, Taiwan
  - Research Center of Big Data and Meta-Analysis, Wan Fang Hospital, Taipei Medical University, Taipei 116, Taiwan
- Md Mohaimenul Islam
  - International Center for Health Information Technology (ICHIT), Taipei Medical University, Taipei 110, Taiwan
  - AESOP Technology, Songshan District, Taipei 105, Taiwan
- Bruno Andreas Walther
  - Alfred-Wegener-Institut Helmholtz-Zentrum für Polar- und Meeresforschung, Am Handelshafen 12, Bremerhaven D-27570, Germany
- Ming Chin Lin
  - Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan
  - Department of Neurosurgery, Shuang Ho Hospital, Taipei Medical University, New Taipei City 235041, Taiwan
  - Taipei Neuroscience Institute, Taipei Medical University, Taipei 110301, Taiwan
- Yu-Chuan Jack Li
  - Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan
  - International Center for Health Information Technology (ICHIT), Taipei Medical University, Taipei 110, Taiwan
  - Research Center of Big Data and Meta-Analysis, Wan Fang Hospital, Taipei Medical University, Taipei 116, Taiwan
  - AESOP Technology, Songshan District, Taipei 105, Taiwan
13
Li Z, Chen W. Solving data quality issues of fundus images in real-world settings by ophthalmic AI. Cell Rep Med 2023; 4:100951. [PMID: 36812885] [PMCID: PMC9975325] [DOI: 10.1016/j.xcrm.2023.100951]
Abstract
Liu et al.1 develop a deep-learning-based flow cytometry-like image quality classifier, DeepFundus, for the automated, high-throughput, and multidimensional classification of fundus image quality. DeepFundus significantly improves the real-world performance of established artificial intelligence diagnostics in detecting multiple retinopathies.
Affiliation(s)
- Zhongwen Li
  - Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, China
  - School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, China
- Wei Chen
  - Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, China
  - School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, China
14
Li Z, Guo X, Zhang J, Liu X, Chang R, He M. Using deep learning models to detect ophthalmic diseases: A comparative study. Front Med (Lausanne) 2023; 10:1115032. [PMID: 36936225] [PMCID: PMC10014566] [DOI: 10.3389/fmed.2023.1115032]
Abstract
Purpose The aim of this study was to prospectively quantify the level of agreement among a deep learning system, non-physician graders, and general ophthalmologists with different levels of clinical experience in detecting referable diabetic retinopathy, age-related macular degeneration, and glaucomatous optic neuropathy. Methods Deep learning systems for diabetic retinopathy, age-related macular degeneration, and glaucomatous optic neuropathy classification, with accuracy proven through internal and external validation, were established using 210,473 fundus photographs. Five trained non-physician graders and 47 general ophthalmologists from China were chosen randomly and included in the analysis. A test set of 300 fundus photographs was randomly identified from an independent dataset of 42,388 gradable images. The grading outcomes of five retinal and five glaucoma specialists were used as the reference standard, which was considered achieved when ≥50% of gradings were consistent among the included specialists. The area under the receiver operating characteristic curve (AUC) of each group relative to the reference standard was used to compare agreement for referable diabetic retinopathy, age-related macular degeneration, and glaucomatous optic neuropathy. Results The test set included 45 images (15.0%) with referable diabetic retinopathy, 46 (15.3%) with age-related macular degeneration, 46 (15.3%) with glaucomatous optic neuropathy, and 163 (55.4%) without these diseases. The AUCs for non-physician graders, ophthalmologists with 3-5 years of clinical practice, ophthalmologists with 5-10 years of clinical practice, ophthalmologists with >10 years of clinical practice, and the deep learning system for referable diabetic retinopathy were 0.984, 0.964, 0.965, 0.954, and 0.990, respectively (p = 0.415). The results for referable age-related macular degeneration were 0.912, 0.933, 0.946, 0.958, and 0.945, respectively (p = 0.145), and 0.675, 0.862, 0.894, 0.976, and 0.994, respectively, for referable glaucomatous optic neuropathy (p < 0.001). Conclusion The findings of this study suggest that the accuracy of this deep learning system is comparable to that of trained non-physician graders and general ophthalmologists for referable diabetic retinopathy and age-related macular degeneration, and better than that of trained non-physician graders for the detection of referable glaucomatous optic neuropathy.
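The reference-standard rule described above (a label is adopted when at least 50% of the specialist gradings agree) can be sketched in a few lines. This is an illustrative reconstruction of the stated rule, not the study's code:

```python
# Minimal sketch of a >= 50%-consensus reference standard from specialist
# gradings. The label strings below are invented for illustration.
from collections import Counter

def reference_standard(gradings):
    """gradings: list of per-specialist labels for one image. Returns the
    modal label if at least half of the specialists agree, else None
    (reference standard not achieved)."""
    label, count = Counter(gradings).most_common(1)[0]
    return label if count / len(gradings) >= 0.5 else None

print(reference_standard(["referable", "referable", "non-referable",
                          "referable", "referable"]))   # -> referable
```

Images for which the function returns `None` would need adjudication or exclusion before AUCs against the reference standard can be computed.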
Affiliation(s)
- Zhixi Li
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Xinxing Guo
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
  - Wilmer Eye Institute, Johns Hopkins University, Baltimore, MD, United States
- Jian Zhang
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Xing Liu
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Robert Chang
  - Department of Ophthalmology, Byers Eye Institute at Stanford University, Palo Alto, CA, United States
- Mingguang He
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
15
Kiburg KV, Turner A, He M. Telemedicine and delivery of ophthalmic care in rural and remote communities: Drawing from Australian experience. Clin Exp Ophthalmol 2022; 50:793-800. [PMID: 35975938] [DOI: 10.1111/ceo.14147]
Abstract
Rural and remote communities in Australia are characterised by small but widely dispersed populations. This is a major hurdle to accessing medical care services, with screening and treatment goals repeatedly being missed. Telemedicine in ophthalmology provides the opportunity to increase the availability of high-quality and timely healthcare within these communities. Recent years have also seen the introduction of artificial intelligence (AI) in ophthalmology, particularly in disease screening. AI will hopefully increase the number of appropriate referrals, reduce travel time for patients, and ensure timely triage given the low number of qualified optometrists and ophthalmologists. Telemedicine and AI have been introduced in a number of countries and have delivered substantial benefits compared with standard practice. This paper summarises current practices in telemedicine and AI and the future of these technologies in improving patient care in the field of ophthalmology.
Affiliation(s)
- Katerina V Kiburg
  - Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
  - Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Victoria, Australia
- Angus Turner
  - Lions Outback Vision, Lions Eye Institute, Nedlands, Western Australia, Australia
  - Centre for Ophthalmology and Visual Science, University of Western Australia, Nedlands, Western Australia, Australia
- Mingguang He
  - Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
  - Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Victoria, Australia
16
Sohn A, Fine HF, Mantopoulos D. How Artificial Intelligence Aspires to Change the Diagnostic and Treatment Paradigm in Eyes With Age-Related Macular Degeneration. Ophthalmic Surg Lasers Imaging Retina 2022; 53:474-480. [PMID: 36107621] [DOI: 10.3928/23258160-20220817-01]
17
Biswas S, Khan MIA, Hossain MT, Biswas A, Nakai T, Rohdin J. Which Color Channel Is Better for Diagnosing Retinal Diseases Automatically in Color Fundus Photographs? Life (Basel) 2022; 12:973. [PMID: 35888063] [PMCID: PMC9321111] [DOI: 10.3390/life12070973]
Abstract
Color fundus photographs are the most common type of image used for the automatic diagnosis of retinal diseases and abnormalities. Like all color photographs, these images contain information about three primary colors, i.e., red, green, and blue, in three separate color channels. This work aims to understand the impact of each channel on the automatic diagnosis of retinal diseases and abnormalities. To this end, existing works are surveyed extensively to explore which color channel is used most commonly for automatically detecting four leading causes of blindness and one retinal abnormality, as well as for segmenting three retinal landmarks. The survey makes clear that neural network-based systems typically use all channels together, whereas non-neural network-based systems most commonly use the green channel. However, no conclusion about the relative importance of the different channels can be drawn from previous works, so systematic experiments are conducted to analyse this. A well-known U-shaped deep neural network (U-Net) is used to investigate which color channel is best for segmenting one retinal abnormality and three retinal landmarks.
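The channel separation this survey is built around is a simple slicing operation: a color fundus photograph is a grid of (R, G, B) pixels, and a pipeline that "uses the green channel" keeps only the middle component. A toy sketch on a fabricated 2 x 2 "image" (real pipelines would slice a NumPy array such as `img[:, :, 1]`):

```python
# Toy sketch of per-channel extraction from an RGB image represented as
# nested lists of (R, G, B) tuples. All pixel values are made up.

image = [[(120, 80, 40), (130, 90, 50)],
         [(110, 70, 30), (140, 100, 60)]]   # rows of (R, G, B) pixels

def channel(img, idx):
    """Return a single-channel image: component idx of every pixel."""
    return [[px[idx] for px in row] for row in img]

red, green, blue = (channel(image, i) for i in range(3))
print(green)   # -> [[80, 90], [70, 100]]
```

The green channel is often preferred in classical (non-neural) retinal image analysis because vessel-to-background contrast tends to be highest there, which is the convention the survey reports.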
Affiliation(s)
- Sangeeta Biswas
  - Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Md. Iqbal Aziz Khan
  - Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Md. Tanvir Hossain
  - Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Angkan Biswas
  - CAPM Company Limited, Bonani, Dhaka 1213, Bangladesh
- Takayoshi Nakai
  - Faculty of Engineering, Shizuoka University, Hamamatsu 432-8561, Japan
- Johan Rohdin
  - Faculty of Information Technology, Brno University of Technology, 61200 Brno, Czech Republic
18
Li J, Wang L, Gao Y, Liang Q, Chen L, Sun X, Yang H, Zhao Z, Meng L, Xue S, Du Q, Zhang Z, Lv C, Xu H, Guo Z, Xie G, Xie L. Automated detection of myopic maculopathy from color fundus photographs using deep convolutional neural networks. Eye Vis (Lond) 2022; 9:13. [PMID: 35361278] [PMCID: PMC8973805] [DOI: 10.1186/s40662-022-00285-3]
Abstract
BACKGROUND Myopic maculopathy (MM) has become a major cause of visual impairment and blindness worldwide, especially in East Asian countries. Deep learning approaches such as deep convolutional neural networks (DCNNs) have been successfully applied to identify some common retinal diseases and show great potential for the intelligent analysis of MM. This study aimed to build a reliable approach for the automated detection of MM from retinal fundus images using DCNN models. METHODS A dual-stream DCNN (DCNN-DS) model, which perceives features from both original images and corresponding images processed by a color histogram distribution optimization method, was designed to classify no MM, tessellated fundus (TF), and pathologic myopia (PM). A total of 36,515 gradable images from four hospitals were used for DCNN model development, and 14,986 gradable images from two other hospitals for external testing. We also compared the performance of the DCNN-DS model and four ophthalmologists on 3000 randomly sampled fundus images. RESULTS In the two external testing datasets, the DCNN-DS model achieved sensitivities of 93.3% and 91.0%, specificities of 99.6% and 98.7%, and areas under the receiver operating characteristic curve (AUCs) of 0.998 and 0.994 for detecting PM, and sensitivities of 98.8% and 92.8%, specificities of 95.6% and 94.1%, and AUCs of 0.986 and 0.970 for detecting TF. In the sampled testing dataset, the sensitivities of the four ophthalmologists ranged from 88.3% to 95.8% and 81.1% to 89.1%, and their specificities from 95.9% to 99.2% and 77.8% to 97.3%, for detecting PM and TF, respectively; the DCNN-DS model achieved sensitivities of 90.8% and 97.9% and specificities of 99.1% and 94.0% for detecting PM and TF, respectively. CONCLUSIONS The proposed DCNN-DS approach demonstrated reliable performance, with high sensitivity, specificity, and AUC, in classifying different MM levels on fundus photographs sourced from clinics. It can help identify MM automatically in large myopic populations and shows great potential for real-life applications.
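The sensitivity and specificity figures quoted throughout this abstract are both derived from a 2 x 2 confusion matrix of predicted versus ground-truth labels. A small sketch with fabricated toy labels (not the study's data):

```python
# Sketch of sensitivity/specificity computation from binary predictions.
# Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).

def sens_spec(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

y_true = [1, 1, 1, 0, 0, 0, 0, 0]   # 1 = pathologic myopia present (toy)
y_pred = [1, 1, 0, 0, 0, 0, 0, 1]   # model output (toy)
sensitivity, specificity = sens_spec(y_true, y_pred)
print(round(sensitivity, 3), round(specificity, 3))   # -> 0.667 0.8
```

For the three-way no MM / TF / PM task, each class is scored one-versus-rest in this fashion.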
Affiliation(s)
- Jun Li
  - Qingdao Eye Hospital of Shandong First Medical University, 5 Yanerdao Road, Qingdao, 266071, China
  - State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Shandong Eye Institute, Shandong First Medical University & Shandong Academy of Medical Sciences, Qingdao, 266071, China
- Lilong Wang
  - Ping An Healthcare Technology, 9F Building B, PingAn IFC, No. 1-3 Xinyuan South Road, Beijing, 100027, China
- Yan Gao
  - Qingdao Eye Hospital of Shandong First Medical University, 5 Yanerdao Road, Qingdao, 266071, China
  - State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Shandong Eye Institute, Shandong First Medical University & Shandong Academy of Medical Sciences, Qingdao, 266071, China
- Qianqian Liang
  - Qingdao Eye Hospital of Shandong First Medical University, 5 Yanerdao Road, Qingdao, 266071, China
  - State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Shandong Eye Institute, Shandong First Medical University & Shandong Academy of Medical Sciences, Qingdao, 266071, China
- Lingzhi Chen
  - Ping An Healthcare Technology, 9F Building B, PingAn IFC, No. 1-3 Xinyuan South Road, Beijing, 100027, China
- Xiaolei Sun
  - State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Shandong Eye Institute, Shandong First Medical University & Shandong Academy of Medical Sciences, Qingdao, 266071, China
  - Shandong Eye Hospital of Shandong First Medical University, Jinan, 250021, China
- Lina Meng
  - Qilu Hospital of Shandong University (Qingdao), Qingdao, 266035, China
- Shuyue Xue
  - Qingdao Eye Hospital of Shandong First Medical University, 5 Yanerdao Road, Qingdao, 266071, China
  - State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Shandong Eye Institute, Shandong First Medical University & Shandong Academy of Medical Sciences, Qingdao, 266071, China
- Qing Du
  - Qingdao Eye Hospital of Shandong First Medical University, 5 Yanerdao Road, Qingdao, 266071, China
  - State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Shandong Eye Institute, Shandong First Medical University & Shandong Academy of Medical Sciences, Qingdao, 266071, China
- Zhichun Zhang
  - Qingdao Eye Hospital of Shandong First Medical University, 5 Yanerdao Road, Qingdao, 266071, China
  - State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Shandong Eye Institute, Shandong First Medical University & Shandong Academy of Medical Sciences, Qingdao, 266071, China
- Chuanfeng Lv
  - Ping An Healthcare Technology, 9F Building B, PingAn IFC, No. 1-3 Xinyuan South Road, Beijing, 100027, China
- Haifeng Xu
  - Qingdao Eye Hospital of Shandong First Medical University, 5 Yanerdao Road, Qingdao, 266071, China
  - State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Shandong Eye Institute, Shandong First Medical University & Shandong Academy of Medical Sciences, Qingdao, 266071, China
- Zhen Guo
  - Qingdao Eye Hospital of Shandong First Medical University, 5 Yanerdao Road, Qingdao, 266071, China
  - State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Shandong Eye Institute, Shandong First Medical University & Shandong Academy of Medical Sciences, Qingdao, 266071, China
- Guotong Xie
  - Ping An Healthcare Technology, 9F Building B, PingAn IFC, No. 1-3 Xinyuan South Road, Beijing, 100027, China
  - Ping An Healthcare and Technology Company Limited, Shanghai, 200030, China
  - Ping An International Smart City Technology Company Limited, Shenzhen, 518000, China
- Lixin Xie
  - Qingdao Eye Hospital of Shandong First Medical University, 5 Yanerdao Road, Qingdao, 266071, China
  - State Key Laboratory Cultivation Base, Shandong Provincial Key Laboratory of Ophthalmology, Shandong Eye Institute, Shandong First Medical University & Shandong Academy of Medical Sciences, Qingdao, 266071, China
19
Artificial intelligence to detect malignant eyelid tumors from photographic images. NPJ Digit Med 2022; 5:23. [PMID: 35236921] [PMCID: PMC8891262] [DOI: 10.1038/s41746-022-00571-3]
Abstract
Malignant eyelid tumors can invade adjacent structures and pose a threat to vision and even life. Early identification of malignant eyelid tumors is crucial to avoiding substantial morbidity and mortality. However, differentiating malignant eyelid tumors from benign ones can be challenging for primary care physicians and even some ophthalmologists. Here, based on 1,417 photographic images from 851 patients across three hospitals, we developed an artificial intelligence system using a faster region-based convolutional neural network and deep learning classification networks to automatically locate eyelid tumors and then distinguish between malignant and benign eyelid tumors. The system performed well in both internal and external test sets (AUCs ranged from 0.899 to 0.955). The performance of the system is comparable to that of a senior ophthalmologist, indicating that this system has the potential to be used at the screening stage for promoting the early detection and treatment of malignant eyelid tumors.
20
Matta S, Lamard M, Conze PH, Le Guilcher A, Ricquebourg V, Benyoussef AA, Massin P, Rottier JB, Cochener B, Quellec G. Automatic Screening for Ocular Anomalies Using Fundus Photographs. Optom Vis Sci 2022; 99:281-291. [PMID: 34897234] [DOI: 10.1097/opx.0000000000001845]
Abstract
SIGNIFICANCE Screening for ocular anomalies using fundus photography is key to preventing vision impairment and blindness. With a growing and aging population, automated algorithms that can triage fundus photographs and provide instant referral decisions are relevant for scaling up screening and addressing the shortage of ophthalmic expertise. PURPOSE This study aimed to develop a deep learning algorithm that detects any ocular anomaly in fundus photographs and to evaluate this algorithm for "normal versus anomalous" eye examination classification in the diabetic and general populations. METHODS The deep learning algorithm was developed and evaluated in two populations: the diabetic and general populations. The patient cohorts consisted of 37,129 diabetic patients from the OPHDIAT diabetic retinopathy screening network in Paris, France, and 7356 general patients from the OphtaMaine private screening network in Le Mans, France. Each dataset was divided into a development subset and a test subset of more than 4000 examinations each. For the ophthalmologist/algorithm comparison, a subset of 2014 examinations from the OphtaMaine test subset was labeled by a second ophthalmologist. First, the algorithm was trained on the OPHDIAT development subset. Then, it was fine-tuned on the OphtaMaine development subset. RESULTS On the OPHDIAT test subset, the area under the receiver operating characteristic curve for normal versus anomalous classification was 0.9592. On the OphtaMaine test subset, the area under the curve was 0.8347 before fine-tuning and 0.9108 after fine-tuning. On the ophthalmologist/algorithm comparison subset, the second ophthalmologist achieved a specificity of 0.8648 and a sensitivity of 0.6682. At the same specificity, the fine-tuned algorithm achieved a sensitivity of 0.8248. CONCLUSIONS The proposed algorithm compares favorably with human performance for normal versus anomalous eye examination classification using fundus photography. Artificial intelligence, which previously targeted a few retinal pathologies, can be used to screen for ocular anomalies comprehensively.
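The study above reports its headline results as areas under the receiver operating characteristic curve. As an illustrative sketch only (not the authors' implementation), the AUC can be computed directly from its rank interpretation: the probability that a randomly chosen anomalous examination scores higher than a randomly chosen normal one, with ties counting half.

```python
def roc_auc(labels, scores):
    # Rank-based AUC: fraction of (anomalous, normal) pairs in which
    # the anomalous exam receives the higher score (ties count 0.5).
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For example, `roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])` evaluates 3 of 4 pairs in favor of the anomalous class, giving 0.75.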
Affiliation(s)
- Pascale Massin
- Ophthalmology Department, Lariboisière Hospital, APHP, Paris, France

21
Kumar H, Goh KL, Guymer RH, Wu Z. A clinical perspective on the expanding role of artificial intelligence in age-related macular degeneration. Clin Exp Optom 2022; 105:674-679. [PMID: 35073498 DOI: 10.1080/08164622.2021.2022961] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022] Open
Abstract
In recent years, there has been intense development of artificial intelligence (AI) techniques, which have the potential to improve the clinical management of age-related macular degeneration (AMD) and facilitate the prevention of irreversible vision loss from this condition. Such AI techniques could be used as clinical decision support tools to: (i) improve the detection of AMD by community eye health practitioners, (ii) enhance risk stratification to enable personalised monitoring strategies for those with the early stages of AMD, and (iii) enable early detection of signs indicative of possible choroidal neovascularisation allowing triaging of patients requiring urgent review. This review discusses the latest developments in AI techniques that show promise for these tasks, as well as how they may help in the management of patients being treated for choroidal neovascularisation and in accelerating the discovery of new treatments in AMD.
Affiliation(s)
- Himeesh Kumar
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Victoria, Australia
- Kai Lyn Goh
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Victoria, Australia
- Robyn H Guymer
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Victoria, Australia
- Zhichao Wu
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, Victoria, Australia

22
Review of Machine Learning Applications Using Retinal Fundus Images. Diagnostics (Basel) 2022; 12:diagnostics12010134. [PMID: 35054301 PMCID: PMC8774893 DOI: 10.3390/diagnostics12010134] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2021] [Revised: 01/03/2022] [Accepted: 01/03/2022] [Indexed: 02/04/2023] Open
Abstract
Automating screening and diagnosis in the medical field saves time and reduces the chances of misdiagnosis while saving on labor and cost for physicians. With the feasibility and development of deep learning methods, machines are now able to interpret complex features in medical data, which leads to rapid advancements in automation. Such efforts have been made in ophthalmology to analyze retinal images and build frameworks based on analysis for the identification of retinopathy and the assessment of its severity. This paper reviews recent state-of-the-art works utilizing the color fundus image taken from one of the imaging modalities used in ophthalmology. Specifically, the deep learning methods of automated screening and diagnosis for diabetic retinopathy (DR), age-related macular degeneration (AMD), and glaucoma are investigated. In addition, the machine learning techniques applied to the retinal vasculature extraction from the fundus image are covered. The challenges in developing these systems are also discussed.
23
Gu Y, Wang X, Pan J, Yong Z, Guo S, Pan T, Jiao Y, Zhou Z. Effective methods of diabetic retinopathy detection based on deep convolutional neural networks. Int J Comput Assist Radiol Surg 2021; 16:2177-2187. [PMID: 34606059 DOI: 10.1007/s11548-021-02498-8] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2021] [Accepted: 09/13/2021] [Indexed: 11/28/2022]
Abstract
PURPOSE Diabetic retinopathy (DR) has become the leading cause of blindness worldwide. In clinical practice, the detection of DR often takes considerable time and effort for ophthalmologists. It is necessary to develop an automatic assisted-diagnosis method based on medical image analysis techniques. METHODS Firstly, we design a feature-enhanced attention module to capture focal lesions and regions. Secondly, we propose a stage sampling strategy to address data imbalance in the datasets and prevent the CNN from ignoring the lesion features of samples that account for only a small proportion of the data. Finally, we treat DR detection as a regression task to preserve the gradual-change characteristics of lesions and output the final classification results through an optimization method on the validation set. RESULTS Extensive experiments are conducted on open-source datasets. Our methods achieve a quadratic weighted kappa of 0.851, which outperforms the first-place result of the Kaggle DR detection competition on the EyePACS dataset, and achieve an accuracy of 0.914 in the referable/non-referable task and 0.913 in the normal/abnormal task on the Messidor dataset. CONCLUSION In this paper, we propose three novel automatic DR detection methods based on deep convolutional neural networks. The results illustrate that our methods obtain performance comparable to previous methods and generate visualization images with potential lesions for doctors and patients.
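The quadratic weighted kappa cited in the abstract above penalizes disagreements between predicted and true DR grades more heavily the farther apart they are on the grading scale. A minimal pure-Python sketch of the metric (illustrative only, not the authors' code):

```python
def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    # Observed confusion matrix between the two sets of grades
    O = [[0.0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        O[t][p] += 1
    # Quadratic penalty weights: grades k apart cost k^2 (normalized)
    W = [[(i - j) ** 2 / (n_classes - 1) ** 2 for j in range(n_classes)]
         for i in range(n_classes)]
    n = len(y_true)
    row = [sum(O[i]) for i in range(n_classes)]  # marginals of y_true
    col = [sum(O[i][j] for i in range(n_classes)) for j in range(n_classes)]
    # kappa = 1 - (weighted observed disagreement) / (weighted chance disagreement)
    num = sum(W[i][j] * O[i][j] for i in range(n_classes) for j in range(n_classes))
    den = sum(W[i][j] * row[i] * col[j] / n
              for i in range(n_classes) for j in range(n_classes))
    return 1.0 - num / den
```

Perfect agreement yields 1.0, chance-level agreement yields 0, and systematic disagreement goes negative.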
Affiliation(s)
- Yunchao Gu
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, 100191, China; Hangzhou Innovation Research Institute, Beihang University, Hangzhou, 100191, China; Beijing Advanced Innovation Center for Big Data and Brain Computing (BDBC), Beihang University, Beijing, 100191, China
- Xinliang Wang
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, 100191, China
- Junjun Pan
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, 100191, China; Peng Cheng Laboratory, Shenzhen, 518000, China
- Zhifan Yong
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, 100191, China
- Shihui Guo
- School of Informatics, Xiamen University, Xiamen, 361005, China
- Tianze Pan
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, 100191, China
- Yonghong Jiao
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, 100730, China
- Zhong Zhou
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, 100191, China

24
Nuzzi R, Boscia G, Marolo P, Ricardi F. The Impact of Artificial Intelligence and Deep Learning in Eye Diseases: A Review. Front Med (Lausanne) 2021; 8:710329. [PMID: 34527682 PMCID: PMC8437147 DOI: 10.3389/fmed.2021.710329] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2021] [Accepted: 07/23/2021] [Indexed: 12/21/2022] Open
Abstract
Artificial intelligence (AI) is a subset of computer science dealing with the development and training of algorithms that try to replicate human intelligence. We report a clinical overview of the basic principles of AI that are fundamental to appreciating its application to ophthalmology practice. Here, we review the most common eye diseases, focusing on some of the potential challenges and limitations emerging with the development and application of this new technology to ophthalmology.
Affiliation(s)
- Raffaele Nuzzi
- Ophthalmology Unit, A.O.U. City of Health and Science of Turin, Department of Surgical Sciences, University of Turin, Turin, Italy

25
Takhchidi K, Gliznitsa PV, Svetozarskiy SN, Bursov AI, Shusterzon KA. Labelling of data on fundus color pictures used to train a deep learning model enhances its macular pathology recognition capabilities. BULLETIN OF RUSSIAN STATE MEDICAL UNIVERSITY 2021. [DOI: 10.24075/brsmu.2021.040] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
Abstract
Retinal diseases remain one of the leading causes of visual impairment in the world. The development of automated diagnostic methods can improve the efficiency and availability of macular pathology mass screening programs. The objective of this work was to develop and validate deep learning algorithms detecting macular pathology (age-related macular degeneration, AMD) based on the analysis of color fundus photographs with and without data labeling. We used 1200 color fundus photographs from local databases, including 575 retinal images of AMD patients and 625 images of the retina of healthy people. The deep learning algorithm was deployed in the Faster RCNN neural network with ResNet50 for convolution. The process employed the transfer learning method. As a result, in the absence of labeling, the accuracy of the model was unsatisfactory (79%) because the neural network selected the areas of attention incorrectly. Data labeling improved the efficacy of the developed method: with the test dataset, the model determined the areas with informative features adequately, and the classification accuracy reached 96.6%. Thus, image data labeling significantly improves the accuracy of retinal color image recognition by a neural network and enables the development and training of effective models with limited datasets.
Affiliation(s)
- KhP Takhchidi
- Pirogov Russian National Research Medical University, Moscow, Russia
- PV Gliznitsa
- OOO Innovatsioonniye Tekhnologii (Innovative Technologies, LLC), Nizhny Novgorod, Russia
- SN Svetozarskiy
- Volga District Medical Center under the Federal Medical-Biological Agency, Nizhny Novgorod, Russia
- AI Bursov
- Ivannikov Institute for System Programming of RAS, Moscow, Russia
- KA Shusterzon
- L.A. Melentiev Energy Systems Institute, Irkutsk, Russia

26
Smith JR. Having impact. Clin Exp Ophthalmol 2021; 49:537-539. [PMID: 34351694 DOI: 10.1111/ceo.13968] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Affiliation(s)
- Justine R Smith
- Flinders College of Medicine & Public Health, Flinders University, Adelaide, South Australia, Australia

27
Real-world artificial intelligence-based opportunistic screening for diabetic retinopathy in endocrinology and indigenous healthcare settings in Australia. Sci Rep 2021; 11:15808. [PMID: 34349130 PMCID: PMC8339059 DOI: 10.1038/s41598-021-94178-5] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2021] [Accepted: 07/05/2021] [Indexed: 12/27/2022] Open
Abstract
This study investigated the diagnostic performance, feasibility, and end-user experiences of an artificial intelligence (AI)-assisted diabetic retinopathy (DR) screening model in real-world Australian healthcare settings. The study consisted of two components: (1) DR screening of patients using an AI-assisted system and (2) in-depth interviews with health professionals involved in implementing screening. Participants with type 1 or type 2 diabetes mellitus attending two endocrinology outpatient and three Aboriginal Medical Services clinics between March 2018 and May 2019 were invited to a prospective observational study. A single 45-degree (macula centred), non-stereoscopic, colour retinal image was taken of each eye of each participant and was instantly screened for referable DR using a custom offline automated AI system. A total of 236 participants, including 174 from endocrinology and 62 from Aboriginal Medical Services clinics, provided informed consent and 203 (86.0%) were included in the analysis. A total of 33 consenting participants (14%) were excluded from the primary analysis due to ungradable or missing images caused by small pupils (n = 21, 63.6%), cataract (n = 7, 21.2%), poor fixation (n = 2, 6.1%), technical issues (n = 2, 6.1%), and corneal scarring (n = 1, 3%). The area under the curve, sensitivity, and specificity of the AI system for referable DR were 0.92, 96.9%, and 87.7%, respectively. There were 51 disagreements between the reference standard and index test diagnoses, including 29 which were manually graded as ungradable, 21 false positives, and one false negative. A total of 28 participants (11.9%) were referred for follow-up based on new ocular findings, among whom 15 (53.6%) were able to be contacted and 9 (60%) adhered to referral. Of 207 participants who completed a satisfaction questionnaire, 93.7% indicated they were either satisfied or extremely satisfied, and 93.2% indicated they would be likely or extremely likely to use this service again. Clinical staff involved in screening most frequently noted that the AI system was easy to use and that the real-time diagnostic report was useful. Our study indicates that the AI-assisted DR screening model is accurate and well-accepted by patients and clinicians in endocrinology and indigenous healthcare settings. Future deployments of AI-assisted screening models would require consideration of downstream referral pathways.
28
Cai C, Tafti AP, Ngufor C, Zhang P, Xiao P, Dai M, Liu H, Noseworthy P, Chen M, Friedman PA, Cha YM. Using ensemble of ensemble machine learning methods to predict outcomes of cardiac resynchronization. J Cardiovasc Electrophysiol 2021; 32:2504-2514. [PMID: 34260141 DOI: 10.1111/jce.15171] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/24/2021] [Revised: 05/08/2021] [Accepted: 06/14/2021] [Indexed: 11/29/2022]
Abstract
INTRODUCTION The efficacy of cardiac resynchronization therapy (CRT) has been widely studied in the medical literature; however, about 30% of candidates fail to respond to this treatment strategy. Smart computational approaches based on clinical data can help expose hidden patterns useful for identifying CRT responders. METHODS We retrospectively analyzed the electronic health records of 1664 patients who underwent CRT procedures from January 1, 2002 to December 31, 2017. An ensemble-of-ensembles (EoE) machine learning (ML) system composed of a supervised and an unsupervised ML layer was developed to generate a prediction model for CRT response. RESULTS We compared the performance of EoE against traditional ML methods and a state-of-the-art convolutional neural network (CNN) model trained on raw electrocardiographic (ECG) waveforms. We observed that the models exhibited improvement in performance as more features were incrementally used for training. Using the most comprehensive set of predictors, the performance of the EoE model in terms of the area under the receiver operating characteristic curve and F1-score was 0.76 and 0.73, respectively. Direct application of the CNN model to the raw ECG waveforms did not generate promising results. CONCLUSION The proposed CRT risk calculator identifies which heart failure (HF) patients are likely to respond to CRT significantly better than clinical guidelines and traditional ML methods, suggesting that the tool can enhance care management of HF patients by helping to identify high-risk patients.
Affiliation(s)
- Cheng Cai
- Department of Cardiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China; Department of Cardiovascular Medicine, Mayo Clinic, Rochester, Minnesota, USA
- Ahmad P Tafti
- College of Science, Technology, and Health, University of Southern Maine, Portland, Maine, USA
- Che Ngufor
- Department of Artificial Intelligence and Informatics, Mayo Clinic, Rochester, Minnesota, USA
- Pei Zhang
- Department of Cardiovascular Medicine, Mayo Clinic, Rochester, Minnesota, USA; Department of Cardiology, Sir Run Run Shaw Hospital, School of Medicine Zhejiang University, Hangzhou, China
- Peilin Xiao
- Department of Cardiovascular Medicine, Mayo Clinic, Rochester, Minnesota, USA; Department of Cardiology, The Second Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Mingyan Dai
- Department of Cardiovascular Medicine, Mayo Clinic, Rochester, Minnesota, USA; Department of Cardiology, Renmin Hospital of Wuhan University; Cardiovascular Research Institute, Wuhan University, Hubei Key Laboratory of Cardiology, Wuhan, China
- Hongfang Liu
- Department of Artificial Intelligence and Informatics, Mayo Clinic, Rochester, Minnesota, USA
- Peter Noseworthy
- Department of Cardiovascular Medicine, Mayo Clinic, Rochester, Minnesota, USA
- Minglong Chen
- Department of Cardiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Paul A Friedman
- Department of Cardiovascular Medicine, Mayo Clinic, Rochester, Minnesota, USA
- Yong-Mei Cha
- Department of Cardiovascular Medicine, Mayo Clinic, Rochester, Minnesota, USA

29
Campbell JP, Mathenge C, Cherwek H, Balaskas K, Pasquale LR, Keane PA, Chiang MF. Artificial Intelligence to Reduce Ocular Health Disparities: Moving From Concept to Implementation. Transl Vis Sci Technol 2021; 10:19. [PMID: 34003953 PMCID: PMC7991919 DOI: 10.1167/tvst.10.3.19] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/27/2022] Open
Affiliation(s)
- John P Campbell
- Department of Ophthalmology, Oregon Health & Science University, Portland, OR, USA
- Ciku Mathenge
- Rwanda International Institute of Ophthalmology, Kigali, Rwanda
- Konstantinos Balaskas
- Institute of Ophthalmology, University College London, London, UK; Medical Retina Service, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Louis R Pasquale
- Eye and Vision Research Institute, New York Eye and Ear Infirmary at Mount Sinai, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Pearse A Keane
- Institute of Ophthalmology, University College London, London, UK; Medical Retina Service, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Michael F Chiang
- Department of Ophthalmology, Oregon Health & Science University, Portland, OR, USA; National Eye Institute, National Institutes of Health, Bethesda, MD

30
Chan EJJ, Najjar RP, Tang Z, Milea D. Deep Learning for Retinal Image Quality Assessment of Optic Nerve Head Disorders. Asia Pac J Ophthalmol (Phila) 2021; 10:282-288. [PMID: 34383719 DOI: 10.1097/apo.0000000000000404] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022] Open
Abstract
Deep learning (DL)-based retinal image quality assessment (RIQA) algorithms have been gaining popularity as a solution to reduce the frequency of diagnostically unusable images. Most existing RIQA tools target retinal conditions, with a dearth of studies looking into RIQA models for optic nerve head (ONH) disorders. The recent success of DL systems in detecting ONH abnormalities on color fundus images prompts the development of tailored RIQA algorithms for these specific conditions. In this review, we discuss recent progress in DL-based RIQA models in general and the need for RIQA models tailored for ONH disorders. Finally, we propose suggestions for such models in the future.
Affiliation(s)
- Raymond P Najjar
- Duke-NUS School of Medicine, Singapore
- Visual Neuroscience Group, Singapore Eye Research Institute, Singapore
- Zhiqun Tang
- Visual Neuroscience Group, Singapore Eye Research Institute, Singapore
- Dan Milea
- Duke-NUS School of Medicine, Singapore
- Visual Neuroscience Group, Singapore Eye Research Institute, Singapore
- Ophthalmology Department, Singapore National Eye Centre, Singapore
- Rigshospitalet, Copenhagen University, Denmark

31
Li JPO, Liu H, Ting DSJ, Jeon S, Chan RVP, Kim JE, Sim DA, Thomas PBM, Lin H, Chen Y, Sakomoto T, Loewenstein A, Lam DSC, Pasquale LR, Wong TY, Lam LA, Ting DSW. Digital technology, tele-medicine and artificial intelligence in ophthalmology: A global perspective. Prog Retin Eye Res 2021; 82:100900. [PMID: 32898686 PMCID: PMC7474840 DOI: 10.1016/j.preteyeres.2020.100900] [Citation(s) in RCA: 201] [Impact Index Per Article: 67.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2020] [Revised: 08/25/2020] [Accepted: 08/31/2020] [Indexed: 12/29/2022]
Abstract
The simultaneous maturation of multiple digital and telecommunications technologies in 2020 has created an unprecedented opportunity for ophthalmology to adapt to new models of care using telehealth supported by digital innovations. These digital innovations include artificial intelligence (AI), 5th generation (5G) telecommunication networks, and the Internet of Things (IoT), creating an interdependent ecosystem offering opportunities to develop new models of eye care addressing the challenges of COVID-19 and beyond. Ophthalmology has thrived in some of these areas partly due to its many image-based investigations. Telehealth and AI provide synchronous solutions to challenges facing ophthalmologists and healthcare providers worldwide. This article reviews how countries across the world have utilised these digital innovations to tackle diabetic retinopathy, retinopathy of prematurity, age-related macular degeneration, glaucoma, refractive error correction, cataract, and other anterior segment disorders. The review summarises the digital strategies that countries are developing and discusses technologies that may increasingly enter the clinical workflow and processes of ophthalmologists. Furthermore, as countries around the world have initiated a series of escalating containment and mitigation measures during the COVID-19 pandemic, the delivery of eye care services globally has been significantly impacted. As ophthalmic services adapt and form a "new normal", the rapid adoption of telehealth and digital innovations during the pandemic is also discussed. Finally, challenges for validation and clinical implementation are considered, as well as recommendations on future directions.
Affiliation(s)
- Ji-Peng Olivia Li
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Hanruo Liu
- Beijing Tongren Hospital; Capital Medical University; Beijing Institute of Ophthalmology; Beijing, China
- Darren S J Ting
- Academic Ophthalmology, University of Nottingham, United Kingdom
- Sohee Jeon
- Keye Eye Center, Seoul, Republic of Korea
- Judy E Kim
- Medical College of Wisconsin, Milwaukee, WI, USA
- Dawn A Sim
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Peter B M Thomas
- NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
- Haotian Lin
- Zhongshan Ophthalmic Center, State Key Laboratory of Ophthalmology, Guangzhou, China
- Youxin Chen
- Peking Union Medical College Hospital, Beijing, China
- Taiji Sakomoto
- Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Japan
- Dennis S C Lam
- C-MER Dennis Lam Eye Center, C-Mer International Eye Care Group Limited, Hong Kong, Hong Kong; International Eye Research Institute of the Chinese University of Hong Kong (Shenzhen), Shenzhen, China
- Louis R Pasquale
- Department of Ophthalmology, Icahn School of Medicine at Mount Sinai, New York, USA
- Tien Y Wong
- Singapore National Eye Center, Duke-NUS Medical School Singapore, Singapore
- Linda A Lam
- USC Roski Eye Institute, University of Southern California (USC) Keck School of Medicine, Los Angeles, CA, USA
- Daniel S W Ting
- Singapore National Eye Center, Duke-NUS Medical School Singapore, Singapore

32
Dong L, Yang Q, Zhang RH, Wei WB. Artificial intelligence for the detection of age-related macular degeneration in color fundus photographs: A systematic review and meta-analysis. EClinicalMedicine 2021; 35:100875. [PMID: 34027334 PMCID: PMC8129891 DOI: 10.1016/j.eclinm.2021.100875] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/22/2021] [Revised: 04/14/2021] [Accepted: 04/15/2021] [Indexed: 02/06/2023] Open
Abstract
BACKGROUND Age-related macular degeneration (AMD) is one of the leading causes of vision loss in the elderly population. The application of artificial intelligence (AI) provides convenience for the diagnosis of AMD. This systematic review and meta-analysis aimed to quantify the performance of AI in detecting AMD in fundus photographs. METHODS We searched PubMed, Embase, Web of Science and the Cochrane Library before December 31st, 2020 for studies reporting the application of AI in detecting AMD in color fundus photographs. Then, we pooled the data for analysis. PROSPERO registration number: CRD42020197532. FINDINGS 19 studies were finally selected for systematic review and 13 of them were included in the quantitative synthesis. All studies adopted human graders as reference standard. The pooled area under the receiver operating characteristic curve (AUROC) was 0.983 (95% confidence interval (CI):0.979-0.987). The pooled sensitivity, specificity, and diagnostic odds ratio (DOR) were 0.88 (95% CI:0.88-0.88), 0.90 (95% CI:0.90-0.91), and 275.27 (95% CI:158.43-478.27), respectively. Threshold analysis was performed and a potential threshold effect was detected among the studies (Spearman correlation coefficient: -0.600, P = 0.030), which was the main cause for the heterogeneity. For studies applying convolutional neural networks in the Age-Related Eye Disease Study database, the pooled AUROC, sensitivity, specificity, and DOR were 0.983 (95% CI:0.978-0.988), 0.88 (95% CI:0.88-0.88), 0.91 (95% CI:0.91-0.91), and 273.14 (95% CI:130.79-570.43), respectively. INTERPRETATION Our data indicated that AI was able to detect AMD in color fundus photographs. The application of AI-based automatic tools is beneficial for the diagnosis of AMD. FUNDING Capital Health Research and Development of Special (2020-1-2052).
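The meta-analysis above pools a diagnostic odds ratio (DOR) alongside sensitivity and specificity. As a hypothetical illustration of the underlying quantity (not the paper's pooling procedure), the DOR of a single 2x2 study can be computed from its counts:

```python
def diagnostic_odds_ratio(tp, fp, fn, tn):
    # DOR = (TP/FN) / (FP/TN): odds of a positive test result in
    # diseased vs. non-diseased subjects; 1.0 means an uninformative test.
    return (tp / fn) / (fp / tn)
```

For example, a study with 88 true positives, 12 false negatives, 10 false positives, and 90 true negatives (i.e., sensitivity 0.88 and specificity 0.90) has a DOR of 66; pooled meta-analytic values such as the 275.27 reported above arise from combining many such study-level ratios under a random-effects model.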
33
Li Z, Jiang J, Chen K, Zheng Q, Liu X, Weng H, Wu S, Chen W. Development of a deep learning-based image quality control system to detect and filter out ineligible slit-lamp images: A multicenter study. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 203:106048. [PMID: 33765481 DOI: 10.1016/j.cmpb.2021.106048] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/15/2020] [Accepted: 03/08/2021] [Indexed: 06/12/2023]
Abstract
BACKGROUND AND OBJECTIVE Previous studies developed artificial intelligence (AI) diagnostic systems using only eligible slit-lamp images for detecting corneal diseases. However, images of ineligible quality (including poor-field, defocused, and poor-location images), which are inevitable in the real world, can cause diagnostic information loss and thus affect downstream AI-based image analysis. Manual evaluation of the eligibility of slit-lamp images often requires an ophthalmologist, and this procedure can be time-consuming and labor-intensive when applied on a large scale. Here, we aimed to develop a deep learning-based image quality control system (DLIQCS) to automatically detect and filter out ineligible slit-lamp images (poor-field, defocused, and poor-location images). METHODS We developed and externally evaluated the DLIQCS based on 48,530 slit-lamp images (19,890 individuals) that were derived from 4 independent institutions using different types of digital slit-lamp cameras. To find the best deep learning model for the DLIQCS, we used 3 algorithms (AlexNet, DenseNet121, and InceptionV3) to train models. The area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and accuracy were leveraged to assess the performance of each algorithm for the classification of poor-field, defocused, poor-location, and eligible images. RESULTS In an internal test dataset, the best algorithm, DenseNet121, had AUCs of 0.999, 1.000, 1.000, and 1.000 in the detection of poor-field, defocused, poor-location, and eligible images, respectively. In external test datasets, the AUCs of DenseNet121 for identifying poor-field, defocused, poor-location, and eligible images ranged from 0.997 to 0.997, 0.983 to 0.995, 0.995 to 0.998, and 0.999 to 0.999, respectively. CONCLUSIONS Our DLIQCS can accurately detect poor-field, defocused, poor-location, and eligible slit-lamp images in an automated fashion. This system may serve as a prescreening tool to filter out ineligible images and ensure that only eligible images are transferred to subsequent AI diagnostic systems.
Affiliation(s)
- Zhongwen Li
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- Jiewei Jiang
- School of Electronics Engineering, Xi'an University of Posts and Telecommunications, Xi'an, 710121, China
- Kuan Chen
- School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- Qinxiang Zheng
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- Xiaotian Liu
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China
- Hongfei Weng
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China
- Shanjun Wu
- School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- Wei Chen
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China

34
Aggarwal R, Sounderajah V, Martin G, Ting DSW, Karthikesalingam A, King D, Ashrafian H, Darzi A. Diagnostic accuracy of deep learning in medical imaging: a systematic review and meta-analysis. NPJ Digit Med 2021; 4:65. [PMID: 33828217 PMCID: PMC8027892 DOI: 10.1038/s41746-021-00438-z] [Citation(s) in RCA: 229] [Impact Index Per Article: 76.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2020] [Accepted: 02/25/2021] [Indexed: 12/19/2022] Open
Abstract
Deep learning (DL) has the potential to transform medical diagnostics. However, the diagnostic accuracy of DL is uncertain. Our aim was to evaluate the diagnostic accuracy of DL algorithms to identify pathology in medical imaging. Searches were conducted in Medline and EMBASE up to January 2020. We identified 11,921 studies, of which 503 were included in the systematic review. Eighty-two studies in ophthalmology, 82 in breast disease and 115 in respiratory disease were included for meta-analysis. Two hundred twenty-four studies in other specialities were included for qualitative review. Peer-reviewed studies that reported on the diagnostic accuracy of DL algorithms to identify pathology using medical imaging were included. Primary outcomes were measures of diagnostic accuracy, study design and reporting standards in the literature. Estimates were pooled using random-effects meta-analysis. In ophthalmology, AUCs ranged between 0.933 and 1 for diagnosing diabetic retinopathy, age-related macular degeneration and glaucoma on retinal fundus photographs and optical coherence tomography. In respiratory imaging, AUCs ranged between 0.864 and 0.937 for diagnosing lung nodules or lung cancer on chest X-ray or CT scan. For breast imaging, AUCs ranged between 0.868 and 0.909 for diagnosing breast cancer on mammogram, ultrasound, MRI and digital breast tomosynthesis. Heterogeneity was high between studies and extensive variation in methodology, terminology and outcome measures was noted. This can lead to an overestimation of the diagnostic accuracy of DL algorithms on medical imaging. There is an immediate need for the development of artificial intelligence-specific EQUATOR guidelines, particularly STARD, in order to provide guidance around key issues in this field.
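The pooling described here ("estimates were pooled using random-effects meta-analysis") is most often done with the DerSimonian-Laird estimator; a self-contained sketch under that assumption (not the review's actual statistical code):

```python
import math

def dersimonian_laird(estimates, variances):
    """Pool per-study estimates with a DerSimonian-Laird random-effects model.
    Returns the pooled estimate, its standard error, and the between-study
    variance tau^2 estimated from Cochran's Q."""
    w = [1.0 / v for v in variances]                       # inverse-variance weights
    fixed = sum(wi * e for wi, e in zip(w, estimates)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, estimates))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(estimates) - 1)) / c)        # truncate at zero
    w_re = [1.0 / (v + tau2) for v in variances]           # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_re, estimates)) / sum(w_re)
    return pooled, math.sqrt(1.0 / sum(w_re)), tau2
```

With homogeneous studies tau^2 shrinks to zero and the result collapses to a fixed-effect pool; the high between-study heterogeneity the review reports would instead inflate tau^2 and widen the pooled standard error.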
Affiliation(s)
- Ravi Aggarwal
- Institute of Global Health Innovation, Imperial College London, London, UK
- Guy Martin
- Institute of Global Health Innovation, Imperial College London, London, UK
- Daniel S W Ting
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore
- Dominic King
- Institute of Global Health Innovation, Imperial College London, London, UK
- Hutan Ashrafian
- Institute of Global Health Innovation, Imperial College London, London, UK
- Ara Darzi
- Institute of Global Health Innovation, Imperial College London, London, UK
35
Gong D, Kras A, Miller JB. Application of Deep Learning for Diagnosing, Classifying, and Treating Age-Related Macular Degeneration. Semin Ophthalmol 2021; 36:198-204. [PMID: 33617390 DOI: 10.1080/08820538.2021.1889617] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 01/20/2023]
Abstract
Age-related macular degeneration (AMD) affects nearly 200 million people and is the third leading cause of irreversible vision loss worldwide. Deep learning, a branch of artificial intelligence that can learn image recognition based on pre-existing datasets, creates an opportunity for more accurate and efficient diagnosis, classification, and treatment of AMD on both individual and population levels. Current algorithms based on fundus photography and optical coherence tomography imaging have already achieved diagnostic accuracy levels comparable to human graders. This accuracy can be further increased when deep learning algorithms are simultaneously applied to multiple diagnostic imaging modalities. Combined with advances in telemedicine and imaging technology, deep learning can enable larger populations of patients to be screened than would otherwise be possible and allow ophthalmologists to focus on seeing those patients who are in need of treatment, thus reducing the number of patients with significant visual impairment from AMD.
Affiliation(s)
- Dan Gong
- Department of Ophthalmology, Retina Service, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA, USA
- Ashley Kras
- Harvard Retinal Imaging Lab, Massachusetts Eye and Ear Infirmary, Boston, MA
- John B Miller
- Department of Ophthalmology, Retina Service, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA, USA; Harvard Retinal Imaging Lab, Massachusetts Eye and Ear Infirmary, Boston, MA
36
Tham YC, Anees A, Zhang L, Goh JHL, Rim TH, Nusinovici S, Hamzah H, Chee ML, Tjio G, Li S, Xu X, Goh R, Tang F, Cheung CYL, Wang YX, Nangia V, Jonas JB, Gopinath B, Mitchell P, Husain R, Lamoureux E, Sabanayagam C, Wang JJ, Aung T, Liu Y, Wong TY, Cheng CY. Referral for disease-related visual impairment using retinal photograph-based deep learning: a proof-of-concept, model development study. LANCET DIGITAL HEALTH 2021; 3:e29-e40. [DOI: 10.1016/s2589-7500(20)30271-5] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/24/2020] [Revised: 10/14/2020] [Accepted: 10/24/2020] [Indexed: 11/26/2022]
37
Li Z, Jiang J, Zhou H, Zheng Q, Liu X, Chen K, Weng H, Chen W. Development of a deep learning-based image eligibility verification system for detecting and filtering out ineligible fundus images: A multicentre study. Int J Med Inform 2020; 147:104363. [PMID: 33388480 DOI: 10.1016/j.ijmedinf.2020.104363] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2020] [Revised: 12/07/2020] [Accepted: 12/08/2020] [Indexed: 12/26/2022]
Abstract
BACKGROUND Recent advances in artificial intelligence (AI) have shown great promise in detecting some diseases based on medical images. Most studies developed AI diagnostic systems only using eligible images. However, in real-world settings, ineligible images (including poor-quality and poor-location images) that can compromise downstream analysis are inevitable, leading to uncertainty about the performance of these AI systems. This study aims to develop a deep learning-based image eligibility verification system (DLIEVS) for detecting and filtering out ineligible fundus images. METHODS A total of 18,031 fundus images (9,188 subjects) collected from 4 clinical centres were used to develop and evaluate the DLIEVS for detecting eligible, poor-location, and poor-quality fundus images. Four deep learning algorithms (AlexNet, DenseNet121, Inception V3, and ResNet50) were used to train models to obtain the best model for the DLIEVS. The performance of the DLIEVS was evaluated using the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity, as compared with a reference standard determined by retina experts. RESULTS In the internal test dataset, the best algorithm (DenseNet121) achieved AUCs of 1.000, 0.999, and 1.000 for the classification of eligible, poor-location, and poor-quality images, respectively. In the external test datasets, the AUCs of the best algorithm (DenseNet121) for detecting eligible, poor-location, and poor-quality images ranged from 0.999-1.000, 0.997-1.000, and 0.997-0.999, respectively. CONCLUSIONS Our DLIEVS can accurately discriminate poor-quality and poor-location images from eligible images. This system has the potential to serve as a pre-screening technique to filter out ineligible images obtained from real-world settings, ensuring that only eligible images are used in subsequent image-based AI diagnostic analyses.
Affiliation(s)
- Zhongwen Li
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- Jiewei Jiang
- School of Electronics Engineering, Xi'an University of Posts and Telecommunications, Xi'an, 710121, China
- Heding Zhou
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China
- Qinxiang Zheng
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- Xiaotian Liu
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China
- Kuan Chen
- School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- Hongfei Weng
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China
- Wei Chen
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, 315000, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
38
Gertig D, Smith JR. Screening and avoidance of blindness: One cannot exist without the other. Clin Exp Ophthalmol 2020; 48:1133-1135. [PMID: 33191539 DOI: 10.1111/ceo.13881] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Affiliation(s)
- Demi Gertig
- Flinders University, Adelaide, South Australia, Australia
39
Li Z, Guo C, Nie D, Lin D, Zhu Y, Chen C, Zhao L, Wu X, Dongye M, Xu F, Jin C, Zhang P, Han Y, Yan P, Lin H. Deep learning from "passive feeding" to "selective eating" of real-world data. NPJ Digit Med 2020; 3:143. [PMID: 33145439 PMCID: PMC7603327 DOI: 10.1038/s41746-020-00350-y] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2020] [Accepted: 09/24/2020] [Indexed: 12/23/2022] Open
Abstract
Artificial intelligence (AI) based on deep learning has shown excellent diagnostic performance in detecting various diseases with good-quality clinical images. Recently, AI diagnostic systems developed from ultra-widefield fundus (UWF) images have become popular standard-of-care tools in screening for ocular fundus diseases. However, in real-world settings, these systems must base their diagnoses on images with uncontrolled quality ("passive feeding"), leading to uncertainty about their performance. Here, using 40,562 UWF images, we develop a deep learning-based image filtering system (DLIFS) for detecting and filtering out poor-quality images in an automated fashion such that only good-quality images are transferred to the subsequent AI diagnostic system ("selective eating"). In three independent datasets from different clinical institutions, the DLIFS performed well with sensitivities of 96.9%, 95.6% and 96.6%, and specificities of 96.6%, 97.9% and 98.8%, respectively. Furthermore, we show that the application of our DLIFS significantly improves the performance of established AI diagnostic systems in real-world settings. Our work demonstrates that "selective eating" of real-world data is necessary and needs to be considered in the development of image-based AI systems.
Affiliation(s)
- Zhongwen Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
- Chong Guo
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
- Danyao Nie
- Shenzhen Eye Hospital, Shenzhen Key Laboratory of Ophthalmology, Affiliated Shenzhen Eye Hospital of Jinan University, 518001 Shenzhen, China
- Duoru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
- Yi Zhu
- Department of Molecular and Cellular Pharmacology, University of Miami Miller School of Medicine, Miami, FL 33136, USA
- Chuan Chen
- Sylvester Comprehensive Cancer Centre, University of Miami Miller School of Medicine, Miami, FL 33136, USA
- Lanqin Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
- Xiaohang Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
- Meimei Dongye
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
- Fabao Xu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
- Chenjin Jin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
- Ping Zhang
- Xudong Ophthalmic Hospital, 015000 Inner Mongolia, China
- Yu Han
- EYE and ENT Hospital of Fudan University, 200031 Shanghai, China
- Pisong Yan
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
- Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, 510060 Guangzhou, China
- Centre for Precision Medicine, Sun Yat-sen University, 510060 Guangzhou, China
40
Abstract
PURPOSE OF REVIEW To summarize how big data and artificial intelligence technologies have evolved, their current state, and next steps to enable future generations of artificial intelligence for ophthalmology. RECENT FINDINGS Big data in health care is ever increasing in volume and variety, enabled by the widespread adoption of electronic health records (EHRs) and standards for health data information exchange, such as Digital Imaging and Communications in Medicine and Fast Healthcare Interoperability Resources. Simultaneously, the development of powerful cloud-based storage and computing architectures supports a fertile environment for big data and artificial intelligence in health care. The high volume and velocity of imaging and structured data in ophthalmology is one of the reasons why ophthalmology is at the forefront of artificial intelligence research. Still needed are consensus labeling conventions for performing supervised learning on big data, promotion of data sharing and reuse, standards for sharing artificial intelligence model architectures, and access to artificial intelligence models through open application program interfaces (APIs). SUMMARY Future requirements for big data and artificial intelligence include fostering reproducible science, continuing open innovation, and supporting the clinical use of artificial intelligence by promoting standards for data labels, data sharing, artificial intelligence model architecture sharing, and accessible code and APIs.
41
Optical coherence tomography and color fundus photography in the screening of age-related macular degeneration: A comparative, population-based study. PLoS One 2020; 15:e0237352. [PMID: 32797085 PMCID: PMC7428158 DOI: 10.1371/journal.pone.0237352] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2020] [Accepted: 07/23/2020] [Indexed: 12/02/2022] Open
Abstract
Purpose To analyze the individual value and the contribution of color fundus photography (CFP) and optical coherence tomography (OCT) in the screening of age-related macular degeneration (AMD) of an unselected population. Methods CFP and OCT images of 15957 eyes of 8069 subjects older than 55 years, obtained during a population-based screening for AMD using a single diagnostic non-mydriatic imaging device, were analyzed by a blinded examiner. The two techniques were first evaluated using the dichotomous parameter "gradable/ungradable"; gradable images were then classified. CFP were graded according to the standardized classification of AMD lesions. OCT images were also categorized considering the presence of signs of early/intermediate AMD, late AMD, or other retinal diseases. Another blinded operator re-graded 1978 randomly selected images (for both CFP and OCT), to assess test reproducibility. Results Of the 15957 eyes, 8356 CFP (52.4%) and 15594 (97.7%) OCT scans were gradable. Moreover, most of the eyes with ungradable CFP (7339, 96.6%) were gradable at OCT. AMD signs were revealed in 7.4% of gradable CFP and in 10.4% of gradable OCT images. Moreover, at OCT, AMD signs were found in 1110 (6.9%) eyes whose CFP were ungradable or without AMD (847 and 263 eyes, respectively). The inter-operator agreement was good for the gradable versus ungradable parameter, and optimal for the AMD grading parameter of CFP. The agreement was optimal for all OCT parameters. Conclusions OCT provided gradable images in almost all examined eyes, compared to the limited gradability of CFP. Moreover, OCT images allowed detection of more AMD eyes than gradable photographs did. OCT imaging appears to significantly improve the power of AMD screening in a general, unselected population, compared to CFP alone.
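Two-rater agreement of the kind reported here (e.g., gradable versus ungradable) is commonly quantified with Cohen's kappa; the abstract does not name the statistic used, so the following is a generic stdlib sketch rather than the study's method:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters who each
    assign one categorical label (e.g., 'gradable'/'ungradable') per image.
    Undefined (division by zero) when expected agreement is exactly 1."""
    n = len(rater_a)
    # Observed agreement: fraction of items on which the raters match.
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independence, from each rater's marginals.
    cats = set(rater_a) | set(rater_b)
    p_exp = sum((rater_a.count(c) / n) * (rater_b.count(c) / n) for c in cats)
    return (p_obs - p_exp) / (1 - p_exp)
```

Kappa of 1 means perfect agreement; values near 0 mean agreement no better than chance.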
42
Abstract
PURPOSE OF REVIEW As artificial intelligence continues to develop new applications in ophthalmic image recognition, we provide here an introduction for ophthalmologists and a primer on the mechanisms of deep learning systems. RECENT FINDINGS Deep learning has lent itself to the automated interpretation of various retinal imaging modalities, including fundus photography and optical coherence tomography. Convolutional neural networks (CNN) represent the primary class of deep neural networks applied to these image analyses. These have been configured to aid in the detection of diabetic retinopathy, AMD, retinal detachment, glaucoma, and ROP, among other ocular disorders. Predictive models for retinal disease prognosis and treatment are also being validated. SUMMARY Deep learning systems have begun to demonstrate a level of diagnostic accuracy equal to or better than that of human graders for narrow image recognition tasks. However, challenges regarding the use of deep learning systems in ophthalmology remain. These include trust in unsupervised learning systems and the limited ability to recognize broad ranges of disorders.
43
Rim TH, Soh ZD, Tham YC, Yang HHS, Lee G, Kim Y, Nusinovici S, Ting DSW, Wong TY, Cheng CY. Deep Learning for Automated Sorting of Retinal Photographs. Ophthalmol Retina 2020; 4:793-800. [PMID: 32362553 DOI: 10.1016/j.oret.2020.03.007] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2019] [Revised: 02/04/2020] [Accepted: 03/06/2020] [Indexed: 06/11/2023]
Abstract
PURPOSE Though the domain of big data and artificial intelligence in health care continues to evolve, there is a lack of systematic methods to improve data quality and streamline the preparation process. To address this, we aimed to develop an automated sorting system (RetiSort) that accurately labels the type and laterality of retinal photographs. DESIGN Cross-sectional study. PARTICIPANTS RetiSort was developed with retinal photographs from the Singapore Epidemiology of Eye Diseases (SEED) study. METHODS The development of RetiSort was composed of 3 steps: 2 deep-learning (DL) algorithms and 1 rule-based classifier. For step 1, a DL algorithm was developed to locate the optic disc, the "landmark feature." For step 2, based on the location of the optic disc derived from step 1, a rule-based classifier was developed to sort retinal photographs into 3 types: macular-centered, optic disc-centered, or related to other fields. Step 2 concurrently distinguished laterality (i.e., the left or right eye) of macular-centered photographs. For step 3, an additional DL algorithm was developed to differentiate the laterality of disc-centered photographs. Via the 3 steps, RetiSort sorted and labeled retinal images into (1) right macular-centered, (2) left macular-centered, (3) right optic disc-centered, (4) left optic disc-centered, and (5) images relating to other fields. Subsequently, the accuracy of RetiSort was evaluated on 5000 randomly selected retinal images from SEED as well as on 3 publicly available image databases (DIARETDB0, HEI-MED, and Drishti-GS). The main outcome measure was the accuracy for sorting of retinal photographs. RESULTS RetiSort mislabeled 48 out of 5000 retinal images from SEED, representing an overall accuracy of 99.0% (95% confidence interval [CI], 98.7-99.3).
In external tests, RetiSort mislabeled 1, 0, and 2 images, respectively, from DIARETDB0, HEI-MED, and Drishti-GS, representing an accuracy of 99.2% (95% CI, 95.8-99.9), 100%, and 98.0% (95% CI, 93.1-99.8), respectively. Saliency maps consistently showed that the DL algorithm in step 3 required pixels in the central left lateral border and optic disc of optic disc-centered retinal photographs to differentiate the laterality. CONCLUSIONS RetiSort is a highly accurate automated sorting system. It can aid in data preparation and has practical applications in DL research that uses retinal photographs.
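RetiSort's step 2 is a rule-based classifier over the optic-disc location found in step 1. A toy sketch of that idea (the thresholds and label strings are illustrative assumptions, not RetiSort's actual rules; mapping the disc side to a left/right eye is deliberately left out, since the paper resolves disc-centered laterality with a third, DL-based step):

```python
def sort_photo(disc_x, disc_y, width, height):
    """Rule-based sort of a retinal photo from the detected optic-disc centre.
    The disc lies nasal to the macula, so in a macula-centred photo it sits
    well off to one side; a disc near the image centre suggests a
    disc-centred photo. Thresholds here are illustrative only."""
    nx, ny = disc_x / width, disc_y / height   # normalize to [0, 1]
    if not 0.2 <= ny <= 0.8:                   # disc far above/below centre
        return "other-field"
    if nx < 0.35:                              # disc well left of centre
        return "macular-centered, disc-left"
    if nx > 0.65:                              # disc well right of centre
        return "macular-centered, disc-right"
    return "disc-centered"                     # laterality left to a step-3 CNN
```

The appeal of the hybrid design is that the cheap geometric rule handles the easy cases, reserving a second learned model only for the ambiguous disc-centered photos.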
Affiliation(s)
- Tyler Hyungtaek Rim
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology & Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
- Zhi Da Soh
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Yih-Chung Tham
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology & Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
- Simon Nusinovici
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Daniel Shu Wei Ting
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology & Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
- Tien Yin Wong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology & Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
- Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology & Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore; Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
44
He M, Li Z, Liu C, Shi D, Tan Z. Deployment of Artificial Intelligence in Real-World Practice: Opportunity and Challenge. Asia Pac J Ophthalmol (Phila) 2020; 9:299-307. [PMID: 32694344 DOI: 10.1097/apo.0000000000000301] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022] Open
Abstract
Artificial intelligence has rapidly evolved from the experimental phase to the implementation phase in many image-driven clinical disciplines, including ophthalmology. A combination of the increasing availability of large datasets and computing power with revolutionary progress in deep learning has created unprecedented opportunities for major breakthrough improvements in the performance and accuracy of automated diagnoses that primarily focus on image recognition and feature detection. Such automated disease classification would significantly improve the accessibility, efficiency, and cost-effectiveness of eye care systems by making them less dependent on human input, potentially enabling diagnosis to be cheaper, quicker, and more consistent. Although this technology will have a profound impact on clinical flow and practice patterns sooner or later, translating such a technology into clinical practice is challenging and requires similar levels of accountability and effectiveness as any new medication or medical device, due to potential problems of bias and the ethical, medical, and legal issues that might arise. The objective of this review is to summarize the opportunities and challenges of this transition and to facilitate the integration of artificial intelligence (AI) into routine clinical practice based on our best understanding and experience in this area.
Affiliation(s)
- Mingguang He
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Centre for Eye Research Australia, Royal Victorian Eye & Ear Hospital, Melbourne, Australia
- Zhixi Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Chi Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- School of Computer Science, University of Technology Sydney, Ultimo NSW, Australia
- Danli Shi
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Zachary Tan
- Faculty of Medicine, The University of Queensland, Brisbane, Australia
- Schwarzman College, Tsinghua University, Beijing, China
45
Cai L, Hinkle JW, Arias D, Gorniak RJ, Lakhani PC, Flanders AE, Kuriyan AE. Applications of Artificial Intelligence for the Diagnosis, Prognosis, and Treatment of Age-related Macular Degeneration. Int Ophthalmol Clin 2020; 60:147-168. [PMID: 33093323 DOI: 10.1097/iio.0000000000000334] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
46
Li Z, Guo C, Nie D, Lin D, Zhu Y, Chen C, Zhang L, Xu F, Jin C, Zhang X, Xiao H, Zhang K, Zhao L, Yu S, Zhang G, Wang J, Lin H. A deep learning system for identifying lattice degeneration and retinal breaks using ultra-widefield fundus images. ANNALS OF TRANSLATIONAL MEDICINE 2019; 7:618. [PMID: 31930019 DOI: 10.21037/atm.2019.11.28] [Citation(s) in RCA: 34] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
Background Lattice degeneration and/or retinal breaks, defined as notable peripheral retinal lesions (NPRLs), are prone to evolving into rhegmatogenous retinal detachment, which can cause severe visual loss. However, screening NPRLs is time-consuming and labor-intensive. Therefore, we aimed to develop and evaluate a deep learning (DL) system for automated identification of NPRLs based on ultra-widefield fundus (UWF) images. Methods A total of 5,606 UWF images from 2,566 participants were used to train and verify a DL system. All images were classified by 3 experienced ophthalmologists. The reference standard was determined when an agreement was achieved among all 3 ophthalmologists, or adjudicated by another retinal specialist if disagreements existed. An independent test set of 750 images was used to verify the performance of 12 DL models trained using 4 different DL algorithms (InceptionResNetV2, InceptionV3, ResNet50, and VGG16) with 3 preprocessing techniques (original, augmented, and histogram-equalized images). Heatmaps were generated to visualize the process of the best DL system in the identification of NPRLs. Results In the test set, the best DL system for identifying NPRLs achieved an area under the curve (AUC) of 0.999 with a sensitivity and specificity of 98.7% and 99.2%, respectively. The best preprocessing method in each algorithm was the application of original image augmentation (average AUC = 0.996). The best algorithm in each preprocessing method was InceptionResNetV2 (average AUC = 0.996). In the test set, 150 of 154 true-positive cases (97.4%) displayed heatmap visualization in the NPRL regions. Conclusions A DL system has high accuracy in identifying NPRLs based on UWF images. This system may help to prevent the development of rhegmatogenous retinal detachment by early detection of NPRLs.
Affiliation(s)
- Zhongwen Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
- Chong Guo
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
- Danyao Nie
- Shenzhen Ophthalmic Center, Jinan University, Shenzhen 518001, China
- Duoru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
- Yi Zhu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China; Department of Molecular and Cellular Pharmacology, University of Miami Miller School of Medicine, Miami, Florida, USA
- Chuan Chen
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China; Department of Molecular and Cellular Pharmacology, University of Miami Miller School of Medicine, Miami, Florida, USA
- Li Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
- Fabao Xu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
- Chenjin Jin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
- Xiayin Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
- Hui Xiao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
- Kai Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China; School of Computer Science and Technology, Xidian University, Xi'an 710071, China
- Lanqin Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
- Shanshan Yu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
- Guoming Zhang
- Shenzhen Ophthalmic Center, Jinan University, Shenzhen 518001, China
- Jiantao Wang
- Shenzhen Ophthalmic Center, Jinan University, Shenzhen 518001, China
- Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China