1
Grzybowski A, Jin K, Zhou J, Pan X, Wang M, Ye J, Wong TY. Retina Fundus Photograph-Based Artificial Intelligence Algorithms in Medicine: A Systematic Review. Ophthalmol Ther 2024; 13:2125-2149. PMID: 38913289; PMCID: PMC11246322; DOI: 10.1007/s40123-024-00981-4.
Abstract
We conducted a systematic review of research in artificial intelligence (AI) for retinal fundus photographic images. We highlighted the use of various AI algorithms, including deep learning (DL) models, for application in ophthalmic and non-ophthalmic (i.e., systemic) disorders. We found that the use of AI algorithms for the interpretation of retinal images, compared to clinical data and physician experts, represents an innovative solution with demonstrated superior accuracy in identifying many ophthalmic (e.g., diabetic retinopathy (DR), age-related macular degeneration (AMD), optic nerve disorders) and non-ophthalmic disorders (e.g., dementia, cardiovascular disease). Large volumes of clinical and imaging data have accumulated for this research, enabling the incorporation of AI and DL into automated analysis. AI has the potential to transform healthcare by improving accuracy, speed, and workflow, lowering cost, increasing access, reducing mistakes, and transforming healthcare worker education and training.
Affiliation(s)
- Andrzej Grzybowski: Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznań, Poland
- Kai Jin: Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Jingxin Zhou: Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Xiangji Pan: Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Meizhu Wang: Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Juan Ye: Eye Center, School of Medicine, The Second Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Tien Y Wong: School of Clinical Medicine, Tsinghua Medicine, Tsinghua University, Beijing, China; Singapore Eye Research Institute, Singapore National Eye Center, Singapore
2
Hoffmann L, Runkel CB, Künzel S, Kabiri P, Rübsam A, Bonaventura T, Marquardt P, Haas V, Biniaminov N, Biniaminov S, Joussen AM, Zeitz O. Using Deep Learning to Distinguish Highly Malignant Uveal Melanoma from Benign Choroidal Nevi. J Clin Med 2024; 13:4141. PMID: 39064181; DOI: 10.3390/jcm13144141.
Abstract
Background: This study aimed to evaluate the potential of human-machine interaction (HMI) in deep learning software for discerning the malignancy of choroidal melanocytic lesions based on fundus photographs. Methods: The study enrolled individuals diagnosed with a choroidal melanocytic lesion at a tertiary clinic between 2011 and 2023, resulting in a cohort of 762 eligible cases. A deep learning-based assistant integrated into the software underwent training using a dataset comprising 762 color fundus photographs (CFPs) of choroidal lesions captured by various fundus cameras. The dataset was categorized into benign nevi, untreated choroidal melanomas, and irradiated choroidal melanomas. The reference standard for evaluation was established by retinal specialists using multimodal imaging. Trinary and binary models were trained, and their classification performance was evaluated on a test set consisting of 100 independent images. The discriminative performance of the deep learning models was evaluated based on accuracy, recall, and specificity. Results: The final accuracy rates on the independent test set for multi-class and binary (benign vs. malignant) classification were 84.8% and 90.9%, respectively. Recall and specificity ranged from 0.85 to 0.90 and 0.91 to 0.92, respectively. The mean area under the curve (AUC) values were 0.96 and 0.99, respectively. Optimal discriminative performance was observed in binary classification with the incorporation of a single imaging modality, achieving an accuracy of 95.8%. Conclusions: The deep learning models demonstrated commendable performance in distinguishing the malignancy of choroidal lesions. The software exhibits promise for resource-efficient and cost-effective pre-stratification.
Affiliation(s)
- Laura Hoffmann, Constance B Runkel, Steffen Künzel, Payam Kabiri, Anne Rübsam, Theresa Bonaventura, Antonia M Joussen, and Oliver Zeitz: Department of Ophthalmology, Charité University Hospital Berlin, 12203 Berlin, Germany
3
Martin E, Cook AG, Frost SM, Turner AW, Chen FK, McAllister IL, Nolde JM, Schlaich MP. Ocular biomarkers: useful incidental findings by deep learning algorithms in fundus photographs. Eye (Lond) 2024. PMID: 38734746; DOI: 10.1038/s41433-024-03085-2.
Abstract
BACKGROUND/OBJECTIVES Artificial intelligence can assist with ocular image analysis for screening and diagnosis, but it is not yet capable of autonomous full-spectrum screening. Hypothetically, false-positive results may hold unrealized screening potential, arising from signals that persist despite training and/or from ambiguous signals such as biomarker overlap or high comorbidity. The study aimed to explore the potential to detect clinically useful incidental ocular biomarkers by screening fundus photographs of hypertensive adults using diabetic deep learning algorithms. SUBJECTS/METHODS Patients referred for treatment-resistant hypertension were imaged at a hospital unit in Perth, Australia, between 2016 and 2022. For each of the 433 participants imaged, the same 45° colour fundus photograph was processed by three deep learning algorithms. Two expert retinal specialists graded all false-positive results for diabetic retinopathy in non-diabetic participants. RESULTS Of the 29 non-diabetic participants misclassified as positive for diabetic retinopathy, 28 (97%) had clinically useful retinal biomarkers. The models designed to screen for fewer diseases captured more incidental disease. All three algorithms showed a positive correlation between the severity of hypertensive retinopathy and misclassified diabetic retinopathy. CONCLUSIONS The results suggest that diabetic deep learning models may be responsive to hypertensive and other clinically useful retinal biomarkers within an at-risk, hypertensive cohort. The observation that models trained for fewer diseases captured more incidental pathology increases confidence in signalling hypotheses aligned with using self-supervised learning to develop autonomous comprehensive screening. Meanwhile, non-referable and false-positive outputs of other deep learning screening models could be explored for immediate clinical use in other populations.
Affiliation(s)
- Eve Martin: Commonwealth Scientific and Industrial Research Organisation (CSIRO), Kensington, WA, Australia; School of Population and Global Health, The University of Western Australia, Crawley, Australia; Dobney Hypertension Centre - Royal Perth Hospital Unit, Medical School, The University of Western Australia, Perth, Australia; Australian e-Health Research Centre, Floreat, WA, Australia
- Angus G Cook: School of Population and Global Health, The University of Western Australia, Crawley, Australia
- Shaun M Frost: Commonwealth Scientific and Industrial Research Organisation (CSIRO), Kensington, WA, Australia; Australian e-Health Research Centre, Floreat, WA, Australia
- Angus W Turner: Lions Eye Institute, Nedlands, WA, Australia; Centre for Ophthalmology and Visual Science, The University of Western Australia, Perth, Australia
- Fred K Chen: Lions Eye Institute, Nedlands, WA, Australia; Centre for Ophthalmology and Visual Science, The University of Western Australia, Perth, Australia; Centre for Eye Research Australia, The Royal Victorian Eye and Ear Hospital, East Melbourne, VIC, Australia; Ophthalmology, Department of Surgery, The University of Melbourne, East Melbourne, VIC, Australia; Ophthalmology Department, Royal Perth Hospital, Perth, Australia
- Ian L McAllister: Lions Eye Institute, Nedlands, WA, Australia; Centre for Ophthalmology and Visual Science, The University of Western Australia, Perth, Australia
- Janis M Nolde: Dobney Hypertension Centre - Royal Perth Hospital Unit, Medical School, The University of Western Australia, Perth, Australia; Departments of Cardiology and Nephrology, Royal Perth Hospital, Perth, Australia
- Markus P Schlaich: Dobney Hypertension Centre - Royal Perth Hospital Unit, Medical School, The University of Western Australia, Perth, Australia; Departments of Cardiology and Nephrology, Royal Perth Hospital, Perth, Australia
4
Driban M, Yan A, Selvam A, Ong J, Vupparaboina KK, Chhablani J. Artificial intelligence in chorioretinal pathology through fundoscopy: a comprehensive review. Int J Retina Vitreous 2024; 10:36. PMID: 38654344; PMCID: PMC11036694; DOI: 10.1186/s40942-024-00554-4.
Abstract
BACKGROUND Applications for artificial intelligence (AI) in ophthalmology are continually evolving. Fundoscopy is one of the oldest ocular imaging techniques but remains a mainstay in posterior segment imaging due to its prevalence, ease of use, and ongoing technological advancement. AI has been leveraged for fundoscopy to accomplish core tasks including segmentation, classification, and prediction. MAIN BODY In this article we provide a review of AI in fundoscopy applied to representative chorioretinal pathologies, including diabetic retinopathy and age-related macular degeneration, among others. We conclude with a discussion of future directions and current limitations. SHORT CONCLUSION As AI evolves, it will become increasingly essential for the modern ophthalmologist to understand its applications and limitations to improve patient outcomes and continue to innovate.
Affiliation(s)
- Matthew Driban: Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Audrey Yan: Department of Medicine, West Virginia School of Osteopathic Medicine, Lewisburg, WV, USA
- Amrish Selvam: Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Joshua Ong: Michigan Medicine, University of Michigan, Ann Arbor, USA
- Jay Chhablani: Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
5
Ayhan MS, Neubauer J, Uzel MM, Gelisken F, Berens P. Interpretable detection of epiretinal membrane from optical coherence tomography with deep neural networks. Sci Rep 2024; 14:8484. PMID: 38605115; PMCID: PMC11009346; DOI: 10.1038/s41598-024-57798-1.
Abstract
This study aimed to automatically detect epiretinal membranes (ERM) in various OCT scans of the central and paracentral macula region and classify them by size using deep neural networks (DNNs). To this end, 11,061 OCT images were included and graded according to the presence of an ERM and its size (small 100-1000 µm, large > 1000 µm). The data set was divided into training, validation and test sets (75%, 10%, 15% of the data, respectively). An ensemble of DNNs was trained and saliency maps were generated using Guided Backprop. OCT scans were also transformed into a one-dimensional value using t-SNE analysis. The DNNs' receiver operating characteristics on the test set showed high performance for no-ERM, small-ERM and large-ERM cases (AUC: 0.99, 0.92, 0.99, respectively; 3-way accuracy: 89%), with small ERMs being the most difficult to detect. t-SNE analysis sorted cases by size and, in particular, revealed increased classification uncertainty at the transitions between groups. Saliency maps reliably highlighted ERM, regardless of the presence of other OCT features (i.e. retinal thickening, intraretinal pseudo-cysts, epiretinal proliferation) and entities such as ERM retinoschisis, macular pseudohole and lamellar macular hole. This study therefore showed that DNNs can reliably detect and grade ERMs according to their size, not only in the fovea but also in the paracentral region, even in cases of hard-to-detect, small ERMs. In addition, the generated saliency maps can be used to highlight small ERMs that might otherwise be missed. The proposed model could be used for screening programs or decision support systems in the future.
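The per-class AUCs reported above are one-vs-rest areas under the ROC curve. As an illustrative sketch (not code from this study), the AUC can be computed directly from classifier scores via the Mann-Whitney rank-sum identity; the class split and score values below are invented for demonstration:

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC via the Mann-Whitney identity: the probability that a randomly
    chosen positive is scored above a randomly chosen negative, ties 0.5."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    diff = pos[:, None] - neg[None, :]  # every (positive, negative) pair
    return float((diff > 0).mean() + 0.5 * (diff == 0).mean())

# toy one-vs-rest split (e.g. 'large-ERM' vs the rest); values illustrative
y = [1, 1, 1, 0, 0, 0]
s = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
auc = roc_auc(s, y)  # 8 of the 9 pairs are correctly ordered -> 8/9
```

The same function applied once per class (no-ERM, small-ERM, large-ERM against the rest) reproduces the one-vs-rest evaluation scheme the abstract describes.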
Affiliation(s)
- Murat Seçkin Ayhan: Institute for Ophthalmic Research, University of Tübingen, Elfriede Aulhorn Str. 7, 72076 Tübingen, Germany
- Jonas Neubauer: University Eye Clinic, University of Tübingen, Tübingen, Germany
- Mehmet Murat Uzel: University Eye Clinic, University of Tübingen, Tübingen, Germany; Department of Ophthalmology, Balıkesir University School of Medicine, Balıkesir, Turkey
- Faik Gelisken: University Eye Clinic, University of Tübingen, Tübingen, Germany
- Philipp Berens: Institute for Ophthalmic Research, University of Tübingen, Elfriede Aulhorn Str. 7, 72076 Tübingen, Germany; Tübingen AI Center, Tübingen, Germany
6
Bae SH, Go S, Kim J, Park KH, Lee S, Park SJ. A novel vector field analysis for quantitative structure changes after macular epiretinal membrane surgery. Sci Rep 2024; 14:8242. PMID: 38589440; PMCID: PMC11002028; DOI: 10.1038/s41598-024-58089-5.
Abstract
The aim of this study was to introduce a novel vector field analysis for the quantitative measurement of retinal displacement after epiretinal membrane (ERM) removal. We developed a novel framework to measure retinal displacement from retinal fundus images as follows: (1) rigid registration of preoperative retinal fundus images in reference to postoperative retinal fundus images, (2) extraction of retinal vessel segmentation masks from these retinal fundus images, (3) non-rigid registration of preoperative vessel masks in reference to postoperative vessel masks, and (4) calculation of the transformation matrix required for non-rigid registration for each pixel. These pixel-wise vector field results were summarized according to 24 predefined sectors after standardization. We applied this framework to 20 patients who underwent ERM removal to obtain their retinal displacement vector fields between retinal fundus images taken preoperatively and at postoperative 1, 4, 10, and 22 months. The mean direction of the displacement vectors was nasal. The mean standardized magnitudes of retinal displacement between preoperative and postoperative 1 month, postoperative 1 and 4 months, 4 and 10 months, and 10 and 22 months were 38.6, 14.9, 7.6, and 5.4, respectively. In conclusion, the proposed method provides a computerized, reproducible, and scalable way to analyze structural changes in the retina with a powerful visualization tool. Retinal structural changes were mostly concentrated in the early postoperative period and tended to move nasally.
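The final summarization step, reducing a per-pixel displacement field to per-sector mean vectors, can be sketched as follows. This is a minimal numpy illustration under assumed conventions (24 equal angular bins around a chosen center), not the authors' predefined sector layout or registration code:

```python
import numpy as np

def summarize_displacement(field, center, n_sectors=24):
    """Summarize a per-pixel displacement field (H, W, 2) into angular
    sectors around `center`, returning the mean vector per sector.
    Hypothetical helper illustrating the sector-wise summary step."""
    h, w, _ = field.shape
    ys, xs = np.mgrid[0:h, 0:w]
    angles = np.arctan2(ys - center[0], xs - center[1])     # pixel position angle
    sector = ((angles + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    means = np.zeros((n_sectors, 2))
    for s in range(n_sectors):
        means[s] = field[sector == s].mean(axis=0)
    return means

# toy field: a uniform 3-px leftward (nasal, for a right eye) shift
field = np.zeros((64, 64, 2))
field[..., 0] = -3.0  # x-component of displacement
means = summarize_displacement(field, center=(32, 32))
```

With a uniform field every sector mean is the same vector; on real data the 24 entries reveal where displacement concentrates, which is what the sector plot in such an analysis visualizes.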
Affiliation(s)
- Seok Hyun Bae: Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, 173-82 Gumi-ro, Bundang-gu, Seongnam-si, Gyeonggi-do 13620, South Korea; Department of Ophthalmology, HanGil Eye Hospital, Incheon, South Korea
- Sojung Go: Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, 173-82 Gumi-ro, Bundang-gu, Seongnam-si, Gyeonggi-do 13620, South Korea
- Jooyoung Kim: Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, 173-82 Gumi-ro, Bundang-gu, Seongnam-si, Gyeonggi-do 13620, South Korea
- Kyu Hyung Park: Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Hospital, Seoul, South Korea
- Soochahn Lee: School of Electrical Engineering, Kookmin University, Seoul, South Korea
- Sang Jun Park: Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, 173-82 Gumi-ro, Bundang-gu, Seongnam-si, Gyeonggi-do 13620, South Korea
7
Liu Y, Xie H, Zhao X, Tang J, Yu Z, Wu Z, Tian R, Chen Y, Chen M, Ntentakis DP, Du Y, Chen T, Hu Y, Zhang S, Lei B, Zhang G. Automated detection of nine infantile fundus diseases and conditions in retinal images using a deep learning system. EPMA J 2024; 15:39-51. PMID: 38463622; PMCID: PMC10923762; DOI: 10.1007/s13167-024-00350-y.
Abstract
Purpose We developed an Infant Retinal Intelligent Diagnosis System (IRIDS), an automated system to aid early diagnosis and monitoring of infantile fundus diseases and health conditions to satisfy urgent needs of ophthalmologists. Methods We developed IRIDS by combining convolutional neural networks and transformer structures, using a dataset of 7697 retinal images (1089 infants) from four hospitals. It identifies nine fundus diseases and conditions, namely, retinopathy of prematurity (ROP) (mild ROP, moderate ROP, and severe ROP), retinoblastoma (RB), retinitis pigmentosa (RP), Coats disease, coloboma of the choroid, congenital retinal fold (CRF), and normal. IRIDS also includes depth attention modules, ResNet-18 (Res-18), and Multi-Axis Vision Transformer (MaxViT). Performance was compared to that of ophthalmologists using 450 retinal images. The IRIDS employed a five-fold cross-validation approach to generate the classification results. Results Several baseline models achieved the following metrics: accuracy, precision, recall, F1-score (F1), kappa, and area under the receiver operating characteristic curve (AUC) with best values of 94.62% (95% CI, 94.34%-94.90%), 94.07% (95% CI, 93.32%-94.82%), 90.56% (95% CI, 88.64%-92.48%), 92.34% (95% CI, 91.87%-92.81%), 91.15% (95% CI, 90.37%-91.93%), and 99.08% (95% CI, 99.07%-99.09%), respectively. In comparison, IRIDS showed promising results compared to ophthalmologists, demonstrating an average accuracy, precision, recall, F1, kappa, and AUC of 96.45% (95% CI, 96.37%-96.53%), 95.86% (95% CI, 94.56%-97.16%), 94.37% (95% CI, 93.95%-94.79%), 95.03% (95% CI, 94.45%-95.61%), 94.43% (95% CI, 93.96%-94.90%), and 99.51% (95% CI, 99.51%-99.51%), respectively, in multi-label classification on the test dataset, utilizing the Res-18 and MaxViT models. These results suggest that, particularly in terms of AUC, IRIDS achieved performance that warrants further investigation for the detection of retinal abnormalities. 
Conclusions IRIDS identifies nine infantile fundus diseases and conditions accurately. It may aid non-ophthalmologist personnel in underserved areas in infantile fundus disease screening, thus preventing severe complications. IRIDS serves as an example of artificial intelligence integration into ophthalmology to achieve better outcomes in predictive, preventive, and personalized medicine (PPPM / 3PM) in the treatment of infantile fundus diseases. Supplementary Information The online version contains supplementary material available at 10.1007/s13167-024-00350-y.
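When a five-fold cross-validation is run on a dataset where each infant contributes several images (7697 images from 1089 infants here), folds are normally formed per patient so the same eye never appears in both training and validation. The paper does not spell out its fold assignment, so the following is a generic, hedged sketch of patient-level k-fold splitting:

```python
import numpy as np

def patient_kfold(patient_ids, k=5, seed=0):
    """Split sample indices into k folds *by patient*, so images of one
    infant never straddle the train/validation boundary (avoids leakage).
    Illustrative only; the study's exact assignment is not published."""
    patient_ids = np.asarray(patient_ids)
    uniq = np.random.default_rng(seed).permutation(np.unique(patient_ids))
    fold_of_patient = {p: i % k for i, p in enumerate(uniq)}
    sample_fold = np.array([fold_of_patient[p] for p in patient_ids])
    return [(np.flatnonzero(sample_fold != f), np.flatnonzero(sample_fold == f))
            for f in range(k)]

# toy: 12 retinal images from 6 infants, two images each
pids = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5]
splits = patient_kfold(pids, k=3)
```

Each `(train, val)` pair indexes the image list; metrics averaged over the k validation folds give the cross-validated estimates reported in such studies.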
Affiliation(s)
- Yaling Liu: Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen 518040, China
- Hai Xie: National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Xinyu Zhao: Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen 518040, China
- Jiannan Tang: Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen 518040, China
- Zhen Yu: Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen 518040, China
- Zhenquan Wu: Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen 518040, China
- Ruyin Tian: Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen 518040, China
- Yi Chen: Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen 518040, China; Guizhou Medical University, Guiyang, Guizhou, China
- Miaohong Chen: Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen 518040, China; Guizhou Medical University, Guiyang, Guizhou, China
- Dimitrios P. Ntentakis: Retina Service, Ines and Fred Yeatts Retina Research Laboratory, Angiogenesis Laboratory, Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
- Yueshanyi Du: Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen 518040, China
- Tingyi Chen: Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen 518040, China; Guizhou Medical University, Guiyang, Guizhou, China
- Yarou Hu: Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen 518040, China
- Sifan Zhang: Guizhou Medical University, Guiyang, Guizhou, China; Southern University of Science and Technology School of Medicine, Shenzhen, China
- Baiying Lei: National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Guoming Zhang: Shenzhen Eye Hospital, Shenzhen Eye Institute, Jinan University, Shenzhen 518040, China; Guizhou Medical University, Guiyang, Guizhou, China
8
Pandey PU, Ballios BG, Christakis PG, Kaplan AJ, Mathew DJ, Ong Tone S, Wan MJ, Micieli JA, Wong JCY. Ensemble of deep convolutional neural networks is more accurate and reliable than board-certified ophthalmologists at detecting multiple diseases in retinal fundus photographs. Br J Ophthalmol 2024; 108:417-423. PMID: 36720585; PMCID: PMC10894841; DOI: 10.1136/bjo-2022-322183.
Abstract
AIMS To develop an algorithm to classify multiple retinal pathologies accurately and reliably from fundus photographs and to validate its performance against human experts. METHODS We trained a deep convolutional ensemble (DCE), an ensemble of five convolutional neural networks (CNNs), to classify retinal fundus photographs into diabetic retinopathy (DR), glaucoma, age-related macular degeneration (AMD) and normal eyes. The CNN architecture was based on the InceptionV3 model, and initial weights were pretrained on the ImageNet dataset. We used 43 055 fundus images from 12 public datasets. Five trained ensembles were then tested on an 'unseen' set of 100 images. Seven board-certified ophthalmologists were asked to classify these test images. RESULTS Board-certified ophthalmologists achieved a mean accuracy of 72.7% over all classes, while the DCE achieved a mean accuracy of 79.2% (p=0.03). The DCE had a statistically significantly higher mean F1-score for DR classification compared with the ophthalmologists (76.8% vs 57.5%; p=0.01) and greater but statistically non-significant mean F1-scores for glaucoma (83.9% vs 75.7%; p=0.10), AMD (85.9% vs 85.2%; p=0.69) and normal eyes (73.0% vs 70.5%; p=0.39). The DCE also had greater mean agreement between accuracy and confidence (81.6% vs 70.3%; p<0.001). DISCUSSION We developed a deep learning model and found that it could classify four categories of fundus images more accurately and reliably than board-certified ophthalmologists. This work provides proof-of-principle that an algorithm is capable of accurate and reliable recognition of multiple retinal diseases using only fundus photographs.
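The core inference step of a CNN ensemble is commonly implemented by averaging the members' softmax outputs and taking the arg-max class. The abstract does not state the exact combination rule, so treat the following as a hedged sketch of that common choice (output averaging), with toy probabilities standing in for the five InceptionV3 members:

```python
import numpy as np

def ensemble_predict(prob_list):
    """Average the class-probability outputs of several independently
    trained CNNs and take the arg-max class per image.
    prob_list: list of (n_images, n_classes) probability arrays."""
    mean_probs = np.mean(prob_list, axis=0)
    return mean_probs, mean_probs.argmax(axis=1)

# toy: three 'models' voting over four classes (DR, glaucoma, AMD, normal)
rng = np.random.default_rng(42)
probs = [rng.dirichlet(np.ones(4), size=5) for _ in range(3)]
mean_probs, pred = ensemble_predict(probs)
```

Averaging probabilities (rather than hard votes) keeps a calibrated-looking score per class, which is what makes the accuracy-versus-confidence agreement analysis in the abstract possible.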
Affiliation(s)
- Prashant U Pandey: School of Biomedical Engineering, The University of British Columbia, Vancouver, British Columbia, Canada
- Brian G Ballios: Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada; Krembil Research Institute, University Health Network, Toronto, Ontario, Canada; Kensington Vision and Research Centre and Kensington Research Institute, Toronto, Ontario, Canada
- Panos G Christakis: Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada; Kensington Vision and Research Centre and Kensington Research Institute, Toronto, Ontario, Canada
- Alexander J Kaplan: Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- David J Mathew: Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada; Krembil Research Institute, University Health Network, Toronto, Ontario, Canada; Kensington Vision and Research Centre and Kensington Research Institute, Toronto, Ontario, Canada
- Stephan Ong Tone: Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada; Sunnybrook Research Institute, Toronto, Ontario, Canada
- Michael J Wan: Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Jonathan A Micieli: Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada; Kensington Vision and Research Centre and Kensington Research Institute, Toronto, Ontario, Canada; Department of Ophthalmology, St. Michael's Hospital, Unity Health, Toronto, Ontario, Canada
- Jovi C Y Wong: Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
9
Choi JY, Ryu IH, Kim JK, Lee IS, Yoo TK. Development of a generative deep learning model to improve epiretinal membrane detection in fundus photography. BMC Med Inform Decis Mak 2024; 24:25. PMID: 38273286; PMCID: PMC10811871; DOI: 10.1186/s12911-024-02431-4.
Abstract
BACKGROUND The epiretinal membrane (ERM) is a common retinal disorder characterized by abnormal fibrocellular tissue at the vitreomacular interface. Most patients with ERM are asymptomatic at early stages. Therefore, screening for ERM will become increasingly important. Despite the high prevalence of ERM, few deep learning studies have investigated ERM detection in the color fundus photography (CFP) domain. In this study, we built a generative model to enhance ERM detection performance in the CFP. METHODS This deep learning study retrospectively collected 302 ERM and 1,250 healthy CFP data points from a healthcare center. The generative model using StyleGAN2 was trained using single-center data. EfficientNetB0 with StyleGAN2-based augmentation was validated using independent internal single-center data and external datasets. We randomly assigned healthcare center data to the development (80%) and internal validation (20%) datasets. Data from two publicly accessible sources were used as external validation datasets. RESULTS StyleGAN2 facilitated realistic CFP synthesis with the characteristic cellophane reflex features of the ERM. The proposed method with StyleGAN2-based augmentation outperformed the typical transfer learning without a generative adversarial network. The proposed model achieved an area under the receiver operating characteristic (AUC) curve of 0.926 for internal validation. AUCs of 0.951 and 0.914 were obtained for the two external validation datasets. Compared with the deep learning model without augmentation, StyleGAN2-based augmentation improved the detection performance and contributed to the focus on the location of the ERM. CONCLUSIONS We proposed an ERM detection model by synthesizing realistic CFP images with the pathological features of ERM through generative deep learning. We believe that our deep learning framework will help achieve a more accurate detection of ERM in a limited data setting.
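Generative augmentation of this kind boils down to topping up the minority class (302 ERM photos vs 1,250 healthy) with GAN-synthesized images before training the classifier. The StyleGAN2 generator itself is far out of scope for a snippet, so the sketch below uses placeholder arrays for its samples and shows only the hypothetical class-balancing step:

```python
import numpy as np

def balance_with_synthetic(real, synthetic, target_n, seed=0):
    """Top up a minority class with GAN-generated images until it reaches
    target_n examples. `synthetic` stands in for generator samples; the
    generator (StyleGAN2 in the study) is not implemented here."""
    n_extra = max(0, target_n - len(real))
    rng = np.random.default_rng(seed)
    pick = rng.choice(len(synthetic), size=n_extra, replace=False)
    return np.concatenate([real, synthetic[pick]], axis=0)

real_erm = np.zeros((302, 8, 8, 3))    # toy-sized stand-ins for fundus photos
synth_erm = np.ones((2000, 8, 8, 3))   # toy stand-ins for StyleGAN2 samples
balanced = balance_with_synthetic(real_erm, synth_erm, target_n=1250)
```

The balanced array would then feed the classifier's training loop alongside the healthy class; only real images are ever used for validation, which is why the abstract reports internal and external validation on non-synthetic data.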
Affiliation(s)
- Joon Yul Choi: Department of Biomedical Engineering, Yonsei University, Wonju, South Korea
- Ik Hee Ryu: Department of Refractive Surgery, B&VIIT Eye Center, B2 GT Tower, 1317-23 Seocho-Dong, Seocho-Gu, Seoul, South Korea; Research and Development Department, VISUWORKS, Seoul, South Korea
- Jin Kuk Kim: Department of Refractive Surgery, B&VIIT Eye Center, B2 GT Tower, 1317-23 Seocho-Dong, Seocho-Gu, Seoul, South Korea; Research and Development Department, VISUWORKS, Seoul, South Korea
- In Sik Lee: Department of Refractive Surgery, B&VIIT Eye Center, B2 GT Tower, 1317-23 Seocho-Dong, Seocho-Gu, Seoul, South Korea
- Tae Keun Yoo: Department of Refractive Surgery, B&VIIT Eye Center, B2 GT Tower, 1317-23 Seocho-Dong, Seocho-Gu, Seoul, South Korea; Research and Development Department, VISUWORKS, Seoul, South Korea
10
Valentim CCS, Wu AK, Yu S, Manivannan N, Zhang Q, Cao J, Song W, Wang V, Kang H, Kalur A, Iyer AI, Conti T, Singh RP, Talcott KE. Deep learning-based algorithm for the detection of idiopathic full thickness macular holes in spectral domain optical coherence tomography. Int J Retina Vitreous 2024; 10:9. PMID: 38263402; PMCID: PMC10804727; DOI: 10.1186/s40942-024-00526-8.
Abstract
BACKGROUND Automated identification of spectral domain optical coherence tomography (SD-OCT) features can improve retina clinic workflow efficiency by detecting pathologic findings. The purpose of this study was to test a deep learning (DL)-based algorithm for the identification of idiopathic full thickness macular hole (IFTMH) features and stages of severity in SD-OCT B-scans. METHODS In this cross-sectional study, subjects diagnosed solely with either IFTMH or posterior vitreous detachment (PVD) were identified, excluding secondary causes of macular holes, any concurrent maculopathies, or incomplete records. SD-OCT scans (512 × 128) from all subjects were acquired with CIRRUS™ HD-OCT (ZEISS, Dublin, CA) and reviewed for quality. To establish a ground truth classification, each SD-OCT B-scan was labeled by two trained graders and adjudicated by a retina specialist when applicable. Two test sets were built based on different gold-standard classification methods. The sensitivity, specificity and accuracy of the algorithm in identifying IFTMH features in SD-OCT B-scans were determined. Spearman's correlation was run to examine whether the algorithm's probability score was associated with the severity stages of IFTMH. RESULTS Six hundred and one SD-OCT cube scans from 601 subjects (299 with IFTMH and 302 with PVD) were used. A total of 76,928 individual SD-OCT B-scans were labeled gradable by the algorithm, which yielded an accuracy of 88.5% (test set 1, 33,024 B-scans) and 91.4% (test set 2, 43,904 B-scans) in identifying SD-OCT features of IFTMHs. A Spearman's correlation coefficient of 0.15 was achieved between the algorithm's probability score and the stages of the 299 IFTMH cubes studied (47 [15.7%] stage 2, 56 [18.7%] stage 3 and 196 [65.6%] stage 4). CONCLUSIONS The DL-based algorithm accurately detected IFTMH features on individual SD-OCT B-scans in both test sets. However, the correlation between the algorithm's probability score and IFTMH severity stage was low. The algorithm may serve as a clinical decision support tool that assists with the identification of IFTMHs. Further training is necessary for the algorithm to identify stages of IFTMHs.
Affiliation(s)
- Carolina C S Valentim
- Center for Ophthalmic Bioinformatics, Cole Eye Institute, Cleveland Clinic Foundation, 9500 Euclid Ave. i32, Cleveland, OH, USA
| | - Anna K Wu
- Center for Ophthalmic Bioinformatics, Cole Eye Institute, Cleveland Clinic Foundation, 9500 Euclid Ave. i32, Cleveland, OH, USA
- Case Western Reserve University School of Medicine, Cleveland, OH, USA
| | - Sophia Yu
- Carl Zeiss Meditec, Inc, Dublin, CA, USA
| | | | | | - Jessica Cao
- Cole Eye Institute, Cleveland Clinic Foundation, Cleveland, OH, USA
| | - Weilin Song
- Cleveland Clinic Lerner College of Medicine, Cleveland, OH, USA
| | - Victoria Wang
- Case Western Reserve University School of Medicine, Cleveland, OH, USA
| | - Hannah Kang
- Case Western Reserve University School of Medicine, Cleveland, OH, USA
| | - Aneesha Kalur
- Center for Ophthalmic Bioinformatics, Cole Eye Institute, Cleveland Clinic Foundation, 9500 Euclid Ave. i32, Cleveland, OH, USA
| | - Amogh I Iyer
- Center for Ophthalmic Bioinformatics, Cole Eye Institute, Cleveland Clinic Foundation, 9500 Euclid Ave. i32, Cleveland, OH, USA
| | - Thais Conti
- Center for Ophthalmic Bioinformatics, Cole Eye Institute, Cleveland Clinic Foundation, 9500 Euclid Ave. i32, Cleveland, OH, USA
| | - Rishi P Singh
- Center for Ophthalmic Bioinformatics, Cole Eye Institute, Cleveland Clinic Foundation, 9500 Euclid Ave. i32, Cleveland, OH, USA
| | - Katherine E Talcott
- Center for Ophthalmic Bioinformatics, Cole Eye Institute, Cleveland Clinic Foundation, 9500 Euclid Ave. i32, Cleveland, OH, USA.
11
Li B, Chen H, Yu W, Zhang M, Lu F, Ma J, Hao Y, Li X, Hu B, Shen L, Mao J, He X, Wang H, Ding D, Li X, Chen Y. The performance of a deep learning system in assisting junior ophthalmologists in diagnosing 13 major fundus diseases: a prospective multi-center clinical trial. NPJ Digit Med 2024; 7:8. [PMID: 38212607 PMCID: PMC10784504 DOI: 10.1038/s41746-023-00991-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2023] [Accepted: 12/11/2023] [Indexed: 01/13/2024] Open
Abstract
Artificial intelligence (AI)-based diagnostic systems have been reported to improve fundus disease screening in previous studies. This multicenter prospective self-controlled clinical trial aimed to evaluate the diagnostic performance of a deep learning system (DLS) in assisting junior ophthalmologists in detecting 13 major fundus diseases. A total of 1493 fundus images from 748 patients were prospectively collected from five tertiary hospitals in China. Nine junior ophthalmologists were trained and annotated the images with or without the suggestions proposed by the DLS. The diagnostic performance was evaluated among three groups: the DLS-assisted junior ophthalmologist group (test group), the junior ophthalmologist group (control group) and the DLS group. The diagnostic consistency was 84.9% (95% CI, 83.0% ~ 86.9%), 72.9% (95% CI, 70.3% ~ 75.6%) and 85.5% (95% CI, 83.5% ~ 87.4%) in the test group, control group and DLS group, respectively. With the help of the proposed DLS, the diagnostic consistency of junior ophthalmologists improved by approximately 12% (95% CI, 9.1% ~ 14.9%), a statistically significant gain (P < 0.001). For the detection of the 13 diseases, the test group achieved significantly higher sensitivities (72.2% ~ 100.0%) and comparable specificities (90.8% ~ 98.7%) compared with the control group (sensitivities, 50.0% ~ 100.0%; specificities, 96.7% ~ 99.8%). The DLS group presented similar performance to the test group in the detection of any fundus abnormality (sensitivity, 95.7%; specificity, 87.2%) and each of the 13 diseases (sensitivity, 83.3% ~ 100.0%; specificity, 89.0% ~ 98.0%). The proposed DLS provides a novel approach for the automatic detection of 13 major fundus diseases with high diagnostic consistency and helped improve the performance of junior ophthalmologists, especially by reducing the risk of missed diagnoses. ClinicalTrials.gov NCT04723160.
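The headline numbers in this trial, diagnostic consistency with a 95% CI plus per-disease sensitivity and specificity, come from standard confusion-matrix arithmetic. The sketch below uses a normal-approximation (Wald) interval for the consistency proportion; the trial itself may have used a different interval method, so treat this as illustrative.

```python
import math

def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

def proportion_ci(successes, total, z=1.96):
    """Point estimate and Wald 95% CI for a proportion,
    e.g. diagnostic consistency = agreements / total cases."""
    p = successes / total
    se = math.sqrt(p * (1 - p) / total)
    return p, (p - z * se, p + z * se)
```

For example, a disease with 90 true positives, 10 false negatives, 95 true negatives and 5 false positives yields sensitivity 0.90 and specificity 0.95.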
Affiliation(s)
- Bing Li
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
| | - Huan Chen
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
| | - Weihong Yu
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China
- Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
| | - Ming Zhang
- Department of Ophthalmology, West China Hospital, Sichuan University, Chengdu, China
| | - Fang Lu
- Department of Ophthalmology, West China Hospital, Sichuan University, Chengdu, China
| | - Jingxue Ma
- Department of Ophthalmology, Second Hospital of Hebei Medical University, Shijiazhuang, China
| | - Yuhua Hao
- Department of Ophthalmology, Second Hospital of Hebei Medical University, Shijiazhuang, China
| | - Xiaorong Li
- Department of Retina, Tianjin Medical University Eye Hospital, Tianjin, China
| | - Bojie Hu
- Department of Retina, Tianjin Medical University Eye Hospital, Tianjin, China
| | - Lijun Shen
- Department of Retina Center, Affiliated Eye Hospital of Wenzhou Medical University, Hangzhou, Zhejiang Province, China
| | - Jianbo Mao
- Department of Retina Center, Affiliated Eye Hospital of Wenzhou Medical University, Hangzhou, Zhejiang Province, China
| | - Xixi He
- School of Information Science and Technology, North China University of Technology, Beijing, China
- Beijing Key Laboratory on Integration and Analysis of Large-scale Stream Data, Beijing, China
| | - Hao Wang
- Visionary Intelligence Ltd., Beijing, China
| | | | - Xirong Li
- MoE Key Lab of DEKE, Renmin University of China, Beijing, China
| | - Youxin Chen
- Department of Ophthalmology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, China.
- Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China.
12
Fujinami-Yokokawa Y, Joo K, Liu X, Tsunoda K, Kondo M, Ahn SJ, Robson AG, Naka I, Ohashi J, Li H, Yang L, Arno G, Pontikos N, Park KH, Michaelides M, Tachimori H, Miyata H, Sui R, Woo SJ, Fujinami K. Distinct Clinical Effects of Two RP1L1 Hotspots in East Asian Patients With Occult Macular Dystrophy (Miyake Disease): EAOMD Report 4. Invest Ophthalmol Vis Sci 2024; 65:41. [PMID: 38265784 PMCID: PMC10810149 DOI: 10.1167/iovs.65.1.41] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2023] [Accepted: 12/20/2023] [Indexed: 01/25/2024] Open
Abstract
Purpose To characterize the clinical effects of two RP1L1 hotspots in patients with East Asian occult macular dystrophy (OMD). Methods Fifty-one patients diagnosed with OMD harboring monoallelic pathogenic RP1L1 variants (Miyake disease) from Japan, South Korea, and China were enrolled. Patients were classified into two genotype groups: group A, p.R45W, and group B, missense variants located between amino acids (aa) 1196 and 1201. The clinical parameters of the two genotypes were compared, and deep learning based on spectral-domain optical coherence tomographic (SD-OCT) images was used to distinguish the morphologic differences. Results Groups A and B included 29 and 22 patients, respectively. The median age of onset in groups A and B was 14.0 and 40.0 years, respectively. The median logMAR visual acuity of groups A and B was 0.70 and 0.51, respectively, and the survival curve analysis revealed a 15-year difference in vision loss (logMAR 0.22). A statistically significant difference was observed in the visual field classification, but no significant difference was found in the multifocal electroretinographic classification. High accuracy (75.4%) was achieved in classifying genotype groups based on SD-OCT images using machine learning. Conclusions Distinct clinical severities and morphologic phenotypes supported by artificial intelligence-based classification were derived from the two investigated RP1L1 hotspots: a more severe phenotype (p.R45W) and a milder phenotype (1196-1201 aa). This newly identified genotype-phenotype association will be valuable for medical care and the design of therapeutic trials.
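The 15-year difference in time to vision loss reported above comes from survival-curve analysis. A compact Kaplan-Meier estimator is written out below purely for illustration (the abstract does not specify the study's statistical tooling); applied per genotype group with "event" = reaching logMAR 0.22, the two curves can then be compared.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.

    times: time to event or censoring for each subject.
    events: 1 = event observed (e.g., vision loss reached), 0 = censored.
    Returns a list of (event time, survival probability) step points.
    """
    data = sorted(zip(times, events))
    n = len(data)
    survival = 1.0
    at_risk = n
    curve = []
    i = 0
    while i < n:
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e)
        total_at_t = sum(1 for tt, _ in data if tt == t)
        if deaths:
            survival *= 1.0 - deaths / at_risk  # KM product-limit step
            curve.append((t, survival))
        at_risk -= total_at_t  # both events and censorings leave the risk set
        while i < n and data[i][0] == t:
            i += 1
    return curve
```

In practice a package such as `lifelines` would be used; the sketch only makes the product-limit mechanics explicit.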
Affiliation(s)
- Yu Fujinami-Yokokawa
- Department of Health Policy and Management, Keio University School of Medicine, Tokyo, Japan
- Laboratory of Visual Physiology, Division of Vision Research, National Institute of Sensory Organs, NHO Tokyo Medical Center, Tokyo, Japan
- UCL Institute of Ophthalmology, London, United Kingdom
- Division of Public Health, Yokokawa Clinic, Suita, Japan
| | - Kwangsic Joo
- Department of Ophthalmology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Republic of Korea
| | - Xiao Liu
- Laboratory of Visual Physiology, Division of Vision Research, National Institute of Sensory Organs, NHO Tokyo Medical Center, Tokyo, Japan
- Southwest Hospital, Army Medical University, Chongqing, China
- Key Lab of Visual Damage and Regeneration & Restoration of Chongqing, Chongqing, China
| | - Kazushige Tsunoda
- Division of Vision Research, National Institute of Sensory Organs, NHO Tokyo Medical Center, Tokyo, Japan
| | - Mineo Kondo
- Department of Ophthalmology, Mie University Graduate School of Medicine, Mie, Japan
| | - Seong Joon Ahn
- Department of Ophthalmology, Hanyang University Hospital, Hanyang University College of Medicine, Seoul, Republic of Korea
| | - Anthony G. Robson
- UCL Institute of Ophthalmology, London, United Kingdom
- Moorfields Eye Hospital, London, United Kingdom
| | - Izumi Naka
- Department of Biological Sciences, Graduate School of Science, The University of Tokyo, Tokyo, Japan
| | - Jun Ohashi
- Department of Biological Sciences, Graduate School of Science, The University of Tokyo, Tokyo, Japan
| | - Hui Li
- Department of Ophthalmology, Peking Union Medical College Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing, China
| | - Lizhu Yang
- Department of Ophthalmology, Peking Union Medical College Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing, China
| | - Gavin Arno
- Laboratory of Visual Physiology, Division of Vision Research, National Institute of Sensory Organs, NHO Tokyo Medical Center, Tokyo, Japan
- UCL Institute of Ophthalmology, London, United Kingdom
- Moorfields Eye Hospital, London, United Kingdom
| | - Nikolas Pontikos
- Laboratory of Visual Physiology, Division of Vision Research, National Institute of Sensory Organs, NHO Tokyo Medical Center, Tokyo, Japan
- UCL Institute of Ophthalmology, London, United Kingdom
- Moorfields Eye Hospital, London, United Kingdom
| | - Kyu Hyung Park
- Department of Ophthalmology, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Republic of Korea
| | - Michel Michaelides
- Laboratory of Visual Physiology, Division of Vision Research, National Institute of Sensory Organs, NHO Tokyo Medical Center, Tokyo, Japan
- UCL Institute of Ophthalmology, London, United Kingdom
- Moorfields Eye Hospital, London, United Kingdom
| | - Hisateru Tachimori
- Endowed Course for Health System Innovation, Keio University School of Medicine, Tokyo, Japan
| | - Hiroaki Miyata
- Department of Health Policy and Management, Keio University School of Medicine, Tokyo, Japan
| | - Ruifang Sui
- Department of Ophthalmology, Peking Union Medical College Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing, China
| | - Se Joon Woo
- Department of Ophthalmology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Republic of Korea
| | - Kaoru Fujinami
- Laboratory of Visual Physiology, Division of Vision Research, National Institute of Sensory Organs, NHO Tokyo Medical Center, Tokyo, Japan
- UCL Institute of Ophthalmology, London, United Kingdom
- Moorfields Eye Hospital, London, United Kingdom
| | - for the East Asia Inherited Retinal Disease Society Study Group*
- Department of Health Policy and Management, Keio University School of Medicine, Tokyo, Japan
- Laboratory of Visual Physiology, Division of Vision Research, National Institute of Sensory Organs, NHO Tokyo Medical Center, Tokyo, Japan
- UCL Institute of Ophthalmology, London, United Kingdom
- Division of Public Health, Yokokawa Clinic, Suita, Japan
- Department of Ophthalmology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Republic of Korea
- Southwest Hospital, Army Medical University, Chongqing, China
- Key Lab of Visual Damage and Regeneration & Restoration of Chongqing, Chongqing, China
- Division of Vision Research, National Institute of Sensory Organs, NHO Tokyo Medical Center, Tokyo, Japan
- Department of Ophthalmology, Mie University Graduate School of Medicine, Mie, Japan
- Department of Ophthalmology, Hanyang University Hospital, Hanyang University College of Medicine, Seoul, Republic of Korea
- Moorfields Eye Hospital, London, United Kingdom
- Department of Biological Sciences, Graduate School of Science, The University of Tokyo, Tokyo, Japan
- Department of Ophthalmology, Peking Union Medical College Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing, China
- Department of Ophthalmology, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Republic of Korea
- Endowed Course for Health System Innovation, Keio University School of Medicine, Tokyo, Japan
13
Peng Z, Ma R, Zhang Y, Yan M, Lu J, Cheng Q, Liao J, Zhang Y, Wang J, Zhao Y, Zhu J, Qin B, Jiang Q, Shi F, Qian J, Chen X, Zhao C. Development and evaluation of multimodal AI for diagnosis and triage of ophthalmic diseases using ChatGPT and anterior segment images: protocol for a two-stage cross-sectional study. Front Artif Intell 2023; 6:1323924. [PMID: 38145231 PMCID: PMC10748413 DOI: 10.3389/frai.2023.1323924] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2023] [Accepted: 11/22/2023] [Indexed: 12/26/2023] Open
Abstract
Introduction Artificial intelligence (AI) technology has made rapid progress in disease diagnosis and triage. In the field of ophthalmic diseases, image-based diagnosis has achieved high accuracy but still encounters limitations due to the lack of medical history. The emergence of ChatGPT enables human-computer interaction, allowing for the development of a multimodal AI system that integrates interactive text and image information. Objective To develop a multimodal AI system using ChatGPT and anterior segment images for diagnosing and triaging ophthalmic diseases, and to assess the AI system's performance through a two-stage cross-sectional study, starting with silent evaluation and followed by early clinical evaluation in outpatient clinics. Methods and analysis Our study will be conducted across three distinct centers in Shanghai, Nanjing, and Suqian. The development of the smartphone-based multimodal AI system will take place in Shanghai with the goal of achieving ≥90% sensitivity and ≥95% specificity for diagnosing and triaging ophthalmic diseases. The first stage of the cross-sectional study will explore the system's performance in Shanghai's outpatient clinics. Medical histories will be collected without patient interaction, and anterior segment images will be captured using slit lamp equipment. This stage aims for ≥85% sensitivity and ≥95% specificity with a sample size of 100 patients. The second stage will take place at three locations, with Shanghai serving as the internal validation dataset, and Nanjing and Suqian as the external validation datasets. Medical history will be collected through patient interviews, and anterior segment images will be captured via smartphone devices. An expert panel will establish reference standards and assess AI accuracy for diagnosis and triage throughout all stages. A one-vs.-rest strategy will be used for data analysis, and a post-hoc power calculation will be performed to evaluate the impact of disease types on AI performance. Discussion Our study may provide a user-friendly smartphone-based multimodal AI system for the diagnosis and triage of ophthalmic diseases. This innovative system may support early detection of ocular abnormalities, facilitate the establishment of a tiered healthcare system, and reduce the burden on tertiary facilities. Trial registration The study was registered on ClinicalTrials.gov on June 25th, 2023 (NCT05930444).
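The planned one-vs.-rest analysis treats each diagnosis as a binary problem against all other classes, yielding per-class sensitivity and specificity. A minimal sketch of that bookkeeping (class labels here are hypothetical placeholders, not the protocol's disease list):

```python
def one_vs_rest_metrics(y_true, y_pred, classes):
    """Per-class sensitivity and specificity under a one-vs.-rest split.

    For each class c, the positive set is {samples with true label c}
    and the negative set is everything else.
    """
    out = {}
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        tn = sum(1 for t, p in zip(y_true, y_pred) if t != c and p != c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        sens = tp / (tp + fn) if tp + fn else float('nan')
        spec = tn / (tn + fp) if tn + fp else float('nan')
        out[c] = (sens, spec)
    return out
```

Each class thus gets its own confusion matrix, which is what makes per-disease sensitivity/specificity targets (e.g., ≥90%/≥95%) checkable.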
Affiliation(s)
- Zhiyu Peng
- Department of Ophthalmology, Fudan Eye & ENT Hospital, Shanghai, China
- Department of Ophthalmology, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- NHC Key Laboratory of Myopia, Fudan University, Shanghai, China
| | - Ruiqi Ma
- Department of Ophthalmology, Fudan Eye & ENT Hospital, Shanghai, China
- Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- NHC Key Laboratory of Myopia, Fudan University, Shanghai, China
| | - Yihan Zhang
- Department of Ophthalmology, Fudan Eye & ENT Hospital, Shanghai, China
- Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- NHC Key Laboratory of Myopia, Fudan University, Shanghai, China
| | - Mingxu Yan
- Department of Ophthalmology, Fudan Eye & ENT Hospital, Shanghai, China
- Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- NHC Key Laboratory of Myopia, Fudan University, Shanghai, China
- School of Basic Medical Sciences, Fudan University, Shanghai, China
| | - Jie Lu
- Department of Ophthalmology, Fudan Eye & ENT Hospital, Shanghai, China
- Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- NHC Key Laboratory of Myopia, Fudan University, Shanghai, China
- School of Public Health, Fudan University, Shanghai, China
| | - Qian Cheng
- Medical Image Processing, Analysis, and Visualization (MIVAP) Lab, School of Electronics and Information Engineering, Soochow University, Suzhou, China
| | - Jingjing Liao
- Medical Image Processing, Analysis, and Visualization (MIVAP) Lab, School of Electronics and Information Engineering, Soochow University, Suzhou, China
| | - Yunqiu Zhang
- School of Public Health, Fudan University, Shanghai, China
| | - Jinghan Wang
- Department of Ophthalmology, Fudan Eye & ENT Hospital, Shanghai, China
- Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- NHC Key Laboratory of Myopia, Fudan University, Shanghai, China
| | - Yue Zhao
- The Affiliated Eye Hospital, Nanjing Medical University, Nanjing, China
| | - Jiang Zhu
- Department of Ophthalmology, Suqian First Hospital, Suqian, China
| | - Bing Qin
- Department of Ophthalmology, Suqian First Hospital, Suqian, China
| | - Qin Jiang
- The Affiliated Eye Hospital, Nanjing Medical University, Nanjing, China
- The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
| | - Fei Shi
- Medical Image Processing, Analysis, and Visualization (MIVAP) Lab, School of Electronics and Information Engineering, Soochow University, Suzhou, China
| | - Jiang Qian
- Department of Ophthalmology, Fudan Eye & ENT Hospital, Shanghai, China
- Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- NHC Key Laboratory of Myopia, Fudan University, Shanghai, China
| | - Xinjian Chen
- Medical Image Processing, Analysis, and Visualization (MIVAP) Lab, School of Electronics and Information Engineering, Soochow University, Suzhou, China
- State Key Laboratory of Radiation Medicine and Protection, Soochow University, Suzhou, China
| | - Chen Zhao
- Department of Ophthalmology, Fudan Eye & ENT Hospital, Shanghai, China
- Laboratory of Myopia, Chinese Academy of Medical Sciences, Shanghai, China
- NHC Key Laboratory of Myopia, Fudan University, Shanghai, China
14
Yamashita T, Asaoka R, Terasaki H, Yoshihara N, Kakiuchi N, Sakamoto T. Three-year changes in sex judgment using color fundus parameters in elementary school students. PLoS One 2023; 18:e0295123. [PMID: 38033010 PMCID: PMC10688721 DOI: 10.1371/journal.pone.0295123] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2023] [Accepted: 11/14/2023] [Indexed: 12/02/2023] Open
Abstract
PURPOSE In a previous cross-sectional study, we reported that the sexes can be distinguished using known factors obtained from color fundus photography (CFP). However, it is not clear how sex differences in fundus parameters emerge across the human lifespan. Therefore, we conducted a cohort study to investigate sex determination based on fundus parameters in elementary school students. METHODS This prospective observational longitudinal study investigated 109 right eyes of elementary school students across four annual examinations (ages 8.5 to 11.5 years). From each CFP, the tessellation fundus index was calculated as R/(R+G+B), using the mean red-green-blue intensities at eight locations around the optic disc and macular region. Optic disc area, ovality ratio, papillomacular angle, and retinal vessel angles and distances were quantified according to the data in our previous report. Using 54 fundus parameters, sex was predicted by L2-regularized binomial logistic regression for each grade. RESULTS The right eyes of 53 boys and 56 girls were analyzed. The discrimination accuracy significantly increased with age: 56.3% at 8.5 years, 46.1% at 9.5 years, 65.5% at 10.5 years and 73.1% at 11.5 years. CONCLUSIONS The accuracy of sex discrimination by fundus photography improved over this 3-year cohort study of elementary school students.
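The classifier above is L2-regularized binomial logistic regression over 54 fundus parameters. As a toy illustration of the technique (plain gradient descent on a single feature; the study presumably used a standard statistics package rather than hand-rolled code):

```python
import math

def train_l2_logreg(X, y, lam=0.01, lr=0.5, epochs=2000):
    """Binary logistic regression with an L2 penalty, fit by gradient descent.

    Loss = mean cross-entropy + (lam/2) * ||w||^2  (bias unpenalized).
    X: list of feature vectors; y: list of 0/1 labels.
    """
    n, d = len(X), len(X[0])
    w = [0.0] * d
    b = 0.0
    for _ in range(epochs):
        gw = [lam * wj for wj in w]  # gradient of the L2 penalty term
        gb = 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid
            for j in range(d):
                gw[j] += (p - yi) * xi[j] / n
            gb += (p - yi) / n
        w = [wj - lr * gj for wj, gj in zip(w, gw)]
        b -= lr * gb
    return w, b

def predict(w, b, x):
    """Class 1 iff the linear score is positive (probability > 0.5)."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 if z > 0 else 0
```

The L2 penalty (`lam`) shrinks the 54 coefficients toward zero, which is what keeps a model with many correlated fundus parameters from overfitting a cohort of 109 eyes.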
Affiliation(s)
- Takehiro Yamashita
- Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima-shi, Kagoshima, Japan
| | - Ryo Asaoka
- Department of Ophthalmology, Seirei Hamamatsu General Hospital, Hamamatsu, Shizuoka, Japan
- School of Nursing, Seirei Christopher University, Hamamatsu, Shizuoka, Japan
- Nanovision Research Division, Research Institute of Electronics, Shizuoka University, Hamamatsu, Shizuoka, Japan
- The Graduate School for the Creation of New Photonics Industries, Hamamatsu, Shizuoka, Japan
| | - Hiroto Terasaki
- Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima-shi, Kagoshima, Japan
| | - Naoya Yoshihara
- Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima-shi, Kagoshima, Japan
| | - Naoko Kakiuchi
- Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima-shi, Kagoshima, Japan
| | - Taiji Sakamoto
- Department of Ophthalmology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima-shi, Kagoshima, Japan
15
Li L, Lin D, Lin Z, Li M, Lian Z, Zhao L, Wu X, Liu L, Liu J, Wei X, Luo M, Zeng D, Yan A, Iao WC, Shang Y, Xu F, Xiang W, He M, Fu Z, Wang X, Deng Y, Fan X, Ye Z, Wei M, Zhang J, Liu B, Li J, Ding X, Lin H. DeepQuality improves infant retinopathy screening. NPJ Digit Med 2023; 6:192. [PMID: 37845275 PMCID: PMC10579317 DOI: 10.1038/s41746-023-00943-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2023] [Accepted: 10/05/2023] [Indexed: 10/18/2023] Open
Abstract
Image quality variation is a prominent cause of performance degradation for intelligent disease diagnostic models in clinical applications. Quality issues are especially common in infantile fundus photography due to poor patient cooperation, which poses a high risk of misdiagnosis. Here, we developed a deep learning-based image quality assessment and enhancement system (DeepQuality) for infantile fundus images to improve infant retinopathy screening. DeepQuality can accurately detect various quality defects concerning integrity, illumination, and clarity, with area under the curve (AUC) values ranging from 0.933 to 0.995, and can also score the overall quality of each fundus photograph. By analyzing 2,015,758 infantile fundus photographs from real-world settings using DeepQuality, we found that 58.3% of them had quality defects of varying degrees, with large variations among different regions and categories of hospitals. Additionally, DeepQuality provides quality enhancement based on the results of quality assessment. After quality enhancement, clinicians' diagnostic performance for retinopathy of prematurity (ROP) improved significantly. Moreover, integrating DeepQuality with AI diagnostic models can effectively improve model performance for detecting ROP. This study may serve as an important reference for the future development of other image-based intelligent disease screening systems.
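The workflow described above, score image quality first, enhance substandard photographs, then diagnose, amounts to a gating pipeline. The sketch below shows only that control flow; every interface in it (`assess_quality`, `enhance`, `diagnose`, the 0.5 threshold) is a placeholder for illustration, not DeepQuality's actual API.

```python
def screen_with_quality_gate(images, assess_quality, enhance, diagnose,
                             threshold=0.5):
    """Quality-gated screening: enhance low-quality images before diagnosis.

    assess_quality: image -> quality score in [0, 1] (higher is better).
    enhance: image -> enhanced image (applied only below the threshold).
    diagnose: image -> diagnostic output.
    """
    results = []
    for img in images:
        if assess_quality(img) < threshold:
            img = enhance(img)  # route substandard photographs through enhancement
        results.append(diagnose(img))
    return results
```

The point of the gate is that the downstream diagnostic model only ever sees images at or above a usable quality floor, which is the mechanism the study credits for the improved ROP detection.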
Affiliation(s)
- Longhui Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Duoru Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China.
| | - Zhenzhe Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Mingyuan Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Zhangkai Lian
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Lanqin Zhao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Xiaohang Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Lixue Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Jiali Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Xiaoyue Wei
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Mingjie Luo
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Danqi Zeng
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Anqi Yan
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Wai Cheng Iao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Yuanjun Shang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Fabao Xu
- Department of Ophthalmology, Qilu Hospital, Shandong University, Jinan, Shandong, China
| | - Wei Xiang
- Department of Clinical Laboratory Medicine, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
| | - Muchen He
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Zhe Fu
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Xueyu Wang
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Yaru Deng
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Xinyan Fan
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Zhijun Ye
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Meirong Wei
- Department of Ophthalmology, Maternal and Children's Hospital, Liuzhou, Guangxi, China
| | - Jianping Zhang
- Department of Ophthalmology, Maternal and Children's Hospital, Liuzhou, Guangxi, China
| | - Baohai Liu
- Department of Ophthalmology, Maternal and Children's Hospital, Linyi, Shandong, China
| | - Jianqiao Li
- Department of Ophthalmology, Qilu Hospital, Shandong University, Jinan, Shandong, China
| | - Xiaoyan Ding
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
| | - Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China.
- Hainan Eye Hospital and Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Haikou, Hainan, China.
- Center for Precision Medicine and Department of Genetics and Biomedical Informatics, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China.
16
An L, Qin J, Jiang W, Luo P, Luo X, Lai Y, Jin M. Non-invasive and accurate risk evaluation of cerebrovascular disease using retinal fundus photo based on deep learning. Front Neurol 2023; 14:1257388. [PMID: 37745652 PMCID: PMC10513168 DOI: 10.3389/fneur.2023.1257388] [Received: 07/12/2023] [Accepted: 08/25/2023] [Indexed: 09/26/2023] Open
Abstract
Background: Cerebrovascular disease (CeVD) is a prominent contributor to global mortality and profound disability. Extensive research has unveiled a connection between CeVD and retinal microvascular abnormalities. Nonetheless, manual analysis of fundus images remains a laborious and time-consuming task. Consequently, our objective was to develop a risk prediction model that uses retinal fundus photographs to noninvasively and accurately assess cerebrovascular risk.
Materials and methods: To leverage retinal fundus photographs for CeVD risk evaluation, we proposed a novel model called Efficient-Attention, which combines a convolutional neural network with an attention mechanism. This combination aims to reinforce the salient features present in fundus photographs, thereby improving the accuracy and effectiveness of cerebrovascular risk assessment.
Results: Our proposed model demonstrates notable advancements over the conventional ResNet and EfficientNet architectures. The accuracy (ACC) of our model is 0.834 ± 0.03, surpassing EfficientNet by a margin of 3.6%. Additionally, our model exhibits an improved area under the receiver operating characteristic curve (AUC) of 0.904 ± 0.02, surpassing other methods by a margin of 2.2%.
Conclusion: This paper provides compelling evidence that the Efficient-Attention method can serve as an effective and accurate tool for cerebrovascular risk assessment. The results strongly support the notion that retinal fundus photographs hold great potential as a reliable predictor of CeVD, offering a noninvasive, convenient, and low-cost solution for large-scale screening of CeVD.
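The authors' Efficient-Attention implementation is not included here; as a hypothetical sketch of the general idea it describes (a convolutional feature map reweighted by a learned channel-attention gate, in the style of squeeze-and-excitation), with all shapes and weights invented:

```python
import numpy as np

def channel_attention(feature_map, w1, w2):
    """Squeeze-and-excitation-style channel attention (illustrative only).

    feature_map: (C, H, W) convolutional features
    w1: (C//r, C), w2: (C, C//r) -- invented projection weights
    Returns the feature map reweighted per channel by a sigmoid gate.
    """
    squeeze = feature_map.mean(axis=(1, 2))        # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)         # ReLU bottleneck
    scale = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # sigmoid gate in (0, 1)
    return feature_map * scale[:, None, None]      # per-channel reweighting

rng = np.random.default_rng(0)
fmap = rng.standard_normal((8, 4, 4))              # toy feature map
w1 = rng.standard_normal((2, 8)) * 0.1
w2 = rng.standard_normal((8, 2)) * 0.1
out = channel_attention(fmap, w1, w2)
```

In a real network the gate weights are learned end-to-end; here they merely illustrate how attention emphasizes some channels over others.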
Affiliation(s)
- Lin An
- Guangdong Weiren Meditech Co., Ltd, Foshan, Guangdong, China
| | - Jia Qin
- Guangdong Weiren Meditech Co., Ltd, Foshan, Guangdong, China
| | - Weili Jiang
- Foshan Weizhi Meditech Co., Ltd, Foshan, Guangdong, China
| | - Penghao Luo
- Foshan Weizhi Meditech Co., Ltd, Foshan, Guangdong, China
| | - Xiaoyan Luo
- Department of Ophthalmology, Guangdong Provincial Hospital of Integrated Traditional Chinese and Western Medicine, Foshan, Guangdong, China
| | - Yuzheng Lai
- Department of Neurology, Guangdong Provincial Hospital of Integrated Traditional Chinese and Western Medicine, Foshan, Guangdong, China
| | - Mei Jin
- Department of Ophthalmology, Guangdong Provincial Hospital of Integrated Traditional Chinese and Western Medicine, Foshan, Guangdong, China
17
Danese C, Kale AU, Aslam T, Lanzetta P, Barratt J, Chou YB, Eldem B, Eter N, Gale R, Korobelnik JF, Kozak I, Li X, Li X, Loewenstein A, Ruamviboonsuk P, Sakamoto T, Ting DS, van Wijngaarden P, Waldstein SM, Wong D, Wu L, Zapata MA, Zarranz-Ventura J. The impact of artificial intelligence on retinal disease management: Vision Academy retinal expert consensus. Curr Opin Ophthalmol 2023; 34:396-402. [PMID: 37326216 PMCID: PMC10399953 DOI: 10.1097/icu.0000000000000980] [Indexed: 06/17/2023]
Abstract
PURPOSE OF REVIEW: The aim of this review is to define the "state-of-the-art" in artificial intelligence (AI)-enabled devices that support the management of retinal conditions and to provide Vision Academy recommendations on the topic.
RECENT FINDINGS: Most of the AI models described in the literature have not been approved for disease management purposes by regulatory authorities. These new technologies are promising as they may be able to provide personalized treatments as well as a personalized risk score for various retinal diseases. However, several issues still need to be addressed, such as the lack of a common regulatory pathway and a lack of clarity regarding the applicability of AI-enabled medical devices in different populations.
SUMMARY: It is likely that current clinical practice will need to change following the application of AI-enabled medical devices. These devices are likely to have an impact on the management of retinal disease. However, a consensus needs to be reached to ensure they are safe and effective for the overall population.
Affiliation(s)
- Carla Danese
- Department of Medicine – Ophthalmology, University of Udine, Udine, Italy
- Department of Ophthalmology, AP-HP Hôpital Lariboisière, Université Paris Cité, Paris, France
| | - Aditya U. Kale
- Academic Unit of Ophthalmology, Institute of Inflammation & Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham
| | - Tariq Aslam
- Division of Pharmacy and Optometry, Faculty of Biology, Medicine and Health, University of Manchester School of Health Sciences, Manchester, UK
| | - Paolo Lanzetta
- Department of Medicine – Ophthalmology, University of Udine, Udine, Italy
- Istituto Europeo di Microchirurgia Oculare, Udine, Italy
| | - Jane Barratt
- International Federation on Ageing, Toronto, Canada
| | - Yu-Bai Chou
- Department of Ophthalmology, Taipei Veterans General Hospital
- School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
| | - Bora Eldem
- Department of Ophthalmology, Hacettepe University, Ankara, Turkey
| | - Nicole Eter
- Department of Ophthalmology, University of Münster Medical Center, Münster, Germany
| | - Richard Gale
- Department of Ophthalmology, York Teaching Hospital NHS Foundation Trust, York, UK
| | - Jean-François Korobelnik
- Service d’ophtalmologie, CHU Bordeaux
- University of Bordeaux, INSERM, BPH, UMR1219, F-33000 Bordeaux, France
| | - Igor Kozak
- Moorfields Eye Hospital Centre, Abu Dhabi, UAE
| | - Xiaorong Li
- Tianjin Key Laboratory of Retinal Functions and Diseases, Tianjin Branch of National Clinical Research Center for Ocular Disease, Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, Tianjin
| | - Xiaoxin Li
- Xiamen Eye Center, Xiamen University, Xiamen, China
| | - Anat Loewenstein
- Division of Ophthalmology, Tel Aviv Sourasky Medical Center, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
| | - Paisan Ruamviboonsuk
- Department of Ophthalmology, College of Medicine, Rangsit University, Rajavithi Hospital, Bangkok, Thailand
| | - Taiji Sakamoto
- Department of Ophthalmology, Kagoshima University, Kagoshima, Japan
| | - Daniel S.W. Ting
- Singapore National Eye Center, Duke-NUS Medical School, Singapore
| | - Peter van Wijngaarden
- Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Australia
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
| | | | - David Wong
- Unity Health Toronto – St. Michael's Hospital, University of Toronto, Toronto, Canada
| | - Lihteh Wu
- Macula, Vitreous and Retina Associates of Costa Rica, San José, Costa Rica
18
Chou YB, Kale AU, Lanzetta P, Aslam T, Barratt J, Danese C, Eldem B, Eter N, Gale R, Korobelnik JF, Kozak I, Li X, Li X, Loewenstein A, Ruamviboonsuk P, Sakamoto T, Ting DS, van Wijngaarden P, Waldstein SM, Wong D, Wu L, Zapata MA, Zarranz-Ventura J. Current status and practical considerations of artificial intelligence use in screening and diagnosing retinal diseases: Vision Academy retinal expert consensus. Curr Opin Ophthalmol 2023; 34:403-413. [PMID: 37326222 PMCID: PMC10399944 DOI: 10.1097/icu.0000000000000979] [Indexed: 06/17/2023]
Abstract
PURPOSE OF REVIEW: The application of artificial intelligence (AI) technologies in screening and diagnosing retinal diseases may play an important role in telemedicine and has potential to shape modern healthcare ecosystems, including within ophthalmology.
RECENT FINDINGS: In this article, we examine the latest publications relevant to AI in retinal disease and discuss the currently available algorithms. We summarize four key requirements underlining the successful application of AI algorithms in real-world practice: processing massive data; practicability of an AI model in ophthalmology; policy compliance and the regulatory environment; and balancing profit and cost when developing and maintaining AI models.
SUMMARY: The Vision Academy recognizes the advantages and disadvantages of AI-based technologies and gives insightful recommendations for future directions.
Affiliation(s)
- Yu-Bai Chou
- Department of Ophthalmology, Taipei Veterans General Hospital
- School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
| | - Aditya U. Kale
- Academic Unit of Ophthalmology, Institute of Inflammation & Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
| | - Paolo Lanzetta
- Department of Medicine – Ophthalmology, University of Udine
- Istituto Europeo di Microchirurgia Oculare, Udine, Italy
| | - Tariq Aslam
- Division of Pharmacy and Optometry, Faculty of Biology, Medicine and Health, University of Manchester School of Health Sciences, Manchester, UK
| | - Jane Barratt
- International Federation on Ageing, Toronto, Canada
| | - Carla Danese
- Department of Medicine – Ophthalmology, University of Udine
- Department of Ophthalmology, AP-HP Hôpital Lariboisière, Université Paris Cité, Paris, France
| | - Bora Eldem
- Department of Ophthalmology, Hacettepe University, Ankara, Turkey
| | - Nicole Eter
- Department of Ophthalmology, University of Münster Medical Center, Münster, Germany
| | - Richard Gale
- Department of Ophthalmology, York Teaching Hospital NHS Foundation Trust, York, UK
| | - Jean-François Korobelnik
- Service d’ophtalmologie, CHU Bordeaux
- University of Bordeaux, INSERM, BPH, UMR1219, F-33000 Bordeaux, France
| | - Igor Kozak
- Moorfields Eye Hospital Centre, Abu Dhabi, UAE
| | - Xiaorong Li
- Tianjin Key Laboratory of Retinal Functions and Diseases, Tianjin Branch of National Clinical Research Center for Ocular Disease, Eye Institute and School of Optometry, Tianjin Medical University Eye Hospital, Tianjin
| | - Xiaoxin Li
- Xiamen Eye Center, Xiamen University, Xiamen, China
| | - Anat Loewenstein
- Division of Ophthalmology, Tel Aviv Sourasky Medical Center, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
| | - Paisan Ruamviboonsuk
- Department of Ophthalmology, College of Medicine, Rangsit University, Rajavithi Hospital, Bangkok, Thailand
| | - Taiji Sakamoto
- Department of Ophthalmology, Kagoshima University, Kagoshima, Japan
| | - Daniel S.W. Ting
- Singapore National Eye Center, Duke-NUS Medical School, Singapore
| | - Peter van Wijngaarden
- Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Australia
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, Victoria, Australia
| | | | - David Wong
- Unity Health Toronto – St. Michael's Hospital, University of Toronto, Toronto, Canada
| | - Lihteh Wu
- Macula, Vitreous and Retina Associates of Costa Rica, San José, Costa Rica
19
Hadi MU, Qureshi R, Ahmed A, Iftikhar N. A lightweight CORONA-NET for COVID-19 detection in X-ray images. Expert Systems with Applications 2023; 225:120023. [PMID: 37063778 PMCID: PMC10088342 DOI: 10.1016/j.eswa.2023.120023] [Received: 10/08/2022] [Revised: 03/28/2023] [Accepted: 03/31/2023] [Indexed: 06/19/2023]
Abstract
Since December 2019, COVID-19 has posed a serious global health threat. With the advancement of vaccination programs around the globe, the need to quickly diagnose COVID-19 with minimal logistics has become all the more important. Consequently, an automated detection system offers the fastest diagnostic option to stop COVID-19 from spreading, especially among senior patients. This study provides a lightweight deep learning method, called CORONA-NET, that incorporates a convolutional neural network (CNN), the discrete wavelet transform (DWT), and a long short-term memory (LSTM) network for diagnosing COVID-19 from chest X-ray images. In this system, deep feature extraction is performed by the CNN, the feature vector is reduced yet strengthened by the DWT, and the resulting features are classified by the LSTM for prediction. The dataset included 3000 X-rays, 1000 of which were COVID-19 cases obtained locally. Within minutes of the test, the proposed platform's prototype can accurately detect COVID-19 patients. The proposed method achieves state-of-the-art performance in comparison with existing deep learning methods. We hope that the suggested method will hasten clinical diagnosis and may be used for patients in remote areas where clinical labs are not easily accessible due to a lack of resources, location, or other factors.
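The exact CORONA-NET pipeline is not reproduced here, but the role the abstract assigns to the DWT stage (shrinking a CNN feature vector while preserving its coarse structure before the LSTM) can be illustrated with a single level of the Haar transform; the feature values below are invented:

```python
import numpy as np

def haar_dwt_level(x):
    """One level of the Haar discrete wavelet transform.

    Returns (approximation, detail) coefficients. Keeping only the
    approximation halves the feature vector while retaining its coarse
    shape -- an illustrative stand-in for the DWT reduction step.
    """
    x = np.asarray(x, dtype=float)
    if x.size % 2:                       # pad to even length
        x = np.append(x, 0.0)
    pairs = x.reshape(-1, 2)
    approx = pairs.sum(axis=1) / np.sqrt(2.0)            # local averages
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)  # local differences
    return approx, detail

features = np.array([4.0, 2.0, 5.0, 5.0, 1.0, 3.0])  # toy CNN feature vector
approx, detail = haar_dwt_level(features)            # 6 values -> 3 + 3
```

A real pipeline would typically use a wavelet library (e.g., PyWavelets) and feed the reduced vector to the LSTM classifier.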
Affiliation(s)
- Muhammad Usman Hadi
- Nanotechnology and Integrated Bio-Engineering Centre (NIBEC), School of Engineering, Ulster University, BT15 1AP Belfast, UK
| | - Rizwan Qureshi
- Department of Imaging Physics, MD Anderson Cancer Center, The University of Texas, Houston, TX 77030, USA
| | - Ayesha Ahmed
- Department of Radiology, Aalborg University Hospital, Aalborg 9000, Denmark
| | - Nadeem Iftikhar
- University College of Northern Denmark, Aalborg 9200, Denmark
20
Matta S, Lamard M, Conze PH, Le Guilcher A, Lecat C, Carette R, Basset F, Massin P, Rottier JB, Cochener B, Quellec G. Towards population-independent, multi-disease detection in fundus photographs. Sci Rep 2023; 13:11493. [PMID: 37460629 DOI: 10.1038/s41598-023-38610-y] [Received: 06/21/2022] [Accepted: 07/11/2023] [Indexed: 07/20/2023] Open
Abstract
Independent validation studies of automatic diabetic retinopathy screening systems have recently shown a drop in screening performance on external data. Beyond diabetic retinopathy, this study investigates the generalizability of deep learning (DL) algorithms for screening various ocular anomalies in fundus photographs across heterogeneous populations and imaging protocols. The following datasets are considered: OPHDIAT (France, diabetic population), OphtaMaine (France, general population), RIADD (India, general population) and ODIR (China, general population). Two multi-disease DL algorithms were developed: a Single-Dataset (SD) network, trained on the largest dataset (OPHDIAT), and a Multiple-Dataset (MD) network, trained on multiple datasets simultaneously. To assess their generalizability, both algorithms were evaluated with training and test data originating either from overlapping datasets or from disjoint datasets. The SD network achieved a mean per-disease area under the receiver operating characteristic curve (mAUC) of 0.9571 on OPHDIAT. However, it generalized poorly to the other three datasets (mAUC < 0.9). When all four datasets were involved in training, the MD network significantly outperformed the SD network (p = 0.0058), indicating improved generalizability. However, in leave-one-dataset-out experiments, performance of the MD network was significantly lower on populations unseen during training than on populations involved in training (p < 0.0001), indicating imperfect generalizability.
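The leave-one-dataset-out protocol described above can be sketched as follows; `train` and `mean_auc` are stand-in placeholders, not the study's code, and the dataset names are taken from the abstract:

```python
# Leave-one-dataset-out evaluation skeleton: for each dataset, train the
# multi-dataset (MD) network on the other three and test on the held-out
# population, which was never seen during training.
DATASETS = ["OPHDIAT", "OphtaMaine", "RIADD", "ODIR"]

def train(datasets):
    """Placeholder: would fit the multi-disease network on `datasets`."""
    return {"trained_on": tuple(datasets)}

def mean_auc(model, dataset):
    """Placeholder: would compute the mean per-disease AUC (mAUC)."""
    return 0.5

results = {}
for held_out in DATASETS:
    train_sets = [d for d in DATASETS if d != held_out]
    model = train(train_sets)                      # MD network on 3 datasets
    results[held_out] = mean_auc(model, held_out)  # test on unseen population
```

Comparing `results` against in-distribution scores is what reveals the generalizability gap the abstract reports.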
Affiliation(s)
- Sarah Matta
- Université de Bretagne Occidentale, Brest, Bretagne, France.
- INSERM, UMR 1101, Brest, F-29 200, France.
| | - Mathieu Lamard
- Université de Bretagne Occidentale, Brest, Bretagne, France
- INSERM, UMR 1101, Brest, F-29 200, France
| | - Pierre-Henri Conze
- INSERM, UMR 1101, Brest, F-29 200, France
- IMT Atlantique, Brest, F-29200, France
| | | | - Clément Lecat
- Evolucare Technologies, Villers-Bretonneux, F-80800, France
| | | | - Fabien Basset
- Evolucare Technologies, Villers-Bretonneux, F-80800, France
| | - Pascale Massin
- Service d'Ophtalmologie, Hôpital Lariboisière, APHP, Paris, F-75475, France
| | - Jean-Bernard Rottier
- Bâtiment de consultation porte 14 Pôle Santé Sud CMCM, 28 Rue de Guetteloup, Le Mans, F-72100, France
| | - Béatrice Cochener
- Université de Bretagne Occidentale, Brest, Bretagne, France
- INSERM, UMR 1101, Brest, F-29 200, France
- Service d'Ophtalmologie, CHRU Brest, Brest, F-29200, France
21
Gomes RFT, Schuch LF, Martins MD, Honório EF, de Figueiredo RM, Schmith J, Machado GN, Carrard VC. Use of Deep Neural Networks in the Detection and Automated Classification of Lesions Using Clinical Images in Ophthalmology, Dermatology, and Oral Medicine-A Systematic Review. J Digit Imaging 2023; 36:1060-1070. [PMID: 36650299 PMCID: PMC10287602 DOI: 10.1007/s10278-023-00775-3] [Received: 09/21/2022] [Revised: 01/03/2023] [Accepted: 01/04/2023] [Indexed: 01/19/2023] Open
Abstract
Artificial neural networks (ANNs) are artificial intelligence (AI) techniques used in the automated recognition and classification of pathological changes in clinical images in areas such as ophthalmology, dermatology, and oral medicine. The combination of enterprise imaging and AI is gaining notoriety for its potential benefits in healthcare areas such as cardiology, dermatology, ophthalmology, pathology, physiatry, radiation oncology, radiology, and endoscopy. The present study aimed to analyze, through a systematic literature review, the performance of ANNs and deep learning in the recognition and automated classification of lesions from clinical images, compared with human performance. The PRISMA 2020 approach (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) was used, searching four databases for studies that reference the use of AI to diagnose lesions in ophthalmology, dermatology, and oral medicine. Quantitative and qualitative analyses of the articles that met the inclusion criteria were performed. The search yielded 60 included studies. Interest in the topic has increased, especially in the last 3 years. The performance of the AI models is promising, with high accuracy, sensitivity, and specificity, and most outcomes were equivalent to those of human comparators. The reproducibility of model performance in real-life practice has been reported as a critical point. Study designs and results have progressively improved. AI resources have the potential to contribute to several areas of health and, in the coming years, are likely to be incorporated into everyday practice, contributing to diagnostic precision and reducing the time required by the diagnostic process.
Affiliation(s)
- Rita Fabiane Teixeira Gomes
- Graduate Program in Dentistry, School of Dentistry, Federal University of Rio Grande Do Sul, Barcelos 2492/503, Bairro Santana, Porto Alegre, RS, CEP 90035-003, Brazil.
| | - Lauren Frenzel Schuch
- Department of Oral Diagnosis, Piracicaba Dental School, University of Campinas, Piracicaba, Brazil
| | - Manoela Domingues Martins
- Graduate Program in Dentistry, School of Dentistry, Federal University of Rio Grande Do Sul, Barcelos 2492/503, Bairro Santana, Porto Alegre, RS, CEP 90035-003, Brazil
- Department of Oral Diagnosis, Piracicaba Dental School, University of Campinas, Piracicaba, Brazil
| | | | - Rodrigo Marques de Figueiredo
- Technology in Automation and Electronics Laboratory - TECAE Lab, University of Vale Do Rio Dos Sinos - UNISINOS, São Leopoldo, Brazil
| | - Jean Schmith
- Technology in Automation and Electronics Laboratory - TECAE Lab, University of Vale Do Rio Dos Sinos - UNISINOS, São Leopoldo, Brazil
| | - Giovanna Nunes Machado
- Technology in Automation and Electronics Laboratory - TECAE Lab, University of Vale Do Rio Dos Sinos - UNISINOS, São Leopoldo, Brazil
| | - Vinicius Coelho Carrard
- Graduate Program in Dentistry, School of Dentistry, Federal University of Rio Grande Do Sul, Barcelos 2492/503, Bairro Santana, Porto Alegre, RS, CEP 90035-003, Brazil
- Department of Epidemiology, School of Medicine, TelessaúdeRS-UFRGS, Federal University of Rio Grande Do Sul, Porto Alegre, RS, Brazil
- Department of Oral Medicine, Otorhinolaryngology Service, Hospital de Clínicas de Porto Alegre (HCPA), Porto Alegre, RS, Brazil
22
Chłopowiec AR, Karanowski K, Skrzypczak T, Grzesiuk M, Chłopowiec AB, Tabakov M. Counteracting Data Bias and Class Imbalance-Towards a Useful and Reliable Retinal Disease Recognition System. Diagnostics (Basel) 2023; 13:1904. [PMID: 37296756 DOI: 10.3390/diagnostics13111904] [Received: 03/27/2023] [Revised: 05/22/2023] [Accepted: 05/25/2023] [Indexed: 06/12/2023] Open
Abstract
Multiple studies have presented satisfactory performance for the recognition of various ocular diseases. To date, however, no study has described a medically accurate multiclass model trained on a large, diverse dataset, and none has addressed the class imbalance problem in one giant dataset originating from multiple large, diverse eye fundus image collections. To approximate a real-life clinical environment and mitigate the problem of biased medical image data, 22 publicly available datasets were merged. To secure medical validity, only diabetic retinopathy (DR), age-related macular degeneration (AMD) and glaucoma (GL) were included. The state-of-the-art models ConvNeXt, RegNet and ResNet were utilized. The resulting dataset contained 86,415 normal, 3787 GL, 632 AMD and 34,379 DR fundus images. ConvNeXt-Tiny achieved the best results in recognizing most of the examined eye diseases across most metrics. The overall accuracy was 80.46 ± 1.48; specific accuracy values were 80.01 ± 1.10 for normal eye fundus, 97.20 ± 0.66 for GL, 98.14 ± 0.31 for AMD, and 80.66 ± 1.27 for DR. A suitable screening model for the most prevalent retinal diseases in ageing societies was designed. The model was developed on a diverse, combined large dataset, which makes the obtained results less biased and more generalizable.
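Given the class counts reported above (86,415 normal vs. only 632 AMD images), one common way to counteract such an imbalance is inverse-frequency class weighting; the paper does not specify the authors' exact scheme, so this is only an illustrative sketch:

```python
# Inverse-frequency class weights for the fundus-image counts above.
# Rare classes (AMD, GL) receive proportionally larger loss weights so
# the model is not dominated by the abundant "normal" class.
counts = {"normal": 86415, "GL": 3787, "AMD": 632, "DR": 34379}

n_samples = sum(counts.values())
n_classes = len(counts)
weights = {c: n_samples / (n_classes * n) for c, n in counts.items()}
# e.g. AMD gets a weight roughly 100x larger than the normal class
```

These weights would typically be passed to a weighted cross-entropy loss (or used to drive oversampling) during training.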
Affiliation(s)
- Adam R Chłopowiec
- Department of Artificial Intelligence, Wroclaw University of Science and Technology, Wybrzeże Wyspianskiego 27, 50-370 Wroclaw, Poland
| | - Konrad Karanowski
- Department of Artificial Intelligence, Wroclaw University of Science and Technology, Wybrzeże Wyspianskiego 27, 50-370 Wroclaw, Poland
| | - Tomasz Skrzypczak
- Faculty of Medicine, Wroclaw Medical University, Wybrzeże Ludwika Pasteura 1, 50-367 Wroclaw, Poland
| | - Mateusz Grzesiuk
- Department of Artificial Intelligence, Wroclaw University of Science and Technology, Wybrzeże Wyspianskiego 27, 50-370 Wroclaw, Poland
| | - Adrian B Chłopowiec
- Department of Artificial Intelligence, Wroclaw University of Science and Technology, Wybrzeże Wyspianskiego 27, 50-370 Wroclaw, Poland
| | - Martin Tabakov
- Department of Artificial Intelligence, Wroclaw University of Science and Technology, Wybrzeże Wyspianskiego 27, 50-370 Wroclaw, Poland
23
Wang Y, Jia X, Wei S, Li X. A deep learning model established for evaluating lid margin signs with colour anterior segment photography. Eye (Lond) 2023; 37:1377-1382. [PMID: 35739245 PMCID: PMC10170093 DOI: 10.1038/s41433-022-02088-1] [Received: 09/18/2021] [Revised: 03/30/2022] [Accepted: 05/04/2022] [Indexed: 11/09/2022] Open
Abstract
OBJECTIVES: To evaluate the feasibility of applying a deep learning model to identify lid margin signs from colour anterior segment photography.
METHODS: We collected a total of 832 colour anterior segment photographs from 428 dry eye patients. Eight lid margin signs were labelled by human ophthalmologists. Eight deep learning models were constructed based on VGGNet-13 and trained to identify the lid margin signs. Sensitivity, specificity, receiver operating characteristic (ROC) curves and area under the curve (AUC) were applied to evaluate the models.
RESULTS: The AUC for rounding of the posterior lid margin was 0.979, and the AUCs for lid margin irregularity and vascularization were 0.977 and 0.980, respectively. For hyperkeratinization, the AUC was 0.964. The AUCs for meibomian gland orifice (MGO) retroplacement and plugging were 0.963 and 0.968. For the mucocutaneous junction (MCJ) anteroplacement and retroplacement models, the AUCs were 0.950 and 0.978. The sensitivity and specificity for rounding of the posterior lid margin were 0.974 and 0.921. For irregularity, the sensitivity and specificity were 0.930 and 0.938, and those for vascularization were 0.923 and 0.961. The hyperkeratinization model achieved a sensitivity and specificity of 0.889 and 0.948. The models identifying MGO plugging and retroplacement achieved sensitivities of 0.979 and 0.909 with specificities of 0.867 and 0.967. The sensitivities for MCJ anteroplacement and retroplacement were 0.875 and 0.969, with specificities of 0.966 and 0.888.
CONCLUSIONS: The deep learning models could identify lid margin signs with high sensitivity and specificity. The study demonstrates the potential of applying artificial intelligence to lid margin evaluation to assist dry eye decision-making.
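The AUC used throughout these results can be computed directly from labels and model scores via its Mann-Whitney interpretation (the probability that a randomly chosen positive case outscores a randomly chosen negative one); the labels and scores below are toy values, not data from the study:

```python
def auc_from_scores(labels, scores):
    """AUC via the Mann-Whitney U statistic.

    Counts, over all positive/negative pairs, how often the positive
    case receives the higher score (ties count as half a win).
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]                # 1 = sign present, 0 = absent
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]    # toy model outputs
auc = auc_from_scores(labels, scores)
```

In practice one would use a library routine (e.g., scikit-learn's `roc_auc_score`), which agrees with this pairwise definition.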
Affiliation(s)
- Yuexin Wang
- Department of Ophthalmology, Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Peking University Third Hospital, Beijing, China
| | - Xingheng Jia
- School of Vehicle and Mobility, Tsinghua University, Beijing, China
| | - Shanshan Wei
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Beijing, China
| | - Xuemin Li
- Department of Ophthalmology, Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Peking University Third Hospital, Beijing, China.
24
Cao S, Zhang R, Jiang A, Kuerban M, Wumaier A, Wu J, Xie K, Aizezi M, Tuersun A, Liang X, Chen R. Application effect of an artificial intelligence-based fundus screening system: evaluation in a clinical setting and population screening. Biomed Eng Online 2023; 22:38. [PMID: 37095516 PMCID: PMC10127070 DOI: 10.1186/s12938-023-01097-9] [Received: 10/26/2022] [Accepted: 03/24/2023] [Indexed: 04/26/2023] Open
Abstract
BACKGROUND: To investigate the application effect of an artificial intelligence (AI)-based fundus screening system in a real-world clinical environment.
METHODS: A total of 637 colour fundus images were included in the analysis of the AI-based fundus screening system in the clinical environment, and 20,355 images were analyzed in the population screening.
RESULTS: The AI-based fundus screening system demonstrated superior diagnostic effectiveness for diabetic retinopathy (DR), retinal vein occlusion (RVO) and pathological myopia (PM) according to the gold-standard referral. The sensitivity, specificity, accuracy, positive predictive value (PPV) and negative predictive value (NPV) for these three fundus abnormalities were greater (all > 80%) than those for age-related macular degeneration (ARMD), referable glaucoma and other abnormalities. The percentages of the different diagnostic conditions were similar in both the clinical environment and the population screening.
CONCLUSIONS: In a real-world setting, our AI-based fundus screening system could detect 7 conditions, with better performance for DR, RVO and PM. Testing in the clinical environment and through population screening demonstrated the clinical utility of our AI-based fundus screening system in the early detection of ocular fundus abnormalities and the prevention of blindness.
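The metrics reported above follow directly from confusion-matrix counts; a minimal helper, with hypothetical counts rather than figures from the study:

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV and accuracy from
    confusion-matrix counts, as used to evaluate screening systems."""
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "PPV": tp / (tp + fp),           # precision among positive calls
        "NPV": tn / (tn + fn),           # reliability of a negative call
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical counts for one condition (not figures from the study):
m = screening_metrics(tp=90, fp=10, fn=10, tn=190)
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on disease prevalence, which is why clinical-setting and population-screening results are reported separately above.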
Affiliation(s)
- Shujuan Cao
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China
| | - Rongpei Zhang
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, 510060, China
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China
| | - Aixin Jiang
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China
| | - Mayila Kuerban
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China
| | - Aizezi Wumaier
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China
| | - Jianhua Wu
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China
| | - Kaihua Xie
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China
| | - Mireayi Aizezi
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China
| | - Abudurexiti Tuersun
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China
| | - Xuanwei Liang
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, 510060, China.
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China.
| | - Rongxin Chen
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, 510060, China.
- Ophthalmologic Center, The Affiliated Kashi Hospital of Sun Yat-sen University, The First People's Hospital of Kashi Prefecture, Kashi, 844000, China.
| |
Collapse
|
25
|
Alam MN, Yamashita R, Ramesh V, Prabhune T, Lim JI, Chan RVP, Hallak J, Leng T, Rubin D. Contrastive learning-based pretraining improves representation and transferability of diabetic retinopathy classification models. Sci Rep 2023; 13:6047. [PMID: 37055475 PMCID: PMC10102012 DOI: 10.1038/s41598-023-33365-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2022] [Accepted: 04/12/2023] [Indexed: 04/15/2023] Open
Abstract
Diabetic retinopathy (DR) is a major cause of vision impairment in diabetic patients worldwide. Given its prevalence, early clinical diagnosis is essential to improve the treatment management of DR patients. Despite recent demonstrations of successful machine learning (ML) models for automated DR detection, there is a significant clinical need for robust models that can be trained with smaller datasets and still perform with high diagnostic accuracy on independent clinical datasets (i.e., high model generalizability). Toward this need, we have developed a self-supervised contrastive learning (CL)-based pipeline for the classification of referable vs. non-referable DR. Self-supervised CL-based pretraining yields enhanced data representations and therefore enables the development of robust and generalizable deep learning (DL) models, even with small labeled datasets. We have integrated a neural style transfer (NST) augmentation into the CL pipeline to produce models with better representations and initializations for the detection of DR in color fundus images. We compare our CL pretrained model's performance with two state-of-the-art baseline models pretrained with ImageNet weights. We further investigate model performance with reduced labeled training data (down to 10 percent) to test the robustness of the model when trained with small labeled datasets. The model is trained and validated on the EyePACS dataset and tested independently on clinical datasets from the University of Illinois Chicago (UIC). Compared to the baseline models, our CL pretrained FundusNet model had a higher area under the receiver operating characteristic (ROC) curve (AUC) (CI): 0.91 (0.898 to 0.930) vs. 0.80 (0.783 to 0.820) and 0.83 (0.801 to 0.853) on UIC data. At 10 percent labeled training data, the FundusNet AUC was 0.81 (0.78 to 0.84) vs. 0.58 (0.56 to 0.64) and 0.63 (0.60 to 0.66) for the baseline models when tested on the UIC dataset. CL-based pretraining with NST significantly improves DL classification performance, helps the model generalize well (transferable from EyePACS to UIC data), and allows training with small annotated datasets, thereby reducing the ground-truth annotation burden on clinicians.
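Self-supervised contrastive pretraining of this kind typically minimizes an NT-Xent (normalized temperature-scaled cross-entropy) loss over pairs of augmented views; the abstract does not specify FundusNet's exact loss, so the following NumPy sketch is illustrative only:

```python
import numpy as np

def nt_xent_loss(z, temperature=0.5):
    """NT-Xent loss for 2N embeddings, where rows i and i+N are two
    augmented views of the same image (SimCLR-style pretraining)."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    n2 = z.shape[0]
    n = n2 // 2
    sim = (z @ z.T) / temperature                     # scaled cosine similarity
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    # the positive for row i is its other augmented view
    pos = np.concatenate([np.arange(n, n2), np.arange(0, n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(n2), pos].mean()

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))       # 4 image pairs, 16-dim embeddings
print(nt_xent_loss(z) > 0)         # unaligned random views incur positive loss
```

The loss falls as the two views of each image map to nearby embeddings while other images stay apart, which is what gives the pretrained encoder its transferable representation.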
Collapse
Affiliation(s)
- Minhaj Nur Alam
- Department of Biomedical Data Science, Stanford University School of Medicine, 1265 Welch Road, Stanford, CA, 94305, USA.
- Department of Electrical and Computer Engineering, University of North Carolina at Charlotte, 9201 University City Boulevard, Charlotte, NC, 28223, USA.
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL, 60612, USA.
| | - Rikiya Yamashita
- Department of Biomedical Data Science, Stanford University School of Medicine, 1265 Welch Road, Stanford, CA, 94305, USA
| | - Vignav Ramesh
- Department of Biomedical Data Science, Stanford University School of Medicine, 1265 Welch Road, Stanford, CA, 94305, USA
| | - Tejas Prabhune
- Department of Biomedical Data Science, Stanford University School of Medicine, 1265 Welch Road, Stanford, CA, 94305, USA
| | - Jennifer I Lim
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL, 60612, USA
| | - R V P Chan
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL, 60612, USA
| | - Joelle Hallak
- Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL, 60612, USA
| | - Theodore Leng
- Department of Ophthalmology, Stanford University School of Medicine, Stanford, CA, 94305, USA
| | - Daniel Rubin
- Department of Biomedical Data Science, Stanford University School of Medicine, 1265 Welch Road, Stanford, CA, 94305, USA
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, 94305, USA
| |
Collapse
|
26
|
Son J, Shin JY, Kong ST, Park J, Kwon G, Kim HD, Park KH, Jung KH, Park SJ. An interpretable and interactive deep learning algorithm for a clinically applicable retinal fundus diagnosis system by modelling finding-disease relationship. Sci Rep 2023; 13:5934. [PMID: 37045856 PMCID: PMC10097752 DOI: 10.1038/s41598-023-32518-3] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2023] [Accepted: 03/28/2023] [Indexed: 04/14/2023] Open
Abstract
The identification of abnormal findings in retinal fundus images and the diagnosis of ophthalmic diseases are essential to the management of potentially vision-threatening eye conditions. Recently, deep learning-based computer-aided diagnosis (CAD) systems have demonstrated their potential to reduce reading time and discrepancies amongst readers. However, the obscure reasoning of deep neural networks (DNNs) has been a leading cause of reluctance to adopt them clinically as CAD systems. Here, we present a novel architectural and algorithmic design of DNNs that comprehensively identifies 15 abnormal retinal findings and diagnoses 8 major ophthalmic diseases from macula-centered fundus images with accuracy comparable to that of experts. We then define the notion of a counterfactual attribution ratio (CAR), which illuminates the system's diagnostic reasoning by representing how each abnormal finding contributed to its diagnostic prediction. Using CAR, we show that both quantitative and qualitative interpretation and interactive adjustment of the CAD result can be achieved. A comparison of the model's CAR with experts' finding-disease diagnosis correlations confirms that the proposed model identifies the relationship between findings and diseases much as ophthalmologists do.
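The paper gives CAR its own formal definition, which is not reproduced in this abstract; the toy sketch below only conveys the general counterfactual idea — compare the predicted disease odds with a finding present versus counterfactually removed (the linear-logistic head, weights, and bias here are all hypothetical):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def counterfactual_attribution(findings, weights, bias, k):
    """Toy counterfactual attribution: ratio of predicted disease odds
    with finding k present vs. counterfactually absent. This is NOT the
    paper's CAR formula, only an illustration of the underlying idea."""
    p_full = sigmoid(weights @ findings + bias)
    cf = findings.copy()
    cf[k] = 0.0                        # counterfactually remove finding k
    p_cf = sigmoid(weights @ cf + bias)
    odds = lambda p: p / (1.0 - p)
    return odds(p_full) / odds(p_cf)   # ratio > 1: finding k supports the diagnosis

findings = np.array([1.0, 0.0, 1.0])   # detected abnormal findings
weights = np.array([2.0, 0.5, -1.0])   # hypothetical finding-disease coefficients
print(round(counterfactual_attribution(findings, weights, bias=-0.5, k=0), 3))
# -> 7.389 (= e^2: for a logistic head the odds ratio is exp(w_k * x_k))
```

An attribution ratio near 1 means removing the finding barely changes the prediction; large ratios flag the findings driving the diagnosis, which is what enables the interactive adjustment the authors describe.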
Collapse
Affiliation(s)
| | - Joo Young Shin
- Department of Ophthalmology, Seoul Metropolitan Government Seoul National University Boramae Medical Center, Seoul, Republic of Korea
| | | | | | | | - Hoon Dong Kim
- Department of Ophthalmology, College of Medicine, Soonchunhyang University, Cheonan, Republic of Korea
| | - Kyu Hyung Park
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, 82, Gumi-ro 173 Beon-gil, Bundang-gu, Seongnam-si, Gyeonggi-do, 13620, Republic of Korea
| | - Kyu-Hwan Jung
- Department of Medical Device Research and Management, Samsung Advanced Institute for Health Sciences and Technology, Sungkyunkwan University, 81 Irwon-ro, Gangnam-gu, Seoul, Republic of Korea.
| | - Sang Jun Park
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, 82, Gumi-ro 173 Beon-gil, Bundang-gu, Seongnam-si, Gyeonggi-do, 13620, Republic of Korea.
| |
Collapse
|
27
|
Qu JH, Qin XR, Li CD, Peng RM, Xiao GG, Cheng J, Gu SF, Wang HK, Hong J. Fully automated grading system for the evaluation of punctate epithelial erosions using deep neural networks. Br J Ophthalmol 2023; 107:453-460. [PMID: 34670751 PMCID: PMC10086304 DOI: 10.1136/bjophthalmol-2021-319755] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2021] [Accepted: 10/08/2021] [Indexed: 11/04/2022]
Abstract
PURPOSE The goal was to develop a fully automated grading system for the evaluation of punctate epithelial erosions (PEEs) using deep neural networks. METHODS A fully automated system was developed to detect the corneal position and grade staining severity given a corneal fluorescein staining image. The pipeline consists of three steps: a corneal segmentation model extracts the corneal area; five image patches are cropped from the staining image based on the five subregions of the extracted cornea; and a staining grading model predicts a score for each image patch from 0 to 3, from which an automated grading score for the whole cornea is obtained from 0 to 15. Finally, the clinical grading scores annotated by three ophthalmologists were compared with the automated grading scores. RESULTS For corneal segmentation, the segmentation model achieved an intersection over union of 0.937. For punctate staining grading, the grading model achieved a classification accuracy of 76.5% and an area under the receiver operating characteristic curve of 0.940 (95% CI 0.932 to 0.949). For the fully automated pipeline, Pearson's correlation coefficient between the clinical and automated grading scores was 0.908 (p<0.01). Bland-Altman analysis revealed 95% limits of agreement between the clinical and automated grading scores of -4.125 to 3.720 (concordance correlation coefficient=0.904). The average time required to process a single stained image in the pipeline was 0.58 s. CONCLUSION A fully automated grading system was developed to evaluate PEEs. The grading results may serve as a reference for ophthalmologists in clinical trials and residency training.
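The pipeline's two quantitative building blocks — mask overlap for the segmentation step and per-subregion score aggregation for the grading step — reduce to simple operations; a sketch with made-up masks and grades:

```python
import numpy as np

def intersection_over_union(pred, truth):
    """IoU between a predicted and a ground-truth boolean mask."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union

def whole_cornea_score(patch_scores):
    """Sum five subregion grades (each 0-3) into a 0-15 cornea score."""
    assert len(patch_scores) == 5 and all(0 <= s <= 3 for s in patch_scores)
    return sum(patch_scores)

pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
truth = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
print(intersection_over_union(pred, truth))  # 2 overlapping / 4 in union = 0.5
print(whole_cornea_score([3, 2, 0, 1, 2]))   # -> 8
```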
Collapse
Affiliation(s)
- Jing-Hao Qu
- Department of Ophthalmology, Peking University Third Hospital, Beijing, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Peking University Third Hospital, Beijing, China
| | - Xiao-Ran Qin
- Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China
| | - Chen-Di Li
- Department of Ophthalmology, Peking University Third Hospital, Beijing, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Peking University Third Hospital, Beijing, China
| | - Rong-Mei Peng
- Department of Ophthalmology, Peking University Third Hospital, Beijing, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Peking University Third Hospital, Beijing, China
| | - Ge-Ge Xiao
- Department of Ophthalmology, Peking University Third Hospital, Beijing, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Peking University Third Hospital, Beijing, China
| | - Jian Cheng
- Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China
| | - Shao-Feng Gu
- Department of Ophthalmology, Peking University Third Hospital, Beijing, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Peking University Third Hospital, Beijing, China
| | - Hai-Kun Wang
- Department of Ophthalmology, Peking University Third Hospital, Beijing, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Peking University Third Hospital, Beijing, China
| | - Jing Hong
- Department of Ophthalmology, Peking University Third Hospital, Beijing, China
- Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Peking University Third Hospital, Beijing, China
| |
Collapse
|
28
|
Chan YK, Cheng CY, Sabanayagam C. Eyes as the windows into cardiovascular disease in the era of big data. Taiwan J Ophthalmol 2023; 13:151-167. [PMID: 37484607 PMCID: PMC10361436 DOI: 10.4103/tjo.tjo-d-23-00018] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2023] [Accepted: 04/11/2023] [Indexed: 07/25/2023] Open
Abstract
Cardiovascular disease (CVD) is a major cause of mortality and morbidity worldwide and imposes significant socioeconomic burdens, especially with late diagnoses. There is growing evidence of strong correlations between ocular images, which are information-dense, and CVD progression. The accelerating development of deep learning algorithms (DLAs) is a promising avenue for research into CVD biomarker discovery, early CVD diagnosis, and CVD prognostication. We review a selection of 17 recent DLAs in the less-explored realm of DL applied to ocular images to produce CVD outcomes, potential challenges in their clinical deployment, and the path forward. The evidence for CVD manifestations in ocular images is well documented. Most of the reviewed DLAs analyze retinal fundus photographs to predict CV risk factors, in particular hypertension. DLAs can predict age, sex, smoking status, alcohol status, body mass index, mortality, myocardial infarction, stroke, chronic kidney disease, and hematological disease with significant accuracy. While the cardio-oculomics intersection is now burgeoning, much remains to be explored. The increasing availability of big data, computational power, technological literacy, and acceptance all prime this subfield for rapid growth. We pinpoint specific areas of improvement needed for ubiquitous clinical deployment: increased generalizability, external validation, and universal benchmarking. DLAs capable of predicting CVD outcomes from ocular inputs are of great interest and hold promise for individualized precision medicine and more efficient health care provision; initial results are impactful, though real-world efficacy remains to be determined.
Collapse
Affiliation(s)
- Yarn Kit Chan
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
| | - Ching-Yu Cheng
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Center for Innovation and Precision Eye Health, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
| | - Charumathi Sabanayagam
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
| |
Collapse
|
29
|
Field EL, Tam W, Moore N, McEntee M. Efficacy of Artificial Intelligence in the Categorisation of Paediatric Pneumonia on Chest Radiographs: A Systematic Review. CHILDREN 2023; 10:children10030576. [PMID: 36980134 PMCID: PMC10047666 DOI: 10.3390/children10030576] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/08/2023] [Revised: 03/04/2023] [Accepted: 03/15/2023] [Indexed: 03/19/2023]
Abstract
This study aimed to systematically review the literature to synthesise and summarise the evidence surrounding the efficacy of artificial intelligence (AI) in classifying paediatric pneumonia on chest radiographs (CXRs). Following the initial search for studies that matched the pre-set criteria, data were extracted using a data extraction tool, and the included studies were assessed via critical appraisal tools and risk-of-bias assessment. Results were accumulated, and the outcome measures analysed included sensitivity, specificity, accuracy, and area under the curve (AUC). Five studies met the inclusion criteria. The highest sensitivity was achieved by an ensemble AI algorithm (96.3%). DenseNet201 obtained the highest specificity and accuracy (94% and 95%, respectively). The highest AUC value was achieved by the VGG16 algorithm (96.2%). Some of the AI models achieved close to 100% diagnostic accuracy. To assess the efficacy of AI in a clinical setting, these AI models should be compared to the performance of radiologists. The included and evaluated AI algorithms showed promising results. These algorithms could potentially ease and speed up diagnosis once the studies are replicated and their performance is assessed in clinical settings, potentially saving millions of lives.
Collapse
Affiliation(s)
- Erica Louise Field
- Discipline of Medical Imaging and Radiation Therapy, University College Cork, College Road, T12 K8AF Cork, Ireland
| | - Winnie Tam
- Department of Midwifery and Radiography, University of London, Northampton Square, London EC1V 0HB, UK
- Correspondence:
| | - Niamh Moore
- Discipline of Medical Imaging and Radiation Therapy, University College Cork, College Road, T12 K8AF Cork, Ireland
| | - Mark McEntee
- Discipline of Medical Imaging and Radiation Therapy, University College Cork, College Road, T12 K8AF Cork, Ireland
| |
Collapse
|
30
|
Nespolo RG, Yi D, Cole E, Wang D, Warren A, Leiderman YI. Feature Tracking and Segmentation in Real Time via Deep Learning in Vitreoretinal Surgery: A Platform for Artificial Intelligence-Mediated Surgical Guidance. Ophthalmol Retina 2023; 7:236-242. [PMID: 36241132 DOI: 10.1016/j.oret.2022.10.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2022] [Revised: 09/28/2022] [Accepted: 10/03/2022] [Indexed: 11/15/2022]
Abstract
PURPOSE This study investigated whether a deep-learning neural network can detect and segment surgical instrumentation and relevant tissue boundaries and landmarks within the retina using imaging acquired from a surgical microscope in real time, with the goal of providing image-guided vitreoretinal (VR) microsurgery. DESIGN Retrospective analysis via a prospective, single-center study. PARTICIPANTS One hundred and one patients undergoing VR surgery, inclusive of core vitrectomy, membrane peeling, and endolaser application, in a university-based ophthalmology department between July 1, 2020, and September 1, 2021. METHODS A dataset composed of 606 surgical image frames was annotated by 3 VR surgeons. Annotation consisted of identifying the location and area of the following features, when present in-frame: vitrector-, forceps-, and endolaser tooltips, optic disc, fovea, retinal tears, retinal detachment, fibrovascular proliferation, endolaser spots, area where endolaser was applied, and macular hole. An instance segmentation fully convolutional neural network (YOLACT++) was adapted and trained, and fivefold cross-validation was employed to generate metrics for accuracy. MAIN OUTCOME MEASURES Area under the precision-recall curve (AUPR) for the detection of elements tracked and segmented in the final test dataset; the frames per second (FPS) for the assessment of suitability for real-time performance of the model. RESULTS The platform detected and classified the vitrector tooltip with a mean AUPR of 0.972 ± 0.009. The segmentation of target tissues, such as the optic disc, fovea, and macular hole reached mean AUPR values of 0.928 ± 0.013, 0.844 ± 0.039, and 0.916 ± 0.021, respectively. The postprocessed image was rendered at a full high-definition resolution of 1920 × 1080 pixels at 38.77 ± 1.52 FPS when attached to a surgical visualization system, reaching up to 87.44 ± 3.8 FPS. 
CONCLUSIONS Neural networks can localize, classify, and segment tissues and instruments during VR procedures in real time. We propose a framework for developing a surgical guidance and assessment platform that may guide surgical decision-making and help formulate tools for the systematic analysis of VR surgery. Potential applications include collision avoidance to prevent unintended instrument-tissue interactions and the extraction of the spatial localization and movement of surgical instruments for surgical data science research. FINANCIAL DISCLOSURE(S) Proprietary or commercial disclosure may be found after the references.
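The frames-per-second outcome measure comes down to timing per-frame inference; a generic, hypothetical timing harness (the `process_frame` callable stands in for the YOLACT++ model, which is not reproduced here):

```python
import time

def measure_fps(process_frame, frames, warmup=2):
    """Average frames-per-second of a per-frame inference callable.
    A few warm-up calls are excluded so lazy initialization (model
    loading, GPU kernel compilation, etc.) does not skew the timing."""
    for f in frames[:warmup]:
        process_frame(f)
    start = time.perf_counter()
    for f in frames:
        process_frame(f)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed

# Hypothetical stand-in for per-frame segmentation inference:
frames = [bytes(10)] * 50
fps = measure_fps(lambda f: sum(f), frames)
print(fps > 0)
```

Reporting a mean and spread over many frames, as the study does (38.77 ± 1.52 FPS), matters because per-frame latency fluctuates with scene content and system load.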
Collapse
Affiliation(s)
- Rogerio Garcia Nespolo
- Department of Ophthalmology and Visual Sciences - Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois; Richard and Loan Hill Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, Illinois
| | - Darvin Yi
- Department of Ophthalmology and Visual Sciences - Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois; Richard and Loan Hill Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, Illinois
| | - Emily Cole
- Department of Ophthalmology and Visual Sciences - Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
| | - Daniel Wang
- Department of Ophthalmology and Visual Sciences - Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
| | - Alexis Warren
- Department of Ophthalmology and Visual Sciences - Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois
| | - Yannek I Leiderman
- Department of Ophthalmology and Visual Sciences - Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, Illinois; Richard and Loan Hill Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, Illinois.
| |
Collapse
|
31
|
Li Z, Chen W. Solving data quality issues of fundus images in real-world settings by ophthalmic AI. Cell Rep Med 2023; 4:100951. [PMID: 36812885 PMCID: PMC9975325 DOI: 10.1016/j.xcrm.2023.100951] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/23/2023]
Abstract
Liu et al.1 develop a deep-learning-based flow cytometry-like image quality classifier, DeepFundus, for the automated, high-throughput, and multidimensional classification of fundus image quality. DeepFundus significantly improves the real-world performance of established artificial intelligence diagnostics in detecting multiple retinopathies.
Collapse
Affiliation(s)
- Zhongwen Li
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, China
| | - Wei Chen
- Ningbo Eye Hospital, Wenzhou Medical University, Ningbo, China; School of Ophthalmology and Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou, China.
| |
Collapse
|
32
|
Cavichini M, Bartsch DUG, Warter A, Singh S, An C, Wang Y, Zhang J, Nguyen T, Freeman WR. Accuracy and Time Comparison Between Side-by-Side and Artificial Intelligence Overlayed Images. Ophthalmic Surg Lasers Imaging Retina 2023; 54:108-113. [PMID: 36780638 DOI: 10.3928/23258160-20230130-03] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/15/2023]
Abstract
BACKGROUND AND OBJECTIVE The purpose of this study was to compare the accuracy of, and the time required to find, a lesion imaged on different platforms (color fundus photographs and infrared scanning laser ophthalmoscope images) using the traditional side-by-side (SBS) colocalization technique versus an artificial intelligence (AI)-assisted technique. PATIENTS AND METHODS Fifty-three pathological lesions were studied in 11 eyes. Images were aligned using the SBS and AI-overlaid methods. The location of each color fundus lesion on the corresponding infrared scanning laser ophthalmoscope image was analyzed twice, once for each method, on different days, by two specialists, in random order. The outcomes for each method were measured and recorded by an independent observer. RESULTS The AI colocalization method was superior to the conventional method in accuracy and time (P < .001), with a mean colocalization time 37% faster. The error rate using AI was 0%, compared with 18% for SBS measurements. CONCLUSIONS AI permitted more accurate and faster colocalization of pathologic lesions than the conventional method. [Ophthalmic Surg Lasers Imaging Retina 2023;54:108-113.].
Collapse
|
33
|
A Deep Learning Model for Evaluating Meibomian Glands Morphology from Meibography. J Clin Med 2023; 12:jcm12031053. [PMID: 36769701 PMCID: PMC9918190 DOI: 10.3390/jcm12031053] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2022] [Revised: 01/03/2023] [Accepted: 01/20/2023] [Indexed: 02/03/2023] Open
Abstract
To develop a deep learning model for automatically segmenting the tarsus and meibomian gland areas on meibography, we included 1087 meibography images from dry eye patients. The contours of the tarsus and each meibomian gland were labeled manually by human experts. The dataset was divided into training, validation, and test sets. We built a convolutional neural network-based U-net and trained the model to segment the tarsus and meibomian gland area. Accuracy, sensitivity, specificity, and the receiver operating characteristic (ROC) curve were calculated to evaluate the model. The area under the curve (AUC) values for the models segmenting the tarsus and meibomian gland area were 0.985 and 0.938, respectively. The deep learning model achieved a sensitivity and specificity of 0.975 and 0.99, respectively, with an accuracy of 0.985 for segmenting the tarsus area. For meibomian gland area segmentation, the model obtained a high specificity of 0.96, a high accuracy of 0.937, and a moderate sensitivity of 0.751. The present research trained a deep learning model to automatically segment the tarsus and meibomian gland area from infrared meibography, and the model demonstrated outstanding segmentation accuracy. With further improvement, the model could potentially be applied to assess the meibomian glands, facilitating dry eye evaluation in various clinical and research scenarios.
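The sensitivity, specificity, and accuracy reported for segmentation are computed pixelwise between the predicted and ground-truth masks; a minimal sketch with a toy 4×4 mask (values illustrative):

```python
import numpy as np

def mask_metrics(pred, truth):
    """Pixelwise sensitivity, specificity, and accuracy for a predicted
    vs. ground-truth binary segmentation mask."""
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "accuracy": (tp + tn) / pred.size}

truth = np.zeros((4, 4), dtype=bool)
truth[1:3, 1:3] = True               # toy ground-truth gland region
pred = truth.copy()
pred[1, 1] = False                   # one missed pixel (false negative)
pred[0, 0] = True                    # one spurious pixel (false positive)
print(mask_metrics(pred, truth))
```

Because gland pixels are a small fraction of the image, specificity and accuracy can stay high even when sensitivity is moderate, which matches the pattern the study reports for gland segmentation (0.96 / 0.937 / 0.751).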
Collapse
|
34
|
Deep learning-based hemorrhage detection for diabetic retinopathy screening. Sci Rep 2023; 13:1479. [PMID: 36707608 PMCID: PMC9883230 DOI: 10.1038/s41598-023-28680-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2022] [Accepted: 01/23/2023] [Indexed: 01/29/2023] Open
Abstract
Diabetic retinopathy is a retinal complication that causes visual impairment. Hemorrhage is one of the pathological signs of diabetic retinopathy that emerges during disease development; therefore, hemorrhage detection can reveal the presence of diabetic retinopathy in its early phase. Diagnosing the disease in its initial stage is crucial for adopting proper treatment so that its repercussions can be prevented. An automatic deep learning-based hemorrhage detection method is proposed that can serve as a second interpreter for ophthalmologists, reducing the time and complexity of conventional screening methods. The quality of the images was enhanced, and prospective hemorrhage locations were estimated in the preprocessing stage. Modified gamma correction adaptively illuminates fundus images by using gradient information to address their nonuniform brightness levels. The algorithm estimated the locations of potential candidates using a Gaussian matched filter, entropy thresholding, and mathematical morphology. The required objects were segmented using the regional diversity at the estimated locations. A novel hemorrhage network is proposed for hemorrhage classification and compared with renowned deep models. The model's performance was benchmarked on two datasets using sensitivity, specificity, precision, and accuracy metrics. Despite being the shallowest network, the proposed network achieved results competitive with LeNet-5, AlexNet, ResNet50, and VGG-16. The hemorrhage network was assessed in terms of training time and classification accuracy through synthetic experimentation. Results showed promising accuracy in the classification stage while significantly reducing training time. The research concluded that increasing the number of deep network layers does not guarantee good results but does increase training time. A suitable deep model architecture and appropriate parameters are critical for obtaining excellent outcomes.
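The paper's modified gamma correction additionally uses gradient information, which is not reproduced here; plain mean-based adaptive gamma correction — the heuristic it builds on — looks like this (a sketch, assuming a non-degenerate 8-bit image with mean intensity strictly between 0 and 255):

```python
import numpy as np

def adaptive_gamma_correction(img):
    """Brighten under-exposed fundus images and dim over-exposed ones by
    choosing gamma from the mean intensity, so the mean maps to mid-gray.
    (The paper's variant additionally incorporates gradient information.)"""
    norm = img.astype(np.float64) / 255.0
    mean = norm.mean()                      # assumed to satisfy 0 < mean < 1
    # gamma < 1 brightens a dark image (mean < 0.5); gamma > 1 dims a bright one
    gamma = np.log(0.5) / np.log(mean)
    out = np.power(norm, gamma)
    return (out * 255).clip(0, 255).astype(np.uint8)

dark = np.full((2, 2), 40, dtype=np.uint8)  # uniformly under-exposed patch
print(adaptive_gamma_correction(dark))      # every pixel lifted to mid-gray (127)
```

Pulling each image toward a common brightness level like this is what lets the later matched-filter and thresholding stages use consistent parameters across nonuniformly illuminated fundus images.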
Collapse
|
35
|
Lu Z, Miao J, Dong J, Zhu S, Wu P, Wang X, Feng J. Automatic Multilabel Classification of Multiple Fundus Diseases Based on Convolutional Neural Network With Squeeze-and-Excitation Attention. Transl Vis Sci Technol 2023; 12:22. [PMID: 36662513 PMCID: PMC9872849 DOI: 10.1167/tvst.12.1.22] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2022] [Accepted: 11/06/2022] [Indexed: 01/21/2023] Open
Abstract
Purpose Automatic multilabel classification of multiple fundus diseases is important for ophthalmologists. This study aims to design an effective multilabel classification model that can automatically classify multiple fundus diseases from color fundus images. Methods We proposed a multilabel fundus disease classification model based on a convolutional neural network to classify normal images and seven categories of common fundus diseases. Specifically, an attention mechanism was introduced into the network to further extract informative features from color fundus images. Fundus images with eight categories of labels were used to train, validate, and test our model. We employed validation accuracy, the area under the receiver operating characteristic curve (AUC), and the F1-score as performance metrics to evaluate our model. Results Our proposed model achieved better performance, with a validation accuracy of 94.27%, an AUC of 85.80%, and an F1-score of 86.08%, compared to two state-of-the-art models. Most importantly, the number of training parameters dropped dramatically, by factors of three and eight relative to the two state-of-the-art models. Conclusions This model can automatically classify multiple fundus diseases with excellent accuracy, AUC, and F1-score as well as significantly fewer training parameters and lower computational cost, providing a reliable assistant for clinical screening. Translational Relevance The proposed model can be widely applied in large-scale screening for multiple fundus diseases, helping to create more efficient diagnostics in primary care settings.
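The squeeze-and-excitation attention named in the title recalibrates channel responses via a global pool followed by a small bottleneck; a NumPy forward-pass sketch (weights random and the reduction ratio r=4 assumed, purely illustrative of the mechanism, not the paper's network):

```python
import numpy as np

def se_block(x, w1, w2):
    """Squeeze-and-excitation forward pass for a (C, H, W) feature map.
    Squeeze: global average pool per channel -> (C,).
    Excite: bottleneck FC + ReLU, then FC + sigmoid -> channel weights in (0, 1).
    Scale: reweight each channel of the input."""
    s = x.mean(axis=(1, 2))                   # squeeze: (C,)
    e = np.maximum(w1 @ s, 0.0)               # FC + ReLU: (C // r,)
    w = 1.0 / (1.0 + np.exp(-(w2 @ e)))       # FC + sigmoid: (C,)
    return x * w[:, None, None]               # scale: channel-wise reweighting

rng = np.random.default_rng(0)
C, r = 8, 4
x = rng.normal(size=(C, 6, 6))                # toy feature map
w1 = rng.normal(size=(C // r, C))             # reduction weights (random here)
w2 = rng.normal(size=(C, C // r))             # expansion weights (random here)
y = se_block(x, w1, w2)
print(y.shape)                                # (8, 6, 6): shape is preserved
```

The block adds only two tiny fully connected layers per insertion point, which is consistent with the abstract's emphasis on gaining accuracy without a large parameter budget.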
Collapse
Affiliation(s)
- Zhenzhen Lu
- Department of Biomedical Engineering, Beijing International Science and Technology Cooperation Base for Intelligent Physiological Measurement and Clinical Transformation, Beijing University of Technology, Beijing, China
| | - Jingpeng Miao
- Beijing Tongren Eye Center, Beijing Ophthalmology & Visual Sciences Key Lab, Beijing Tongren Hospital, Capital Medical University, Beijing, China
| | - Jingran Dong
- Department of Biomedical Engineering, Beijing International Science and Technology Cooperation Base for Intelligent Physiological Measurement and Clinical Transformation, Beijing University of Technology, Beijing, China
| | - Shuyuan Zhu
- Department of Biomedical Engineering, Beijing International Science and Technology Cooperation Base for Intelligent Physiological Measurement and Clinical Transformation, Beijing University of Technology, Beijing, China
| | - Penghan Wu
- Fan Gongxiu Honors College, Beijing University of Technology, Beijing, China
| | - Xiaobing Wang
- Sports and Medicine Integrative Innovation Center, Capital University of Physical Education and Sports, Beijing, China
- Department of Ophthalmology, Beijing Boai Hospital, China Rehabilitation Research Center, School of Rehabilitation Medicine, Capital Medical University, Beijing, China
| | - Jihong Feng
- Department of Biomedical Engineering, Beijing International Science and Technology Cooperation Base for Intelligent Physiological Measurement and Clinical Transformation, Beijing University of Technology, Beijing, China
| |
Collapse
|
36
|
An empirical study of preprocessing techniques with convolutional neural networks for accurate detection of chronic ocular diseases using fundus images. APPL INTELL 2023; 53:1548-1566. [PMID: 35528131 PMCID: PMC9059700 DOI: 10.1007/s10489-022-03490-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/08/2022] [Indexed: 01/07/2023]
Abstract
Chronic Ocular Diseases (COD) such as myopia, diabetic retinopathy, age-related macular degeneration, glaucoma, and cataract can affect the eye and may even lead to severe vision impairment or blindness. According to a recent World Health Organization (WHO) report on vision, at least 2.2 billion individuals worldwide suffer from vision impairment. Often, overt signs indicative of COD do not manifest until the disease has progressed to an advanced stage. However, if COD is detected early, vision impairment can be avoided through early intervention and cost-effective treatment. Ophthalmologists are trained to detect COD by examining certain minute changes in the retina, such as microaneurysms, macular edema, hemorrhages, and alterations in the blood vessels. The range of eye conditions is diverse, and each condition requires a unique, patient-specific treatment. Convolutional neural networks (CNNs) have demonstrated significant potential in multi-disciplinary fields, including the detection of a variety of eye diseases. In this study, we combined several preprocessing approaches with convolutional neural networks to accurately detect COD in eye fundus images. To the best of our knowledge, this is the first work that provides a qualitative analysis of preprocessing approaches for COD classification using CNN models. Experimental results demonstrate that CNNs trained on region-of-interest segmented images outperform models trained on the original input images by a substantial margin. Additionally, an ensemble of three preprocessing techniques outperformed other state-of-the-art approaches by 30% and 3% in terms of Kappa and F1 scores, respectively. The developed prototype has been extensively tested and can be evaluated on more comprehensive COD datasets for deployment in clinical settings.
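The Kappa metric reported in this abstract (Cohen's kappa, agreement corrected for chance) can be computed from predicted and true label vectors as follows; the labels here are hypothetical, not the study's data:

```python
import numpy as np

def cohens_kappa(y_true, y_pred, n_classes):
    """Cohen's kappa from two label vectors: (p_o - p_e) / (1 - p_e)."""
    cm = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    n = cm.sum()
    po = np.trace(cm) / n                        # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2  # chance agreement
    return (po - pe) / (1 - pe)

y_true = [0, 0, 1, 1, 2, 2, 2, 1]
y_pred = [0, 0, 1, 2, 2, 2, 1, 1]
print(round(cohens_kappa(y_true, y_pred, 3), 3))  # 0.619
```

Unlike raw accuracy, kappa discounts agreement expected by chance, which is why the abstract reports it alongside F1 for a multi-class screening task.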
Collapse
|
37
|
Selvachandran G, Quek SG, Paramesran R, Ding W, Son LH. Developments in the detection of diabetic retinopathy: a state-of-the-art review of computer-aided diagnosis and machine learning methods. Artif Intell Rev 2023; 56:915-964. [PMID: 35498558 PMCID: PMC9038999 DOI: 10.1007/s10462-022-10185-6] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 04/04/2022] [Indexed: 02/02/2023]
Abstract
The exponential increase in the number of diabetics around the world has led to an equally large increase in cases of diabetic retinopathy (DR), one of the major complications of diabetes. Left unattended, DR worsens vision and can lead to partial or complete blindness. As the number of diabetics continues to increase exponentially in the coming years, the number of qualified ophthalmologists must increase in tandem to meet the demand for screening the growing number of diabetic patients. This makes it pertinent to develop ways to automate the DR detection process. A computer-aided diagnosis system has the potential to significantly reduce the burden currently placed on ophthalmologists. Hence, this review paper is presented with the aim of summarizing, classifying, and analyzing all recent developments in automated DR detection using fundus images from 2015 to date. Such work offers an unprecedentedly thorough review of recent works on DR, which will potentially increase the understanding of recent studies on automated DR detection, particularly those that deploy machine learning algorithms. First, a comprehensive state-of-the-art review of the methods introduced for the detection of DR is presented, with a focus on machine learning models such as convolutional neural networks (CNN), artificial neural networks (ANN), and various hybrid models. Each model is then classified according to its type (e.g., CNN, ANN, SVM) and its specific task(s) in DR detection. In particular, the models that deploy CNNs are further analyzed and classified according to important properties of their respective CNN architectures. A total of 150 research articles related to the aforementioned areas, published in the last 5 years, have been utilized in this review to provide a comprehensive overview of the latest developments in the detection of DR. Supplementary Information The online version contains supplementary material available at 10.1007/s10462-022-10185-6.
Collapse
Affiliation(s)
- Ganeshsree Selvachandran
- Department of Actuarial Science and Applied Statistics, Faculty of Business & Management, UCSI University, Jalan Menara Gading, Cheras, 56000 Kuala Lumpur, Malaysia
| | - Shio Gai Quek
- Department of Actuarial Science and Applied Statistics, Faculty of Business & Management, UCSI University, Jalan Menara Gading, Cheras, 56000 Kuala Lumpur, Malaysia
| | - Raveendran Paramesran
- Institute of Computer Science and Digital Innovation, UCSI University, Jalan Menara Gading, Cheras, 56000 Kuala Lumpur, Malaysia
| | - Weiping Ding
- School of Information Science and Technology, Nantong University, Nantong, 226019 People’s Republic of China
| | - Le Hoang Son
- VNU Information Technology Institute, Vietnam National University, Hanoi, Vietnam
| |
Collapse
|
38
|
Kurup AR, Wigdahl J, Benson J, Martínez-Ramón M, Solíz P, Joshi V. Automated malarial retinopathy detection using transfer learning and multi-camera retinal images. Biocybern Biomed Eng 2023; 43:109-123. [PMID: 36685736 PMCID: PMC9851283 DOI: 10.1016/j.bbe.2022.12.003] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
Cerebral malaria (CM) is a fatal syndrome commonly found in children under 5 years old in Sub-Saharan Africa and Asia. The retinal signs associated with CM are known as malarial retinopathy (MR), and they include highly specific retinal lesions such as whitening and hemorrhages. Detecting these lesions allows the detection of CM with high specificity. Up to 23% of CM patients are over-diagnosed due to the presence of clinical symptoms also associated with pneumonia, meningitis, or other conditions. These patients therefore go untreated for those pathologies, resulting in death or neurological disability. It is essential to have a low-cost, high-specificity diagnostic technique for CM detection, for which we developed a method based on transfer learning (TL). Models pre-trained with TL select the good-quality retinal images, which are fed into another TL model to detect CM. This approach achieves 96% specificity with low-cost retinal cameras.
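The two-stage pipeline this abstract describes (a quality model gating a disease model) can be sketched as below; the callables and toy image records are stand-ins for the transfer-learned classifiers, not the authors' code:

```python
def cascade_screen(images, is_gradable, predict_mr):
    """Two-stage cascade: a quality classifier gates a disease classifier.

    `is_gradable` and `predict_mr` stand in for the two transfer-learned
    models; any callables returning bool work for this sketch."""
    results = {}
    for name, img in images.items():
        if not is_gradable(img):
            results[name] = "ungradable"   # rejected by the quality stage
        else:
            results[name] = "MR" if predict_mr(img) else "no MR"
    return results

# Toy stand-ins: "quality" = brightness above a threshold,
# "disease" = a lesion flag in the fake image record.
images = {"a": {"brightness": 0.9, "lesions": True},
          "b": {"brightness": 0.2, "lesions": True},
          "c": {"brightness": 0.8, "lesions": False}}
out = cascade_screen(images,
                     is_gradable=lambda im: im["brightness"] > 0.5,
                     predict_mr=lambda im: im["lesions"])
print(out)  # {'a': 'MR', 'b': 'ungradable', 'c': 'no MR'}
```

Gating on image quality first keeps ungradable captures from low-cost cameras out of the disease model, which is what lets the second stage keep its specificity high.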
Collapse
Affiliation(s)
| | - Jeff Wigdahl
- VisionQuest Biomedical Inc., Albuquerque, NM, USA
| | | | | | - Peter Solíz
- VisionQuest Biomedical Inc., Albuquerque, NM, USA
| | | |
Collapse
|
39
|
Hua R, Xiong J, Li G, Zhu Y, Ge Z, Ma Y, Fu M, Li C, Wang B, Dong L, Zhao X, Ma Z, Chen J, Gao X, He C, Wang Z, Wei W, Wang F, Gao X, Chen Y, Zeng Q, Xie W. Development and validation of a deep learning algorithm based on fundus photographs for estimating the CAIDE dementia risk score. Age Ageing 2022; 51:6936402. [PMID: 36580391 DOI: 10.1093/ageing/afac282] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2022] [Revised: 09/08/2022] [Indexed: 12/30/2022] Open
Abstract
BACKGROUND The Cardiovascular Risk Factors, Aging, and Incidence of Dementia (CAIDE) dementia risk score is a recognised tool for dementia risk stratification. However, its application is limited due to the requirements for multidimensional information and a fasting blood draw. Consequently, an effective and non-invasive tool for screening individuals with high dementia risk in large population-based settings is urgently needed. METHODS A deep learning algorithm based on fundus photographs for estimating the CAIDE dementia risk score was developed and internally validated on a medical check-up dataset that included 271,864 participants in 19 province-level administrative regions of China, and externally validated on an independent dataset that included 20,690 check-up participants in Beijing. The performance for identifying individuals with high dementia risk (CAIDE dementia risk score ≥ 10 points) was evaluated by the area under the receiver operating characteristic curve (AUC) with 95% confidence interval (CI). RESULTS The algorithm achieved an AUC of 0.944 (95% CI: 0.939-0.950) in the internal validation group and 0.926 (95% CI: 0.913-0.939) in the external group. In addition, the estimated CAIDE dementia risk score derived from the algorithm was significantly associated with both comprehensive cognitive function and specific cognitive domains. CONCLUSIONS This algorithm, trained on fundus photographs, could identify individuals with high dementia risk in a population setting. It therefore has the potential to serve as a non-invasive and more expedient method for dementia risk stratification. It might also be adopted in dementia clinical trials as an inclusion criterion to efficiently select eligible participants.
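The AUC used as the evaluation metric above can be computed directly from labels and scores via the Mann-Whitney formulation (the probability that a random positive outscores a random negative); the scores below are hypothetical:

```python
import numpy as np

def auc(labels, scores):
    """AUC as the fraction of positive-negative pairs ranked correctly."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()   # pairwise wins
    ties = (pos[:, None] == neg[None, :]).sum()  # ties count half
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
print(auc(labels, scores))  # 8 of 9 pairs ranked correctly, ≈ 0.889
```

This pairwise definition is threshold-free, which is why AUC suits a screening model whose operating point (here, score ≥ 10) may be tuned after training.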
Collapse
Affiliation(s)
- Rong Hua
- Peking University Clinical Research Institute, Peking University First Hospital, Beijing 100191, China.,PUCRI Heart and Vascular Health Research Center at Peking University Shougang Hospital, Beijing, China
| | | | - Gail Li
- Departments of Psychiatry and Behavioral Sciences, University of Washington, Seattle, WA, USA.,Division of Gerontology and Geriatric Medicine, University of Washington, Seattle, WA, USA
| | - Yidan Zhu
- Peking University Clinical Research Institute, Peking University First Hospital, Beijing 100191, China.,PUCRI Heart and Vascular Health Research Center at Peking University Shougang Hospital, Beijing, China
| | - Zongyuan Ge
- Beijing Airdoc Technology Co., Ltd., Beijing, China
| | - Yanjun Ma
- Peking University Clinical Research Institute, Peking University First Hospital, Beijing 100191, China.,PUCRI Heart and Vascular Health Research Center at Peking University Shougang Hospital, Beijing, China
| | - Meng Fu
- Beijing Airdoc Technology Co., Ltd., Beijing, China
| | - Chenglong Li
- Peking University Clinical Research Institute, Peking University First Hospital, Beijing 100191, China.,PUCRI Heart and Vascular Health Research Center at Peking University Shougang Hospital, Beijing, China
| | - Bin Wang
- Beijing Airdoc Technology Co., Ltd., Beijing, China
| | - Li Dong
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Beijing, China
| | - Xin Zhao
- Beijing Airdoc Technology Co., Ltd., Beijing, China
| | - Zhiqiang Ma
- iKang Guobin Healthcare Group Co., Ltd., Beijing, China
| | - Jili Chen
- Shibei Hospital, Jingan District, Shanghai, China
| | - Xinxiao Gao
- Department of Ophthalmology, Beijing Anzhen Hospital, Capital Medical University, Beijing, China
| | - Chao He
- Beijing Airdoc Technology Co., Ltd., Beijing, China
| | - Zhaohui Wang
- iKang Guobin Healthcare Group Co., Ltd., Beijing, China
| | - Wenbin Wei
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Beijing, China
| | - Fei Wang
- Health Management Institute, The Second Medical Center & National Clinical Research Center for Geriatric Diseases, Chinese PLA General Hospital, Beijing 100853, China
| | - Xiangyang Gao
- Health Management Institute, The Second Medical Center & National Clinical Research Center for Geriatric Diseases, Chinese PLA General Hospital, Beijing 100853, China
| | - Yuzhong Chen
- Beijing Airdoc Technology Co., Ltd., Beijing, China
| | - Qiang Zeng
- Health Management Institute, The Second Medical Center & National Clinical Research Center for Geriatric Diseases, Chinese PLA General Hospital, Beijing 100853, China
| | - Wuxiang Xie
- Peking University Clinical Research Institute, Peking University First Hospital, Beijing 100191, China.,PUCRI Heart and Vascular Health Research Center at Peking University Shougang Hospital, Beijing, China
| |
Collapse
|
40
|
Saleem R, Yuan B, Kurugollu F, Anjum A, Liu L. Explaining deep neural networks: A survey on the global interpretation methods. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.09.129] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/31/2022]
|
41
|
Cao J, You K, Zhou J, Xu M, Xu P, Wen L, Wang S, Jin K, Lou L, Wang Y, Ye J. A cascade eye diseases screening system with interpretability and expandability in ultra-wide field fundus images: A multicentre diagnostic accuracy study. EClinicalMedicine 2022; 53:101633. [PMID: 36110868 PMCID: PMC9468501 DOI: 10.1016/j.eclinm.2022.101633] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/01/2022] [Revised: 08/08/2022] [Accepted: 08/08/2022] [Indexed: 12/09/2022] Open
Abstract
BACKGROUND Clinical application of artificial intelligence is limited due to the lack of interpretability and expandability in complex clinical settings. We aimed to develop an eye disease screening system with improved interpretability and expandability based on lesion-level dissection, and tested the system's clinical expandability and auxiliary ability. METHODS The four-hierarchical interpretable eye diseases screening system (IEDSS), based on a novel structural pattern named the lesion atlas, was developed to identify 30 eye diseases and conditions using a total of 32,026 ultra-wide-field images collected from the Second Affiliated Hospital of Zhejiang University, School of Medicine (SAHZU), the First Affiliated Hospital of University of Science and Technology of China (FAHUSTC), and the Affiliated People's Hospital of Ningbo University (APHNU) in China between November 1, 2016 and February 28, 2022. The performance of IEDSS was compared with that of ophthalmologists and of classic models trained with image-level labels. We further evaluated IEDSS on two external datasets, and tested it in a real-world scenario and on an extended dataset with new phenotypes beyond the training categories. Accuracy (ACC), F1 score, and confusion matrices were calculated to assess the performance of IEDSS. FINDINGS IEDSS reached average ACCs (aACC) of 0·9781 (95%CI 0·9739-0·9824), 0·9660 (95%CI 0·9591-0·9730) and 0·9709 (95%CI 0·9655-0·9763), and frequency-weighted average F1 scores of 0·9042 (95%CI 0·8957-0·9127), 0·8837 (95%CI 0·8714-0·8960) and 0·8874 (95%CI 0·8772-0·8972) in the SAHZU, APHNU and FAHUSTC datasets, respectively. IEDSS reached a higher aACC (0·9781, 95%CI 0·9739-0·9824) compared with a multi-class image-level model (0·9398, 95%CI 0·9329-0·9467), a classic multi-label image-level model (0·9278, 95%CI 0·9189-0·9366), a novel multi-label image-level model (0·9241, 95%CI 0·9151-0·9331) and a lesion-level model without AdaBoost (0·9381, 95%CI 0·9299-0·9463).
In the real-world scenario, the aACC of IEDSS (0·9872, 95%CI 0·9828-0·9915) was higher than that of a senior ophthalmologist (SO) (0·9413, 95%CI 0·9321-0·9504, p = 0·000) and a junior ophthalmologist (JO) (0·8846, 95%CI 0·8722-0·8971, p = 0·000). IEDSS maintained strong performance (ACC = 0·8560, 95%CI 0·8252-0·8868) compared with the JO (ACC = 0·784, 95%CI 0·7479-0·8201, p = 0·003) and the SO (ACC = 0·8500, 95%CI 0·8187-0·8813, p = 0·789) on the extended dataset. INTERPRETATION IEDSS showed excellent and stable performance in identifying common eye conditions as well as conditions beyond the training categories. Its transparency and expandability could greatly broaden its range of clinical application and increase its practical clinical value, enhancing the efficiency and reliability of clinical practice, especially in remote areas lacking experienced specialists. FUNDING National Natural Science Foundation Regional Innovation and Development Joint Fund (U20A20386), Key Research and Development Program of Zhejiang Province (2019C03020), Clinical Medical Research Centre for Eye Diseases of Zhejiang Province (2021E50007).
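The frequency-weighted average F1 scores reported above can be derived from a confusion matrix as follows; the 3-class matrix is hypothetical, not the study's data:

```python
import numpy as np

def weighted_f1(cm):
    """Frequency-weighted average F1 from a confusion matrix cm[true, pred]."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    precision = tp / np.maximum(cm.sum(axis=0), 1e-12)  # per predicted class
    recall = tp / np.maximum(cm.sum(axis=1), 1e-12)     # per true class
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    support = cm.sum(axis=1)  # true-class frequencies weight the average
    return (f1 * support).sum() / support.sum()

cm = [[50, 2, 0],
      [3, 40, 5],
      [1, 4, 30]]
print(round(weighted_f1(cm), 4))  # 0.8882
```

Weighting per-class F1 by class frequency keeps rare conditions from dominating the average, which matters when 30 conditions with very different prevalences are screened together.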
Collapse
Affiliation(s)
- Jing Cao
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, Zhejiang, China
| | - Kun You
- Zhejiang Feitu Medical Imaging Co.,LTD, Hangzhou, Zhejiang, China
| | - Jingxin Zhou
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, Zhejiang, China
| | - Mingyu Xu
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, Zhejiang, China
| | - Peifang Xu
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, Zhejiang, China
| | - Lei Wen
- The First Affiliated Hospital of University of Science and Technology of China, Hefei, Anhui, China
| | - Shengzhan Wang
- The Affiliated People's Hospital of Ningbo University, Ningbo, Zhejiang, China
| | - Kai Jin
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, Zhejiang, China
| | - Lixia Lou
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, Zhejiang, China
| | - Yao Wang
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, Zhejiang, China
| | - Juan Ye
- Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, Zhejiang, China
- Corresponding author at: No. 1 West Lake Avenue, Hangzhou, Zhejiang Province, China, 310009.
| |
Collapse
|
42
|
Fang H, Li F, Fu H, Sun X, Cao X, Lin F, Son J, Kim S, Quellec G, Matta S, Shankaranarayana SM, Chen YT, Wang CH, Shah NA, Lee CY, Hsu CC, Xie H, Lei B, Baid U, Innani S, Dang K, Shi W, Kamble R, Singhal N, Wang CW, Lo SC, Orlando JI, Bogunovic H, Zhang X, Xu Y. ADAM Challenge: Detecting Age-Related Macular Degeneration From Fundus Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:2828-2847. [PMID: 35507621 DOI: 10.1109/tmi.2022.3172773] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Age-related macular degeneration (AMD) is the leading cause of visual impairment among the elderly worldwide. Early detection of AMD is of great importance, as the vision loss caused by this disease is irreversible. Color fundus photography is the most cost-effective imaging modality for screening for retinal disorders. Cutting-edge deep learning algorithms have recently been developed for automatically detecting AMD from fundus images. However, comprehensive annotated datasets and standard evaluation benchmarks are still lacking. To address this issue, we set up the Automatic Detection challenge on Age-related Macular degeneration (ADAM), held as a satellite event of the ISBI 2020 conference. The ADAM challenge consisted of four tasks covering the main aspects of detecting and characterizing AMD from fundus images: detection of AMD, detection and segmentation of the optic disc, localization of the fovea, and detection and segmentation of lesions. As part of the ADAM challenge, we released a comprehensive dataset of 1200 fundus images with AMD diagnostic labels, pixel-wise segmentation masks for both the optic disc and AMD-related lesions (drusen, exudates, hemorrhages and scars, among others), as well as the coordinates of the macular fovea. A uniform evaluation framework was built to enable a fair comparison of different models on this dataset. During the ADAM challenge, 610 results were submitted for online evaluation, with 11 teams finally participating in the onsite challenge. This paper introduces the challenge, the dataset, and the evaluation methods, summarizes the participating methods, and analyzes their results for each task. In particular, we observed that ensembling strategies and the incorporation of clinical domain knowledge were key to improving the performance of the deep learning models.
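For segmentation tasks like the optic-disc and lesion tasks above, overlap between a predicted and a reference mask is commonly scored with the Dice coefficient; this is a minimal sketch with toy masks, not the challenge's actual evaluation code:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # two empty masks agree perfectly
    return 2.0 * np.logical_and(a, b).sum() / denom

pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True  # 16-pixel square
gt = np.zeros((8, 8), dtype=bool);   gt[3:7, 3:7] = True    # shifted square
print(dice(pred, gt))  # 9 overlapping pixels -> 2*9/(16+16) = 0.5625
```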
Collapse
|
43
|
Schneider L, Arsiwala-Scheppach L, Krois J, Meyer-Lueckel H, Bressem K, Niehues S, Schwendicke F. Benchmarking Deep Learning Models for Tooth Structure Segmentation. J Dent Res 2022; 101:1343-1349. [PMID: 35686357 PMCID: PMC9516600 DOI: 10.1177/00220345221100169] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
A wide range of deep learning (DL) architectures with varying depths are available, and developers usually choose one or a few of them for a specific task in a nonsystematic way. Benchmarking (i.e., the systematic comparison of state-of-the-art architectures on a specific task) may provide guidance in the model development process and allow developers to make better decisions. However, comprehensive benchmarking has not yet been performed in dentistry. We aimed to benchmark a range of architecture designs for 1 specific, exemplary case: tooth structure segmentation on dental bitewing radiographs. We built 72 models for tooth structure (enamel, dentin, pulp, fillings, crowns) segmentation by combining 6 different DL network architectures (U-Net, U-Net++, Feature Pyramid Networks, LinkNet, Pyramid Scene Parsing Network, Mask Attention Network) with 12 encoders from 3 different encoder families (ResNet, VGG, DenseNet) of varying depth (e.g., VGG13, VGG16, VGG19). To each model design, 3 initialization strategies (ImageNet, CheXpert, random initialization) were applied, resulting in 216 trained models overall, which were trained for up to 200 epochs with the Adam optimizer (learning rate = 0.0001) and a batch size of 32. Our data set consisted of 1,625 human-annotated dental bitewing radiographs. We used a 5-fold cross-validation scheme and quantified model performance primarily by the F1-score. Initialization with ImageNet or CheXpert weights significantly outperformed random initialization (P < 0.05). Deeper and more complex models did not necessarily perform better than less complex alternatives. VGG-based models were more robust across model configurations, while more complex models (e.g., from the ResNet family) achieved peak performances. In conclusion, initializing models with pretrained weights may be recommended when training models for dental radiographic analysis. Less complex model architectures may be competitive alternatives if computational resources and training time are restricting factors. Models developed and found superior on nondental data sets may not show this behavior for dental domain-specific tasks.
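The 6 architectures × 12 encoders × 3 initializations grid described above can be enumerated as follows; the exact encoder variants beyond the examples named in the abstract are assumptions:

```python
from itertools import product

# Benchmarking grid: 6 architectures x 12 encoders x 3 inits = 216 configs.
architectures = ["U-Net", "U-Net++", "FPN", "LinkNet", "PSPNet", "MAnet"]
# Four depths per encoder family are assumed to reach the 12 encoders
# stated in the abstract; only the VGG variants are named there.
encoders = [f"{family}{depth}"
            for family, depths in [("resnet", [18, 34, 50, 101]),
                                   ("vgg", [11, 13, 16, 19]),
                                   ("densenet", [121, 161, 169, 201])]
            for depth in depths]
inits = ["imagenet", "chexpert", "random"]

configs = list(product(architectures, encoders, inits))
print(len(configs))  # 216
```

Enumerating the full cross-product up front is what makes the comparison systematic: every architecture sees every encoder and every initialization, so no configuration is silently skipped.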
Collapse
Affiliation(s)
- L. Schneider
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité–Universitätsmedizin, Berlin, Germany
- ITU/WHO Focus Group on AI for Health, Topic Group Dental Diagnostics and Digital Dentistry, Geneva, Switzerland
| | - L. Arsiwala-Scheppach
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité–Universitätsmedizin, Berlin, Germany
- ITU/WHO Focus Group on AI for Health, Topic Group Dental Diagnostics and Digital Dentistry, Geneva, Switzerland
| | - J. Krois
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité–Universitätsmedizin, Berlin, Germany
- ITU/WHO Focus Group on AI for Health, Topic Group Dental Diagnostics and Digital Dentistry, Geneva, Switzerland
| | - H. Meyer-Lueckel
- Department of Restorative, Preventive and Pediatric Dentistry, Zahnmedizinische Kliniken der Universität Bern, University of Bern, Bern, Switzerland
| | - K.K. Bressem
- Charité–Universitätsmedizin Berlin, Klinik für Radiologie, Berlin, Germany
- Berlin Institute of Health at Charité–Universitätsmedizin Berlin, Berlin, Germany
| | - S.M. Niehues
- Charité–Universitätsmedizin Berlin, Klinik für Radiologie, Berlin, Germany
| | - F. Schwendicke
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité–Universitätsmedizin, Berlin, Germany
- ITU/WHO Focus Group on AI for Health, Topic Group Dental Diagnostics and Digital Dentistry, Geneva, Switzerland
| |
Collapse
|
44
|
Zhou Q, Guo J, Chen Z, Chen W, Deng C, Yu T, Li F, Yan X, Hu T, Wang L, Rong Y, Ding M, Wang J, Zhang X. Deep learning-based classification of the anterior chamber angle in glaucoma gonioscopy. BIOMEDICAL OPTICS EXPRESS 2022; 13:4668-4683. [PMID: 36187252 PMCID: PMC9484423 DOI: 10.1364/boe.465286] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/30/2022] [Revised: 07/30/2022] [Accepted: 08/03/2022] [Indexed: 06/16/2023]
Abstract
In the proposed network, features were first extracted from gonioscopically obtained anterior segment photographs using a densely-connected high-resolution network. The useful information was then further strengthened using a hybrid attention module to improve classification accuracy. Between October 30, 2020, and January 30, 2021, a total of 146 participants underwent glaucoma screening. One thousand seven hundred eighty original images of the anterior chamber angle (ACA) were obtained with a gonioscope and slit lamp microscope. After data augmentation, 4457 images were used for training and validation of the HahrNet, and 497 images were used to evaluate our algorithm. Experimental results demonstrate that the proposed HahrNet achieves a good performance of 96.2% accuracy, 99.0% specificity, 96.4% sensitivity, and 0.996 area under the curve (AUC) in classifying the ACA test dataset. Compared with several deep learning-based classification methods and nine human readers of different levels, the HahrNet achieves better or more competitive performance in terms of accuracy, specificity, and sensitivity. The proposed ACA classification method will provide an automatic and accurate technology for the grading of glaucoma.
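The accuracy, specificity, and sensitivity figures reported above follow directly from binary confusion counts; the counts below are hypothetical, not the study's results:

```python
def binary_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from binary confusion counts."""
    sensitivity = tp / (tp + fn)          # true-positive rate
    specificity = tn / (tn + fp)          # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical counts for a 150-image test set.
sens, spec, acc = binary_metrics(tp=48, fp=1, tn=99, fn=2)
print(sens, spec, acc)  # 0.96 0.99 0.98
```

Reporting sensitivity and specificity separately matters here: in glaucoma screening a high specificity limits false referrals, while sensitivity bounds how many narrow angles are missed.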
Collapse
Affiliation(s)
- Quan Zhou
- Department of Biomedical Engineering, College of Life Science and Technology, Ministry of Education Key Laboratory of Molecular Biophysics, Huazhong University of Science and Technology, Wuhan 430074, China
- These authors contributed equally to this work
| | - Jingmin Guo
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
- These authors contributed equally to this work
| | - Zhiqi Chen
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
| | - Wei Chen
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
| | - Chaohua Deng
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
| | - Tian Yu
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
| | - Fei Li
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
| | - Xiaoqin Yan
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
| | - Tian Hu
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
| | - Linhao Wang
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
| | - Yan Rong
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
| | - Mingyue Ding
- Department of Biomedical Engineering, College of Life Science and Technology, Ministry of Education Key Laboratory of Molecular Biophysics, Huazhong University of Science and Technology, Wuhan 430074, China
| | - Junming Wang
- Department of Ophthalmology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430030, China
| | - Xuming Zhang
- Department of Biomedical Engineering, College of Life Science and Technology, Ministry of Education Key Laboratory of Molecular Biophysics, Huazhong University of Science and Technology, Wuhan 430074, China
| |
Collapse
|
45
|
Chen HSL, Chen GA, Syu JY, Chuang LH, Su WW, Wu WC, Liu JH, Chen JR, Huang SC, Kang EYC. Early Glaucoma Detection by Using Style Transfer to Predict Retinal Nerve Fiber Layer Thickness Distribution on the Fundus Photograph. OPHTHALMOLOGY SCIENCE 2022; 2:100180. [PMID: 36245759 PMCID: PMC9559108 DOI: 10.1016/j.xops.2022.100180] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/06/2022] [Revised: 05/16/2022] [Accepted: 06/06/2022] [Indexed: 12/03/2022]
Abstract
Objective: We aimed to develop a deep learning (DL)–based algorithm for early glaucoma detection from color fundus photographs that provides information on retinal nerve fiber layer (RNFL) defects and thickness, derived from the mapping and translation relations of spectral-domain OCT (SD-OCT) thickness maps.
Design: Development and evaluation of an artificial intelligence detection tool.
Subjects: Pretraining paired data of color fundus photographs and SD-OCT images from 189 healthy participants and 371 patients with early glaucoma were used.
Methods: A variational autoencoder (VAE) network architecture was used for training, and the correlation between fundus photographs and RNFL thickness distribution was learned by the deep neural network. The reference standard was defined as a vertical cup-to-disc ratio of ≥0.7, other typical changes of glaucomatous optic neuropathy, and RNFL defects. Convergence indicates that the VAE has learned a distribution from which corresponding synthetic OCT scans can be produced.
Main Outcome Measures: As with wide-field OCT scanning, the proposed model can extract the results of RNFL thickness analysis. The structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR) were used to assess signal strength and the structural similarity of the color fundus images converted to an RNFL thickness distribution model; the differences between model-generated and original images were quantified.
Results: We developed and validated a novel DL-based algorithm that extracts thickness information from the color space of fundus images, similarly to OCT, and uses this information to regenerate RNFL thickness distribution images. The generated thickness map was sufficient for clinical glaucoma detection, and the generated images were similar to the ground truth (PSNR: 19.31 dB; SSIM: 0.44). The inference results were similar to the original OCT-generated images in their ability to predict RNFL thickness distribution.
Conclusions: The proposed technique may aid clinicians in early glaucoma detection, especially when only color fundus photographs are available.
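The PSNR and SSIM values reported in this abstract (19.31 dB; 0.44) are standard image-similarity metrics. As a minimal sketch of how such figures are computed, assuming 8-bit-scaled grayscale arrays (note that the SSIM reported in the literature is usually averaged over local windows; the single-window version below is a simplification for illustration):

```python
import numpy as np

def psnr(ref: np.ndarray, gen: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio, in decibels, between reference and generated images."""
    mse = np.mean((ref.astype(np.float64) - gen.astype(np.float64)) ** 2)
    return float(10.0 * np.log10(max_val ** 2 / mse))

def global_ssim(ref: np.ndarray, gen: np.ndarray, max_val: float = 255.0) -> float:
    """Simplified single-window SSIM (the standard metric averages local windows)."""
    x = ref.astype(np.float64)
    y = gen.astype(np.float64)
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2  # stabilizing constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2)) /
                 ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

Identical images yield an SSIM of 1.0; lower PSNR and SSIM indicate larger deviations of the generated thickness map from the OCT-derived ground truth.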
Affiliation(s)
- Henry Shen-Lih Chen
  Department of Ophthalmology, Chang Gung Memorial Hospital, Linkou Medical Center, Taoyuan, Taiwan
  College of Medicine, Chang Gung University, Taoyuan, Taiwan
  Correspondence: Henry Shen-Lih Chen, MD, MBA, Department of Ophthalmology, Chang Gung Memorial Hospital, No. 5, Fu-Hsin Rd., Taoyuan 333, Taiwan.
- Guan-An Chen
  Healthcare Service Division, Department of Intelligent Medical & Healthcare, Service Systems Technology Center, Industrial Technology Research Institute, Hsinchu, Taiwan
- Jhen-Yang Syu
  Healthcare Service Division, Department of Intelligent Medical & Healthcare, Service Systems Technology Center, Industrial Technology Research Institute, Hsinchu, Taiwan
- Lan-Hsin Chuang
  College of Medicine, Chang Gung University, Taoyuan, Taiwan
  Department of Ophthalmology, Keelung Chang Gung Memorial Hospital, Keelung, Taiwan
- Wei-Wen Su
  Department of Ophthalmology, Chang Gung Memorial Hospital, Linkou Medical Center, Taoyuan, Taiwan
  College of Medicine, Chang Gung University, Taoyuan, Taiwan
- Wei-Chi Wu
  Department of Ophthalmology, Chang Gung Memorial Hospital, Linkou Medical Center, Taoyuan, Taiwan
  College of Medicine, Chang Gung University, Taoyuan, Taiwan
- Jian-Hong Liu
  Healthcare Service Division, Department of Intelligent Medical & Healthcare, Service Systems Technology Center, Industrial Technology Research Institute, Hsinchu, Taiwan
- Jian-Ren Chen
  Healthcare Service Division, Department of Intelligent Medical & Healthcare, Service Systems Technology Center, Industrial Technology Research Institute, Hsinchu, Taiwan
- Su-Chen Huang
  Healthcare Service Division, Department of Intelligent Medical & Healthcare, Service Systems Technology Center, Industrial Technology Research Institute, Hsinchu, Taiwan
- Eugene Yu-Chuan Kang
  Department of Ophthalmology, Chang Gung Memorial Hospital, Linkou Medical Center, Taoyuan, Taiwan
  College of Medicine, Chang Gung University, Taoyuan, Taiwan
  Graduate Institute of Clinical Medical Sciences, College of Medicine, Chang Gung University, Taoyuan, Taiwan
  Correspondence: Eugene Yu-Chuan Kang, MD, Department of Ophthalmology, Chang Gung Memorial Hospital, No. 5, Fu-Hsin Rd., Taoyuan 333, Taiwan.
46
Wang J, Zhao R, Li P, Fang Z, Li Q, Han Y, Zhou R, Zhang Y. Clinical Progress and Optimization of Information Processing in Artificial Visual Prostheses. Sensors (Basel) 2022; 22:6544. [PMID: 36081002 PMCID: PMC9460383 DOI: 10.3390/s22176544] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/26/2022] [Revised: 08/22/2022] [Accepted: 08/26/2022] [Indexed: 06/15/2023]
Abstract
Visual prostheses, used to help restore functional vision to the visually impaired, convert captured external images into corresponding electrical stimulation patterns that are delivered by implanted microelectrodes to induce phosphenes and, eventually, visual perception. Detecting and providing useful visual information to the prosthesis wearer under limited artificial vision has been an important concern in the field of visual prostheses. Along with the development of prosthetic device design and stimulus-encoding methods, researchers have explored the application of computer vision by simulating visual perception under prosthetic vision. Effective image processing is performed to optimize artificial visual information and improve the restoration of various important visual functions in implant recipients, allowing them to better meet their daily demands. This paper first reviews recent clinical implantations of different types of visual prostheses, summarizes the artificial visual perception of implant recipients, and focuses in particular on its irregularities, such as dropout and distorted phosphenes. It then reviews the important aspects of computer vision in the optimization of visual information processing and discusses the possibilities and shortcomings of these solutions. Finally, development directions and key issues for improving the performance of visual prosthesis devices are summarized.
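Simulated prosthetic vision of the kind surveyed here is commonly approximated by reducing an image to a coarse grid of phosphenes, with some electrodes dropped out. A minimal illustrative sketch follows; the grid size, block-averaging scheme, and dropout rate are assumptions for illustration, not the parameters of any reviewed device:

```python
import numpy as np

def simulate_phosphenes(image: np.ndarray, grid: int = 16,
                        dropout: float = 0.1, seed: int = 0) -> np.ndarray:
    """Downsample a grayscale image to a grid x grid phosphene map and
    zero out a random fraction of 'electrodes' to mimic dropout."""
    h, w = image.shape
    bh, bw = h // grid, w // grid
    # Mean brightness per block approximates one phosphene per electrode.
    blocks = image[:bh * grid, :bw * grid].reshape(grid, bh, grid, bw)
    phosphenes = blocks.mean(axis=(1, 3))
    rng = np.random.default_rng(seed)
    mask = rng.random((grid, grid)) >= dropout  # True = functioning electrode
    return phosphenes * mask
```

Researchers then evaluate how well subjects (or models) can perform recognition and navigation tasks when viewing only such low-resolution, partially dropped-out renderings.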
Affiliation(s)
- Jing Wang
  School of Information, Shanghai Ocean University, Shanghai 201306, China
  Key Laboratory of Fishery Information, Ministry of Agriculture, Shanghai 200335, China
- Rongfeng Zhao
  School of Information, Shanghai Ocean University, Shanghai 201306, China
- Peitong Li
  School of Information, Shanghai Ocean University, Shanghai 201306, China
- Zhiqiang Fang
  School of Information, Shanghai Ocean University, Shanghai 201306, China
- Qianqian Li
  School of Information, Shanghai Ocean University, Shanghai 201306, China
- Yanling Han
  School of Information, Shanghai Ocean University, Shanghai 201306, China
- Ruyan Zhou
  School of Information, Shanghai Ocean University, Shanghai 201306, China
- Yun Zhang
  School of Information, Shanghai Ocean University, Shanghai 201306, China
47
Sun K, He M, Xu Y, Wu Q, He Z, Li W, Liu H, Pi X. Multi-label classification of fundus images with graph convolutional network and LightGBM. Comput Biol Med 2022; 149:105909. [PMID: 35998479 DOI: 10.1016/j.compbiomed.2022.105909] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2022] [Revised: 07/03/2022] [Accepted: 07/16/2022] [Indexed: 12/01/2022]
Abstract
Early detection and treatment of retinal disorders are critical for avoiding irreversible visual impairment. Given that patients in the clinical setting may have several types of retinal illness at once, multi-label fundus disease detection models capable of screening for multiple diseases are more in line with clinical needs. This article presented a composite model based on hybrid graph convolution for patient-level multi-label fundus disease identification, comprising a backbone module, a hybrid graph convolution module, and a classifier module. The relationship between labels was established via graph convolution, and a self-attention mechanism was then employed to design a hybrid graph convolution structure. The backbone module extracted features using EfficientNet-B4, whereas the classifier module output multi-label predictions using LightGBM. Additionally, this work investigated the input pattern of binocular images and the influence of label correlation on identification performance. The proposed model, MCGL-Net, outperformed all other state-of-the-art methods on the publicly available ODIR dataset, with an F1 score reaching 91.60% on the test set. Ablation experiments showed that the hybrid graph convolutional structure and composite model designed in this paper improve performance under any backbone CNN: adopting hybrid graph convolution increased F1 by 2.39% in trials with EfficientNet-B4 as the backbone, and the composite model's F1 was 5.42% higher than that of the single EfficientNet-B4 model.
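The F1 comparisons above concern multi-label classification, where each fundus photograph may carry several disease labels at once. A minimal sketch of the metric follows, assuming micro-averaging over labels (common for multi-label benchmarks such as ODIR; the abstract does not state which averaging the authors used):

```python
import numpy as np

def micro_f1(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Micro-averaged F1 over binary multi-label matrices of shape (samples, labels)."""
    tp = np.logical_and(y_true == 1, y_pred == 1).sum()
    fp = np.logical_and(y_true == 0, y_pred == 1).sum()
    fn = np.logical_and(y_true == 1, y_pred == 0).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```

Micro-averaging pools true/false positives across all labels before computing precision and recall, so frequent diseases weigh more than rare ones; macro-averaging would instead average per-label F1 scores.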
Affiliation(s)
- Kai Sun
  Key Laboratory of Biorheological Science and Technology of Ministry of Education, College of Bioengineering, Chongqing University, Chongqing, China
- Mengjia He
  Key Laboratory of Biorheological Science and Technology of Ministry of Education, College of Bioengineering, Chongqing University, Chongqing, China
- Yao Xu
  Key Laboratory of Biorheological Science and Technology of Ministry of Education, College of Bioengineering, Chongqing University, Chongqing, China
- Qinying Wu
  Key Laboratory of Biorheological Science and Technology of Ministry of Education, College of Bioengineering, Chongqing University, Chongqing, China
- Zichun He
  Chongqing Red Cross Hospital (People's Hospital of Jiangbei District), Chongqing, China
- Wang Li
  School of Pharmacy and Bioengineering, Chongqing University of Technology, Chongqing, China
- Hongying Liu
  Key Laboratory of Biorheological Science and Technology of Ministry of Education, College of Bioengineering, Chongqing University, Chongqing, China; Chongqing Engineering Technology Research Center of Medical Electronic, Chongqing, 400030, People's Republic of China.
- Xitian Pi
  Key Laboratory of Biorheological Science and Technology of Ministry of Education, College of Bioengineering, Chongqing University, Chongqing, China; Chongqing Engineering Technology Research Center of Medical Electronic, Chongqing, 400030, People's Republic of China.
48
Sun K, He M, He Z, Liu H, Pi X. EfficientNet embedded with spatial attention for recognition of multi-label fundus disease from color fundus photographs. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103768] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
49
Deep learning-based prediction of outcomes following noncomplicated epiretinal membrane surgery. Retina 2022; 42:1465-1471. [PMID: 35877965 DOI: 10.1097/iae.0000000000003480] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
Purpose: We used deep learning to predict the final central foveal thickness (CFT), changes in CFT, final best-corrected visual acuity (BCVA), and BCVA changes following noncomplicated idiopathic epiretinal membrane surgery.
Methods: Data of patients who underwent noncomplicated epiretinal membrane surgery at Severance Hospital from January 1, 2010, to December 31, 2018, were reviewed. Patient age, sex, hypertension and diabetes statuses, and preoperative optical coherence tomography scans were noted. For image analysis and model development, a pre-trained VGG16 was adopted. The mean absolute error and coefficient of determination (R²) were used to evaluate model performance. The study involved 688 eyes of 657 patients.
Results: For final CFT, the mean absolute error was lowest in the model that considered only clinical and demographic characteristics; the highest accuracy was achieved by the model that considered all clinical and surgical information. For CFT changes, models utilizing clinical and surgical information showed the best performance. However, our best model failed to predict the final BCVA and BCVA changes.
Conclusion: A deep learning model predicted the final CFT and CFT changes in patients 1 year after epiretinal membrane surgery. CFT prediction showed the best results when demographic factors, comorbid diseases, and surgical techniques were considered.
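The two regression metrics named in this abstract, mean absolute error and the coefficient of determination (R²), can be sketched as follows (illustrative only, not the study's evaluation pipeline):

```python
import numpy as np

def mae(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean absolute error, e.g., in micrometers of central foveal thickness."""
    return float(np.mean(np.abs(y_true - y_pred)))

def r_squared(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Coefficient of determination: 1 - (residual sum of squares / total sum of squares)."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)
```

A lower MAE and an R² closer to 1 indicate better agreement between predicted and observed postoperative values; an R² near 0 (as with the failed BCVA predictions) means the model explains little of the outcome variance.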
50
Biswas S, Khan MIA, Hossain MT, Biswas A, Nakai T, Rohdin J. Which Color Channel Is Better for Diagnosing Retinal Diseases Automatically in Color Fundus Photographs? Life (Basel) 2022; 12:life12070973. [PMID: 35888063 PMCID: PMC9321111 DOI: 10.3390/life12070973] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/27/2022] [Revised: 05/25/2022] [Accepted: 06/01/2022] [Indexed: 11/22/2022]
Abstract
Color fundus photographs are the most common type of image used for the automatic diagnosis of retinal diseases and abnormalities. Like all color photographs, these images carry information about the three primary colors, i.e., red, green, and blue, in three separate color channels. This work aims to understand the impact of each channel on the automatic diagnosis of retinal diseases and abnormalities. To this end, the existing literature is surveyed extensively to determine which color channel is used most commonly for automatically detecting four leading causes of blindness and one retinal abnormality, and for segmenting three retinal landmarks. The survey makes clear that neural network-based systems typically use all channels together, whereas non-neural network-based systems most commonly use the green channel. However, the previous work supports no conclusion about the relative importance of the different channels, so systematic experiments are conducted to analyse this: a well-known U-shaped deep neural network (U-Net) is used to investigate which color channel is best for segmenting one retinal abnormality and three retinal landmarks.
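Channel-wise analysis of the kind described begins by separating the three color planes of an H x W x 3 image so that each can be fed to the segmentation model independently. A minimal sketch (the dictionary interface is an assumption for illustration, not the authors' code):

```python
import numpy as np

def split_channels(rgb: np.ndarray) -> dict[str, np.ndarray]:
    """Split an H x W x 3 fundus image into its red, green, and blue channels.
    The green channel is often preferred in classical (non-neural) pipelines
    because it tends to show the highest vessel-to-background contrast."""
    return {"red": rgb[..., 0], "green": rgb[..., 1], "blue": rgb[..., 2]}
```

Each single-channel plane can then be used as a one-channel input to a U-Net-style network, allowing per-channel segmentation performance to be compared directly.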
Affiliation(s)
- Sangeeta Biswas (Correspondence)
  Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Md. Iqbal Aziz Khan
  Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Md. Tanvir Hossain
  Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Angkan Biswas
  CAPM Company Limited, Bonani, Dhaka 1213, Bangladesh
- Takayoshi Nakai
  Faculty of Engineering, Shizuoka University, Hamamatsu 432-8561, Japan
- Johan Rohdin
  Faculty of Information Technology, Brno University of Technology, 61200 Brno, Czech Republic