1
Abramovich O, Pizem H, Van Eijgen J, Oren I, Melamed J, Stalmans I, Blumenthal EZ, Behar JA. FundusQ-Net: A regression quality assessment deep learning algorithm for fundus images quality grading. Comput Methods Programs Biomed 2023;239:107522. [PMID: 37285697] [DOI: 10.1016/j.cmpb.2023.107522]
Abstract
OBJECTIVE Ophthalmological pathologies such as glaucoma, diabetic retinopathy and age-related macular degeneration are major causes of blindness and vision impairment. There is a need for novel decision support tools that can simplify and speed up the diagnosis of these pathologies. A key step in this process is to automatically estimate the quality of the fundus images to ensure they are interpretable by a human operator or a machine learning model. We present a novel fundus image quality scale and a deep learning (DL) model that can estimate fundus image quality relative to this new scale. METHODS A total of 1245 images were graded for quality by two ophthalmologists on a scale of 1-10, with a resolution of 0.5. A DL regression model was trained for fundus image quality assessment. The architecture used was Inception-V3. The model was developed using a total of 89,947 images from 6 databases, of which 1245 were labeled by the specialists and the remaining 88,702 images were used for pre-training and semi-supervised learning. The final DL model was evaluated on an internal test set (n=209) as well as an external test set (n=194). RESULTS The final DL model, denoted FundusQ-Net, achieved a mean absolute error of 0.61 (0.54-0.68) on the internal test set. When evaluated as a binary classifier on the public DRIMDB database as an external test set, the model obtained an accuracy of 99%. SIGNIFICANCE The proposed algorithm provides a new robust tool for automated quality grading of fundus images.
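The headline result above is a mean absolute error with a 95% confidence interval. A minimal sketch of how such an interval can be obtained by bootstrap resampling of the per-image absolute errors (the grades below are hypothetical, not the paper's data, and the paper does not state its exact CI method):

```python
import numpy as np

def mae_with_ci(y_true, y_pred, n_boot=2000, seed=0):
    """Mean absolute error with a bootstrap 95% confidence interval."""
    errors = np.abs(np.asarray(y_true, float) - np.asarray(y_pred, float))
    mae = errors.mean()
    rng = np.random.default_rng(seed)
    boot = np.empty(n_boot)
    for i in range(n_boot):
        # resample the per-image errors with replacement
        idx = rng.integers(0, len(errors), len(errors))
        boot[i] = errors[idx].mean()
    lo, hi = np.percentile(boot, [2.5, 97.5])
    return mae, lo, hi

# Hypothetical quality grades on the paper's 1-10 scale (0.5 resolution)
true_grades = [7.0, 5.5, 9.0, 3.5, 6.0, 8.5]
pred_grades = [6.5, 6.0, 9.0, 4.5, 6.0, 8.0]
mae, lo, hi = mae_with_ci(true_grades, pred_grades)
print(f"MAE = {mae:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```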
Affiliation(s)
- Or Abramovich
- The Faculty of Biomedical Engineering, Technion-IIT, Haifa, Israel
- Hadas Pizem
- Rambam Medical Center: Rambam Health Care Campus, Israel
- Jan Van Eijgen
- Research Group of Ophthalmology, Department of Neurosciences, KU Leuven, Oude Markt 13, 3000 Leuven; Department of Ophthalmology, University Hospitals UZ Leuven, Herestraat 49, 3000 Leuven, Belgium
- Ilan Oren
- The Faculty of Biomedical Engineering, Technion-IIT, Haifa, Israel
- Joshua Melamed
- The Faculty of Biomedical Engineering, Technion-IIT, Haifa, Israel
- Ingeborg Stalmans
- Research Group of Ophthalmology, Department of Neurosciences, KU Leuven, Oude Markt 13, 3000 Leuven; Department of Ophthalmology, University Hospitals UZ Leuven, Herestraat 49, 3000 Leuven, Belgium
- Joachim A Behar
- The Faculty of Biomedical Engineering, Technion-IIT, Haifa, Israel
2
A Neural Network for Automated Image Quality Assessment of Optic Disc Photographs. J Clin Med 2023;12:jcm12031217. [PMID: 36769865] [PMCID: PMC9917571] [DOI: 10.3390/jcm12031217]
Abstract
This study describes the development of a convolutional neural network (CNN) for automated assessment of optic disc photograph quality. Using a code-free deep learning platform, a total of 2377 optic disc photographs were used to develop a deep CNN capable of determining optic disc photograph quality. Of these, 1002 were good-quality images, 609 were acceptable-quality images, and 766 were poor-quality images. The dataset was split 80/10/10 into training, validation, and test sets and balanced for quality. A ternary classification model (good, acceptable, and poor quality) and a binary model (usable, unusable) were developed. In the ternary classification system, the model had an overall accuracy of 91% and an AUC of 0.98. The model had higher predictive accuracy for images of good (93%) and poor quality (96%) than for images of acceptable quality (91%). The binary model performed with an overall accuracy of 98% and an AUC of 0.99. When validated on 292 images not included in the original training/validation/test dataset, the model's accuracy was 85% on the three-class classification task and 97% on the binary classification task. The proposed system for automated image-quality assessment of optic disc photographs achieves high accuracy in both the ternary and binary classification systems, and highlights the success achievable with a code-free platform. There is wide clinical and research potential for such a model, with applications ranging from integration into fundus camera software to provide immediate feedback to ophthalmic photographers, to prescreening large databases before their use in research.
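The gap between the ternary (91%) and binary (98%) accuracies is what one would expect when confusions between adjacent quality grades collapse into the same binary class. A small sketch with hypothetical labels (the mapping of good/acceptable to "usable" is an assumption about how the binary task is defined):

```python
# Hypothetical mapping from the three quality grades to the binary task
TERNARY_TO_BINARY = {"good": "usable", "acceptable": "usable", "poor": "unusable"}

def accuracy(y_true, y_pred):
    """Fraction of predictions that exactly match the reference label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy labels: one "acceptable" image is misgraded as "good" (index 1),
# which counts as an error for the ternary task but not the binary one
y_true = ["good", "acceptable", "poor", "good", "poor", "acceptable"]
y_pred = ["good", "good", "poor", "good", "poor", "acceptable"]

ternary_acc = accuracy(y_true, y_pred)
binary_acc = accuracy([TERNARY_TO_BINARY[t] for t in y_true],
                      [TERNARY_TO_BINARY[p] for p in y_pred])
print(ternary_acc, binary_acc)  # binary accuracy >= ternary accuracy
```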
3
Chan E, Tang Z, Najjar RP, Narayanaswamy A, Sathianvichitr K, Newman NJ, Biousse V, Milea D. A Deep Learning System for Automated Quality Evaluation of Optic Disc Photographs in Neuro-Ophthalmic Disorders. Diagnostics (Basel) 2023;13:diagnostics13010160. [PMID: 36611452] [PMCID: PMC9818957] [DOI: 10.3390/diagnostics13010160]
Abstract
The quality of ocular fundus photographs can affect the accuracy of the morphologic assessment of the optic nerve head (ONH), either by humans or by deep learning systems (DLS). In order to automatically identify ONH photographs of optimal quality, we have developed, trained, and tested a DLS, using an international, multicentre, multi-ethnic dataset of 5015 ocular fundus photographs from 31 centres in 20 countries participating in the Brain and Optic Nerve Study with Artificial Intelligence (BONSAI). The reference standard in image quality was established by three experts who independently classified photographs as of "good", "borderline", or "poor" quality. The DLS was trained on 4208 fundus photographs and tested on an independent external dataset of 807 photographs, using a multi-class model, evaluated with a one-vs-rest classification strategy. In the external-testing dataset, the DLS could identify with excellent performance "good" quality photographs (AUC = 0.93 (95% CI, 0.91-0.95), accuracy = 91.4% (95% CI, 90.0-92.9%), sensitivity = 93.8% (95% CI, 92.5-95.2%), specificity = 75.9% (95% CI, 69.7-82.1%)) and "poor" quality photographs (AUC = 1.00 (95% CI, 0.99-1.00), accuracy = 99.1% (95% CI, 98.6-99.6%), sensitivity = 81.5% (95% CI, 70.6-93.8%), specificity = 99.7% (95% CI, 99.6-100.0%)). "Borderline" quality images were also accurately classified (AUC = 0.90 (95% CI, 0.88-0.93), accuracy = 90.6% (95% CI, 89.1-92.2%), sensitivity = 65.4% (95% CI, 56.6-72.9%), specificity = 93.4% (95% CI, 92.1-94.8%)). The overall accuracy to distinguish among the three classes was 90.6% (95% CI, 89.1-92.1%), suggesting that this DLS could select optimal quality fundus photographs in patients with neuro-ophthalmic and neurological disorders affecting the ONH.
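The per-class accuracy, sensitivity, and specificity figures above follow from a one-vs-rest reading of a three-class confusion matrix: each class in turn is treated as positive and the other two as negative. A minimal sketch with a hypothetical confusion matrix (not the paper's counts):

```python
import numpy as np

def one_vs_rest_metrics(cm, k):
    """Per-class metrics from confusion matrix cm[true, pred],
    treating class k as positive and all other classes as negative."""
    cm = np.asarray(cm)
    tp = cm[k, k]
    fn = cm[k].sum() - tp        # class-k images predicted as another class
    fp = cm[:, k].sum() - tp     # other-class images predicted as class k
    tn = cm.sum() - tp - fn - fp
    return {"accuracy": (tp + tn) / cm.sum(),
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp)}

# Hypothetical 3x3 confusion matrix: rows = true, cols = predicted,
# in the order (good, borderline, poor)
cm = [[120, 8, 2],
      [10, 35, 5],
      [1, 4, 60]]
for k, name in enumerate(["good", "borderline", "poor"]):
    m = one_vs_rest_metrics(cm, k)
    print(name, {key: round(v, 3) for key, v in m.items()})
```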
Affiliation(s)
- Ebenezer Chan
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 169856, Singapore
- Duke-NUS School of Medicine, Singapore 169857, Singapore
- Zhiqun Tang
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 169856, Singapore
- Raymond P. Najjar
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 169856, Singapore
- Duke-NUS School of Medicine, Singapore 169857, Singapore
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore 117597, Singapore
- Center for Innovation & Precision Eye Health, National University of Singapore, Singapore 119077, Singapore
- Arun Narayanaswamy
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 169856, Singapore
- Glaucoma Department, Singapore National Eye Centre, Singapore 168751, Singapore
- Nancy J. Newman
- Departments of Ophthalmology and Neurology, Emory University, Atlanta, GA 30322, USA
- Valérie Biousse
- Departments of Ophthalmology and Neurology, Emory University, Atlanta, GA 30322, USA
- Dan Milea
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore 169856, Singapore
- Duke-NUS School of Medicine, Singapore 169857, Singapore
- Department of Ophthalmology, Rigshospitalet, University of Copenhagen, 2600 Copenhagen, Denmark
- Department of Ophthalmology, Angers University Hospital, 49100 Angers, France
- Neuro-Ophthalmology Department, Singapore National Eye Centre, Singapore 168751, Singapore
- Correspondence:
4
Teo ZL, Lee AY, Campbell P, Chan RVP, Ting DSW. Developments in Artificial Intelligence for Ophthalmology: Federated Learning. Asia Pac J Ophthalmol (Phila) 2022;11:500-502. [PMID: 36417673] [DOI: 10.1097/apo.0000000000000582]
Affiliation(s)
- Zhen Ling Teo
- Singapore National Eye Centre, Singapore
- Singapore Eye Research Institute, Singapore
- Aaron Y Lee
- Department of Ophthalmology, US Roger and Angie Karalis Johnson Retina Center, University of Washington, Seattle, WA
- Peter Campbell
- Department of Ophthalmology, Oregon Health and Science University, Portland, OR
- R V Paul Chan
- Department of Ophthalmology, University of Illinois Chicago, Chicago, IL
- Daniel S W Ting
- Singapore National Eye Centre, Singapore
- Singapore Eye Research Institute, Singapore
- Duke-NUS Medical School, Singapore
5
Amanova N, Martin J, Elster C. Explainability for deep learning in mammography image quality assessment. Machine Learning: Science and Technology 2022. [DOI: 10.1088/2632-2153/ac7a03]
Abstract
The application of deep learning has recently been proposed for the assessment of image quality in mammography. It was demonstrated in a proof-of-principle study that the proposed approach can be more efficient than currently applied automated conventional methods. However, in contrast to conventional methods, the deep learning approach has a black-box nature and, before it can be recommended for routine use, it must be understood more thoroughly. For this purpose, we propose and apply a new explainability method: the oriented, modified integrated gradients (OMIG) method. The design of this method is inspired by the integrated gradients method but adapted considerably to the use case at hand. To further enhance this method, an upsampling technique is developed that produces high-resolution explainability maps for the downsampled data used by the deep learning approach. Comparison with established explainability methods demonstrates that the proposed approach yields substantially more expressive and informative results for our specific use case. Application of the proposed explainability approach generally confirms the validity of the considered deep learning-based mammography image quality assessment (IQA) method. Specifically, it is demonstrated that the predicted image quality is based on a meaningful mapping that makes successful use of certain geometric structures of the images. In addition, the novel explainability method helps us to identify the parts of the employed phantom that have the largest impact on the predicted image quality, and to shed some light on cases in which the trained neural networks fail to work as expected. While tailored to assess a specific approach from deep learning for mammography IQA, the proposed explainability method could also become relevant in other, similar deep learning applications based on high-dimensional images.
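OMIG builds on the standard integrated gradients method, which attributes a model's output to its input features by integrating gradients along a straight-line path from a baseline to the input. A minimal NumPy sketch of plain integrated gradients on a toy scalar function (OMIG's specific modifications are not reproduced here); the completeness property checked at the end is that the attributions sum to f(x) - f(baseline):

```python
import numpy as np

def integrated_gradients(f, x, baseline, steps=200, eps=1e-5):
    """Approximate integrated gradients for scalar f along the straight-line
    path from baseline to x, using a midpoint Riemann sum over the path."""
    x, baseline = np.asarray(x, float), np.asarray(baseline, float)
    total = np.zeros_like(x)
    for s in range(steps):
        alpha = (s + 0.5) / steps
        point = baseline + alpha * (x - baseline)
        # central-difference gradient of f at `point`
        grad = np.empty_like(point)
        for i in range(len(point)):
            d = np.zeros_like(point)
            d[i] = eps
            grad[i] = (f(point + d) - f(point - d)) / (2 * eps)
        total += grad
    return (x - baseline) * total / steps

# Toy "quality score": a smooth nonlinear function of a 3-pixel image
f = lambda v: v[0] ** 2 + 2.0 * v[1] + np.sin(v[2])
x = np.array([1.0, 2.0, 0.5])
baseline = np.zeros(3)
attr = integrated_gradients(f, x, baseline)
# Completeness axiom: attributions sum (approximately) to f(x) - f(baseline)
print(attr, attr.sum(), f(x) - f(baseline))
```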
6
Zheng C, Ye H, Yang J, Fei P, Qiu Y, Xie X, Wang Z, Chen J, Zhao P. Development and Clinical Validation of Semi-Supervised Generative Adversarial Networks for Detection of Retinal Disorders in Optical Coherence Tomography Images Using Small Dataset. Asia Pac J Ophthalmol (Phila) 2022;11:219-226. [PMID: 35342179] [DOI: 10.1097/apo.0000000000000498]
Abstract
PURPOSE To develop and test semi-supervised generative adversarial networks (GANs) that detect retinal disorders on optical coherence tomography (OCT) images using a small labeled dataset. METHODS From a public database, we randomly chose a small supervised dataset with 400 OCT images (100 choroidal neovascularization, 100 diabetic macular edema, 100 drusen, and 100 normal) and assigned all other OCT images to the unsupervised dataset (107,912 images without labeling). We adopted a semi-supervised GAN and a supervised deep learning (DL) model for automatically detecting retinal disorders from OCT images. The performance of the 2 models was compared on 3 testing datasets acquired with different OCT devices. The evaluation metrics included accuracy, sensitivity, specificity, and the area under the receiver operating characteristic curve. RESULTS The local validation dataset included 1000 images, with 250 from each category. The independent clinical datasets included 366 OCT images acquired with a Cirrus OCT device from Shanghai Shibei Hospital and 511 OCT images acquired with an RTVue OCT device from Xinhua Hospital. The semi-supervised GAN classifier achieved better accuracy than the supervised DL model (0.91 vs 0.86 on the local validation dataset, 0.91 vs 0.86 on the Shanghai Shibei Hospital testing dataset, and 0.93 vs 0.92 on the Xinhua Hospital testing dataset). For distinguishing urgent referrals (choroidal neovascularization and diabetic macular edema) from nonurgent referrals (drusen and normal) on OCT images, the semi-supervised GAN classifier also achieved better areas under the receiver operating characteristic curve than the supervised DL model (0.99 vs 0.97, 0.97 vs 0.96, and 0.99 vs 0.99, respectively). CONCLUSIONS A semi-supervised GAN can achieve better performance than a supervised DL model when the labeled dataset is limited. The current study offers utility to various research and clinical studies using DL with relatively small datasets.
Semi-supervised GANs can detect retinal disorders from OCT images using a relatively small dataset.
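A common formulation of semi-supervised GAN classification (the paper's exact architecture is not specified here, so this follows the standard (K+1)-class discriminator idea) gives the discriminator K real-category outputs plus one "fake" output: labeled images contribute a cross-entropy term, unlabeled real images are pushed away from the fake class, and generated images are pushed into it. A minimal NumPy sketch of the three loss terms, with toy logits:

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def discriminator_losses(logits_labeled, label, logits_unlabeled, logits_fake):
    """Loss terms for a (K+1)-class semi-supervised GAN discriminator:
    classes 0..K-1 are real categories, class K is 'fake'."""
    K = len(logits_labeled) - 1
    # Supervised term: cross-entropy on the true class of a labeled real image
    l_sup = -np.log(softmax(logits_labeled)[label])
    # Unsupervised terms: real images should avoid the fake class,
    # generated images should land in it
    l_real = -np.log(1.0 - softmax(logits_unlabeled)[K])
    l_fake = -np.log(softmax(logits_fake)[K])
    return l_sup, l_real, l_fake

# Toy logits for K = 4 retinal categories (CNV, DME, drusen, normal) + fake
l_sup, l_real, l_fake = discriminator_losses(
    logits_labeled=np.array([2.0, 0.1, -1.0, 0.3, -2.0]), label=0,
    logits_unlabeled=np.array([1.5, 0.5, 0.0, 0.2, -3.0]),
    logits_fake=np.array([0.1, 0.0, 0.2, -0.1, 2.5]))
print(l_sup, l_real, l_fake)
```

All three terms are standard cross-entropy-style losses, so each is strictly positive and shrinks as the discriminator improves on its respective sub-task.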
Affiliation(s)
- Ce Zheng
- Department of Ophthalmology, Xinhua Hospital, Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai, China
- Hongfei Ye
- Department of Ophthalmology, Xinhua Hospital, Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai, China
- Jianlong Yang
- Ningbo Institute of Industrial Technology, Chinese Academy of Sciences, Ningbo, China
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Ping Fei
- Department of Ophthalmology, Xinhua Hospital, Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai, China
- Yingping Qiu
- Department of Ophthalmology, Xinhua Hospital, Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai, China
- Xiaolin Xie
- Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong, Shantou University Medical College, Shantou, Guangdong, China
- Zilei Wang
- Shanghai Children's Hospital, Shanghai, China
- Jili Chen
- Department of Ophthalmology, Shibei Hospital, Shanghai, China
- Peiquan Zhao
- Department of Ophthalmology, Xinhua Hospital, Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai, China
7
Leong YY, Vasseneix C, Finkelstein MT, Milea D, Najjar RP. Artificial Intelligence Meets Neuro-Ophthalmology. Asia Pac J Ophthalmol (Phila) 2022;11:111-125. [PMID: 35533331] [DOI: 10.1097/apo.0000000000000512]
Abstract
Recent advances in artificial intelligence have provided ophthalmologists with fast, accurate, and automated means for diagnosing and treating ocular conditions, paving the way to a modern and scalable eye care system. Compared to other ophthalmic disciplines, neuro-ophthalmology has, until recently, not benefitted from significant advances in the area of artificial intelligence. In this narrative review, we summarize and discuss recent advancements utilizing artificial intelligence for the detection of structural and functional optic nerve head abnormalities and ocular movement disorders in neuro-ophthalmology.
Affiliation(s)
- Caroline Vasseneix
- Singapore Eye Research Institute, Singapore, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- Dan Milea
- Singapore National Eye Center, Singapore, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- Raymond P Najjar
- Singapore Eye Research Institute, Singapore, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore