1
Abd El-Khalek AA, Balaha HM, Sewelam A, Ghazal M, Khalil AT, Abo-Elsoud MEA, El-Baz A. A Comprehensive Review of AI Diagnosis Strategies for Age-Related Macular Degeneration (AMD). Bioengineering (Basel) 2024; 11:711. PMID: 39061793; PMCID: PMC11273790; DOI: 10.3390/bioengineering11070711.
Abstract
The rapid advancement of computational infrastructure has led to unprecedented growth in machine learning, deep learning, and computer vision, fundamentally transforming the analysis of retinal images. By utilizing a wide array of visual cues extracted from retinal fundus images, sophisticated artificial intelligence models have been developed to diagnose various retinal disorders. This paper concentrates on the detection of Age-Related Macular Degeneration (AMD), a significant retinal condition, offering an exhaustive examination of recent machine learning and deep learning methodologies and discussing the potential obstacles and constraints of deploying this technology in ophthalmology. Through a systematic review, the research assesses the efficacy of machine learning and deep learning techniques in discerning AMD across different imaging modalities, given their demonstrated promise in diagnosing AMD and other retinal disorders. Organized around prevalent datasets and imaging techniques, the paper first outlines assessment criteria, image preprocessing methodologies, and learning frameworks before conducting a thorough investigation of diverse approaches for AMD detection. Drawing insights from more than 30 selected studies, the conclusion underscores current research trajectories, major challenges, and future prospects in AMD diagnosis, providing a valuable resource for both scholars and practitioners in the domain.
Affiliation(s)
- Aya A. Abd El-Khalek: Communications and Electronics Engineering Department, Nile Higher Institute for Engineering and Technology, Mansoura 35511, Egypt
- Hossam Magdy Balaha: Department of Bioengineering, J.B. Speed School of Engineering, University of Louisville, Louisville, KY 40292, USA
- Ashraf Sewelam: Ophthalmology Department, Faculty of Medicine, Mansoura University, Mansoura 35511, Egypt
- Mohammed Ghazal: Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Abeer T. Khalil: Communications and Electronics Engineering Department, Faculty of Engineering, Mansoura University, Mansoura 35511, Egypt
- Mohy Eldin A. Abo-Elsoud: Communications and Electronics Engineering Department, Faculty of Engineering, Mansoura University, Mansoura 35511, Egypt
- Ayman El-Baz: Department of Bioengineering, J.B. Speed School of Engineering, University of Louisville, Louisville, KY 40292, USA
2
Bekollari M, Dettoraki M, Stavrou V, Glotsos D, Liaparinos P. Computer-Aided Discrimination of Glaucoma Patients from Healthy Subjects Using the RETeval Portable Device. Diagnostics (Basel) 2024; 14:349. PMID: 38396388; PMCID: PMC10888400; DOI: 10.3390/diagnostics14040349.
Abstract
Glaucoma is a chronic, progressive eye disease affecting the optic nerve, which may cause visual damage and blindness. In this study, we present a machine-learning investigation to classify patients with glaucoma (case group) against normal participants (control group). We examined 172 eyes at the Ophthalmology Clinic of the "Elpis" General Hospital of Athens between October 2022 and September 2023. In addition, we investigated glaucoma classification in terms of (a) eye selection and (b) gender. Our methodology was based on features extracted via two diagnostic optical systems: (i) conventional optical coherence tomography (OCT) and (ii) a modern RETeval portable device. The machine-learning approach comprised three different classifiers: the Bayesian classifier, the Probabilistic Neural Network (PNN), and Support Vector Machines (SVMs). For all cases examined, classification accuracy was significantly higher with the RETeval device than with the OCT system: by 14.7% for all participants, by 13.4% and 29.3% for eye selection (right and left, respectively), and by 25.6% and 22.6% for gender (male and female, respectively). The SVM was the most efficient of the three classifiers. In summary, all aforementioned comparisons demonstrate that the RETeval device has an advantage over the OCT system for classifying glaucoma patients using the machine-learning approach.
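Illustratively, the kind of side-by-side classifier comparison described above can be sketched with scikit-learn; the features below are synthetic stand-ins, not the study's RETeval or OCT measurements, and the PNN is omitted:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for per-eye feature vectors (case vs. control labels).
X, y = make_classification(n_samples=172, n_features=8, n_informative=5,
                           random_state=0)

classifiers = {
    "Bayesian (Gaussian naive Bayes)": GaussianNB(),
    "SVM (RBF kernel)": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
}
for name, clf in classifiers.items():
    # 5-fold cross-validated accuracy, comparable across classifiers.
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {acc:.3f}")
```

The same loop applied to two feature sets (one per device) reproduces the device-versus-device comparison the study performs.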
Affiliation(s)
- Marsida Bekollari: Department of Biomedical Engineering, University of West Attica, Ag. Spyridonos, 12243 Athens, Greece
- Maria Dettoraki: Department of Ophthalmology, “Elpis” General Hospital, 11522 Athens, Greece
- Valentina Stavrou: Department of Ophthalmology, “Elpis” General Hospital, 11522 Athens, Greece
- Dimitris Glotsos: Department of Biomedical Engineering, University of West Attica, Ag. Spyridonos, 12243 Athens, Greece
- Panagiotis Liaparinos: Department of Biomedical Engineering, University of West Attica, Ag. Spyridonos, 12243 Athens, Greece
3
Wu TE, Chen JW, Liu TC, Yu CH, Jhou MJ, Lu CJ. Identifying and Exploring the Impact Factors for Intraocular Pressure Prediction in Myopic Children with Atropine Control Utilizing Multivariate Adaptive Regression Splines. J Pers Med 2024; 14:125. PMID: 38276247; PMCID: PMC10817583; DOI: 10.3390/jpm14010125.
Abstract
PURPOSE The treatment of childhood myopia often involves topical atropine, which has been demonstrated to decelerate the progression of myopia. It is crucial to monitor intraocular pressure (IOP) to ensure the safety of topical atropine. This study aims to identify the optimal machine learning IOP-monitoring module and establish a precise baseline IOP as a clinical safety reference for atropine medication. METHODS Data from 1545 eyes of 1171 children receiving atropine for myopia were retrospectively analyzed. Nineteen variables, including patient demographics, medical history, refractive error, and IOP measurements, were considered. A multivariate adaptive regression spline (MARS) model was used to analyze the impact of different factors on the End IOP. RESULTS The MARS model identified age, baseline IOP, End Spherical, duration of previous atropine treatment, and duration of current atropine treatment as the five most significant factors influencing the End IOP. Baseline IOP had the most significant effect on final IOP, with a notable knot at 14 mmHg. When the baseline IOP was 14 mmHg or higher, there was a positive correlation between atropine use and End IOP, suggesting that atropine may increase the End IOP in children with a baseline IOP of 14 mmHg or more. CONCLUSIONS The MARS model captures nonlinearity better than classic multiple linear regression for predicting End IOP. It is crucial to acknowledge that administering atropine may elevate intraocular pressure when the baseline IOP is 14 mmHg or higher. These findings offer valuable insights into factors affecting IOP in children undergoing atropine treatment for myopia, enabling clinicians to make informed treatment decisions.
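The hinge-function basis that lets MARS place a knot at 14 mmHg can be sketched with plain NumPy least squares; the data and coefficients below are simulated for illustration (only the 14 mmHg knot location comes from the abstract):

```python
import numpy as np

# Simulated cohort: flat End IOP below the knot, rising above it, plus noise.
rng = np.random.default_rng(0)
baseline_iop = rng.uniform(8.0, 22.0, 200)
end_iop = (13.0 + 0.8 * np.maximum(0.0, baseline_iop - 14.0)
           + rng.normal(0.0, 0.3, 200))

# MARS models the response as a sum of mirrored hinge basis functions
# max(0, x - knot) and max(0, knot - x); fit their coefficients directly.
knot = 14.0
B = np.column_stack([
    np.ones_like(baseline_iop),
    np.maximum(0.0, baseline_iop - knot),   # active above the knot
    np.maximum(0.0, knot - baseline_iop),   # active below the knot
])
coef, *_ = np.linalg.lstsq(B, end_iop, rcond=None)
print(coef)  # slope above the knot should recover ~0.8, below ~0
```

A full MARS fit also searches over candidate knot locations and prunes terms; this sketch shows only why a single knot captures the piecewise-linear effect a global linear regression cannot.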
Affiliation(s)
- Tzu-En Wu: Department of Ophthalmology, Shin Kong Wu Ho-Su Memorial Hospital, Taipei 11101, Taiwan; School of Medicine, Fu Jen Catholic University, New Taipei City 24205, Taiwan
- Jun-Wei Chen: School of Medicine, Chang Gung University, Taoyuan 33302, Taiwan
- Tzu-Chi Liu: Graduate Institute of Business Administration, Fu Jen Catholic University, New Taipei City 24205, Taiwan
- Chieh-Han Yu: School of Medicine, Chang Gung University, Taoyuan 33302, Taiwan
- Mao-Jhen Jhou: Graduate Institute of Business Administration, Fu Jen Catholic University, New Taipei City 24205, Taiwan
- Chi-Jie Lu: Graduate Institute of Business Administration, Fu Jen Catholic University, New Taipei City 24205, Taiwan; Artificial Intelligence Development Center, Fu Jen Catholic University, New Taipei City 24205, Taiwan; Department of Information Management, Fu Jen Catholic University, New Taipei City 24205, Taiwan
4
Anderson M, Sadiq S, Nahaboo Solim M, Barker H, Steel DH, Habib M, Obara B. Biomedical Data Annotation: An OCT Imaging Case Study. J Ophthalmol 2023; 2023:5747010. PMID: 37650051; PMCID: PMC10465257; DOI: 10.1155/2023/5747010.
Abstract
In ophthalmology, optical coherence tomography (OCT) is a widely used imaging modality, allowing visualisation of the structures of the eye with objective and quantitative cross-sectional three-dimensional (3D) volumetric scans. Due to the quantity of data generated from OCT scans and the time taken for an ophthalmologist to inspect for various disease pathology features, automated image analysis in the form of deep neural networks has seen success for the classification and segmentation of OCT layers and quantification of features. However, existing high-performance deep learning approaches rely on huge training datasets with high-quality annotations, which are challenging to obtain in many clinical applications. Collecting annotations from less experienced clinicians could alleviate time constraints on more senior clinicians and speed up the collection of medical image annotations; however, with less experience comes the possibility of reduced annotation quality. In this study, we evaluate the quality of diabetic macular edema (DME) intraretinal fluid (IRF) biomarker image annotations on OCT B-scans from five clinicians with a range of experience. We also assess the effectiveness of annotating across multiple sessions following a training session led by an expert clinician. Our investigation shows notable variance in annotation performance that correlates with the clinician's experience in OCT image interpretation of DME, and that multiple annotation sessions have only a limited effect on annotation quality.
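Annotation agreement of the kind evaluated here is often quantified with the Dice similarity coefficient between binary masks; a minimal sketch on toy masks (not the study's B-scans):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity between two binary masks (1 = annotated IRF pixel)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

# Toy 8x8 masks standing in for one B-scan annotated by two clinicians.
expert = np.zeros((8, 8), dtype=int)
expert[2:6, 2:6] = 1          # 16 pixels
trainee = np.zeros((8, 8), dtype=int)
trainee[3:6, 2:6] = 1         # 12 pixels, fully inside the expert region
print(dice(expert, trainee))  # 2*12 / (16+12) ≈ 0.857
```

Averaging this score per annotator against an expert reference gives one simple way to rank annotation quality across experience levels.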
Affiliation(s)
- Matthew Anderson: School of Computing, Newcastle University, Urban Sciences Building, Newcastle upon Tyne NE4 5TG, UK
- Salman Sadiq: Sunderland Eye Infirmary, Queen Alexandra Rd, Sunderland NE4 5TG, UK
- Hannah Barker: Sunderland Eye Infirmary, Queen Alexandra Rd, Sunderland NE4 5TG, UK
- David H. Steel: Sunderland Eye Infirmary, Queen Alexandra Rd, Sunderland NE4 5TG, UK; Bioscience Institute, Newcastle University, Catherine Cookson Building, Newcastle upon Tyne NE2 4HH, UK
- Maged Habib: Sunderland Eye Infirmary, Queen Alexandra Rd, Sunderland NE4 5TG, UK; Bioscience Institute, Newcastle University, Catherine Cookson Building, Newcastle upon Tyne NE2 4HH, UK
- Boguslaw Obara: School of Computing, Newcastle University, Urban Sciences Building, Newcastle upon Tyne NE4 5TG, UK; Bioscience Institute, Newcastle University, Catherine Cookson Building, Newcastle upon Tyne NE2 4HH, UK
5
Computational intelligence in eye disease diagnosis: a comparative study. Med Biol Eng Comput 2023; 61:593-615. PMID: 36595155; DOI: 10.1007/s11517-022-02737-3.
Abstract
In recent years, eye disorders have become an important health issue among older people. Generally, individuals with eye diseases are unaware of the gradual onset of symptoms, so routine eye examinations are required for early diagnosis. Usually, eye disorders are identified by an ophthalmologist via a slit-lamp investigation. Slit-lamp interpretations are inadequate due to differences in the analytical skills of ophthalmologists, inconsistency in eye disorder analysis, and record maintenance issues. Therefore, digital images of the eye and computational intelligence (CI)-based approaches are preferred as assistive methods for eye disease diagnosis. A comparative study of CI-based decision support models for eye disorder diagnosis is presented in this paper. The CI-based decision support systems used for eye abnormality diagnosis were grouped into anterior and retinal eye abnormality diagnostic systems, and the numerous algorithms used for diagnosing these abnormalities were also briefed. Various eye imaging modalities, pre-processing methods such as reflection removal, contrast enhancement, and region-of-interest segmentation, and public eye image databases used for CI-based eye disease diagnosis system development are also discussed. In this comparative study, the reliability of various CI-based systems for anterior eye and retinal disorder diagnosis was compared based on precision, sensitivity, and specificity. The outcomes of the comparative analysis indicate that the CI-based anterior and retinal disease diagnosis systems attained significant prediction accuracy. Hence, these CI-based diagnosis systems can be used in clinics to reduce the burden on physicians, minimize fatigue-related misdetection, and make precise clinical decisions.
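The reliability criteria used in the comparison (precision, sensitivity, specificity) follow directly from a binary confusion matrix; a self-contained sketch with made-up predictions:

```python
import numpy as np

def diagnostic_metrics(y_true, y_pred):
    """Precision, sensitivity (recall), and specificity from binary labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))  # diseased, flagged
    tn = np.sum((y_true == 0) & (y_pred == 0))  # healthy, cleared
    fp = np.sum((y_true == 0) & (y_pred == 1))  # healthy, flagged
    fn = np.sum((y_true == 1) & (y_pred == 0))  # diseased, missed
    return {
        "precision":   tp / (tp + fp),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# Toy predictions from a hypothetical eye-disease classifier.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
print(diagnostic_metrics(y_true, y_pred))
```

Reporting all three together matters clinically: high sensitivity limits missed disease, while high specificity limits unnecessary referrals.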
6
Classification of breast cancer histology images using MSMV-PFENet. Sci Rep 2022; 12:17447. PMID: 36261463; PMCID: PMC9581896; DOI: 10.1038/s41598-022-22358-y.
Abstract
Deep learning has been used extensively in histopathological image classification, but people in this field are still exploring new neural network architectures for more effective and efficient cancer diagnosis. Here, we propose the multi-scale, multi-view progressive feature encoding network (MSMV-PFENet) for effective classification. With respect to the density of cell nuclei, we selected the regions potentially related to carcinogenesis at multiple scales from each view. The progressive feature encoding network then extracted the global and local features from these regions. A bidirectional long short-term memory analyzed the encoding vectors to get a category score, and finally the majority voting method integrated different views to classify the histopathological images. We tested our method on the breast cancer histology dataset from the ICIAR 2018 grand challenge. The proposed MSMV-PFENet achieved 93.0% and 94.8% accuracies at the patch and image levels, respectively. This method can potentially benefit the clinical cancer diagnosis.
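The final majority-voting step across views can be sketched as follows; the per-view scores are invented for illustration:

```python
import numpy as np

# Toy per-view class scores for one histology image (3 views x 4 classes),
# standing in for the BiLSTM category scores described above.
view_scores = np.array([
    [0.1, 0.7, 0.1, 0.1],
    [0.2, 0.5, 0.2, 0.1],
    [0.4, 0.3, 0.2, 0.1],
])

per_view_votes = view_scores.argmax(axis=1)        # class chosen by each view
counts = np.bincount(per_view_votes, minlength=4)  # tally the votes
final_class = counts.argmax()                      # majority vote across views
print(per_view_votes, final_class)  # votes [1 1 0] -> class 1
```

Hard voting like this discards score magnitudes; averaging `view_scores` before the argmax (soft voting) is the common alternative when views produce calibrated probabilities.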
7
Su Y, Cheng J, Cao G, Liu H. How to design a deep neural network for retinal vessel segmentation: an empirical study. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2022.103761.
8
Multiple Ocular Disease Diagnosis Using Fundus Images Based on Multi-Label Deep Learning Classification. Electronics 2022. DOI: 10.3390/electronics11131966.
Abstract
Designing computer-aided diagnosis (CAD) systems that can automatically detect ocular diseases (ODs) has become an active research field in the health domain. Although the human eye might have more than one OD simultaneously, most existing systems are designed to detect specific eye diseases. Therefore, it is crucial to develop new CAD systems that can detect multiple ODs simultaneously. This paper presents a novel multi-label convolutional neural network (ML-CNN) system based on ML classification (MLC) to diagnose various ODs from color fundus images. The proposed ML-CNN-based system consists of three main phases: the preprocessing phase, which includes normalization and augmentation using several transformation processes; the modeling phase; and the prediction phase. The proposed ML-CNN comprises three convolution (CONV) layers followed by one max-pooling (MP) layer, then two further CONV layers followed by one MP layer and dropout (DO), a flatten layer, a fully connected (FC) layer, a second DO layer, and a final FC layer with 45 nodes. The system outputs the probabilities of all 45 diseases for each image. We validated the model using cross-validation (CV) and measured performance by five different metrics: accuracy (ACC), recall, precision, Dice similarity coefficient (DSC), and area under the curve (AUC). The results are 94.3%, 80%, 91.5%, 99%, and 96.7%, respectively. Comparisons with existing built-in models, such as MobileNetV2, DenseNet201, SeResNext50, InceptionV3, and InceptionResNetV2, demonstrate the superiority of the proposed ML-CNN model.
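The multi-label prediction phase described above differs from ordinary softmax classification in that each of the 45 outputs is thresholded independently; a minimal sketch with random stand-in probabilities:

```python
import numpy as np

# Hypothetical sigmoid outputs of the final 45-node FC layer for 2 images;
# in multi-label classification each node is an independent disease probability.
rng = np.random.default_rng(0)
probs = rng.uniform(0.0, 1.0, size=(2, 45))

# One threshold per disease (not a softmax argmax): an image may carry
# zero, one, or several diseases at once.
labels = (probs >= 0.5).astype(int)
top3 = np.argsort(probs, axis=1)[:, ::-1][:, :3]  # most probable diseases
print(labels.sum(axis=1))  # number of diseases flagged per image
print(top3)
```

The 0.5 threshold is an assumption here; in practice it is often tuned per label on a validation set.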
9
Neto A, Camara J, Cunha A. Evaluations of Deep Learning Approaches for Glaucoma Screening Using Retinal Images from Mobile Device. Sensors 2022; 22:1449. PMID: 35214351; PMCID: PMC8874723; DOI: 10.3390/s22041449.
Abstract
Glaucoma is a silent disease that leads to vision loss or irreversible blindness. Current deep learning methods can help glaucoma screening by extending it to larger populations using retinal images. Low-cost lenses attached to mobile devices can increase the frequency of screening and alert patients earlier for a more thorough evaluation. This work explored and compared the performance of classification and segmentation methods for glaucoma screening with retinal images acquired by both retinography and mobile devices. The goal was to verify the results of these methods and see if similar results could be achieved using images captured by mobile devices. The classification methods used were the Xception, ResNet152 V2 and Inception ResNet V2 models. The models’ activation maps were produced and analysed to support glaucoma classifier predictions. In clinical practice, glaucoma assessment is commonly based on the cup-to-disc ratio (CDR) criterion, a frequent indicator used by specialists. For this reason, the U-Net architecture was additionally used, with the Inception ResNet V2 and Inception V3 models as backbones, to segment the optic disc and cup and estimate the CDR. For both tasks, the performance of the models reached close to that of state-of-the-art methods, and the classification method applied to a low-quality private dataset illustrates the advantage of using cheaper lenses.
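Once the disc and cup are segmented, the CDR estimate reduces to a ratio of mask extents; a toy sketch of the vertical-CDR variant with synthetic masks:

```python
import numpy as np

def vertical_cdr(disc_mask: np.ndarray, cup_mask: np.ndarray) -> float:
    """Vertical cup-to-disc ratio from binary segmentation masks
    (rows = vertical axis), as commonly computed after U-Net segmentation."""
    disc_rows = np.where(disc_mask.any(axis=1))[0]
    cup_rows = np.where(cup_mask.any(axis=1))[0]
    disc_height = disc_rows.max() - disc_rows.min() + 1
    cup_height = cup_rows.max() - cup_rows.min() + 1
    return cup_height / disc_height

# Toy masks: a 40-row disc containing a 16-row cup -> CDR 0.4.
disc = np.zeros((100, 100), dtype=bool)
disc[30:70, 30:70] = True
cup = np.zeros_like(disc)
cup[42:58, 42:58] = True
print(vertical_cdr(disc, cup))  # 0.4
```

Larger CDR values (often above roughly 0.6, a commonly cited clinical rule of thumb) raise suspicion of glaucomatous cupping, which is why segmentation quality feeds directly into screening accuracy.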
Affiliation(s)
- Alexandre Neto: Escola de Ciências de Tecnologia, University of Trás-os-Montes and Alto Douro, Quinta de Prados, 5001-801 Vila Real, Portugal; INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- José Camara: INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal; Departamento de Ciências e Tecnologia, University Aberta, 1250-100 Lisboa, Portugal
- António Cunha: Escola de Ciências de Tecnologia, University of Trás-os-Montes and Alto Douro, Quinta de Prados, 5001-801 Vila Real, Portugal; INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal; Correspondence; Tel.: +351-931-636-373
10
Camara J, Neto A, Pires IM, Villasana MV, Zdravevski E, Cunha A. Literature Review on Artificial Intelligence Methods for Glaucoma Screening, Segmentation, and Classification. J Imaging 2022; 8:19. PMID: 35200722; PMCID: PMC8878383; DOI: 10.3390/jimaging8020019.
Abstract
Artificial intelligence techniques are now being applied in different medical solutions ranging from disease screening to activity recognition and computer-aided diagnosis. The combination of computer science methods and medical knowledge facilitates and improves the accuracy of the different processes and tools. Inspired by these advances, this paper presents a literature review of state-of-the-art glaucoma screening, segmentation, and classification based on images of the papilla and excavation using deep learning techniques. These techniques have been shown to have high sensitivity and specificity in glaucoma screening based on papilla and excavation images. Automatic segmentation of the contours of the optic disc and excavation then allows the identification and assessment of the progression of glaucomatous disease. Finally, we assessed whether deep learning techniques can provide accurate and low-cost glaucoma-related measurements, which may promote patient empowerment and help medical doctors better monitor patients.
Affiliation(s)
- José Camara: R. Escola Politécnica, Universidade Aberta, 1250-100 Lisboa, Portugal; Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência, 3200-465 Porto, Portugal
- Alexandre Neto: Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência, 3200-465 Porto, Portugal; Escola de Ciências e Tecnologia, University of Trás-os-Montes e Alto Douro, Quinta de Prados, 5001-801 Vila Real, Portugal
- Ivan Miguel Pires: Escola de Ciências e Tecnologia, University of Trás-os-Montes e Alto Douro, Quinta de Prados, 5001-801 Vila Real, Portugal; Instituto de Telecomunicações, Universidade da Beira Interior, 6200-001 Covilhã, Portugal
- María Vanessa Villasana: Centro Hospitalar Universitário Cova da Beira, 6200-251 Covilhã, Portugal; UICISA:E Research Centre, School of Health, Polytechnic Institute of Viseu, 3504-510 Viseu, Portugal
- Eftim Zdravevski: Faculty of Computer Science and Engineering, University Ss Cyril and Methodius, 1000 Skopje, North Macedonia
- António Cunha: Instituto de Engenharia de Sistemas e Computadores, Tecnologia e Ciência, 3200-465 Porto, Portugal; Escola de Ciências e Tecnologia, University of Trás-os-Montes e Alto Douro, Quinta de Prados, 5001-801 Vila Real, Portugal
11
Martinez-Perez C, Alvarez-Peregrina C, Villa-Collar C, Sánchez-Tena MÁ. Artificial intelligence applied to ophthalmology and optometry: A citation network analysis. J Optom 2022; 15 Suppl 1:S82-S90. PMID: 36151035; PMCID: PMC9732482; DOI: 10.1016/j.optom.2022.06.005.
Abstract
PURPOSE The objective of this study is to analyse co-authorship and co-citation networks of publications in the field of artificial intelligence in ophthalmology and optometry, as well as to identify the different research areas and the most cited publications. METHOD A search of publications was performed in the Web of Science database for the period from 1977 to December 2021, using the term "Artificial Intelligence AND (Ophthalmol* OR optometry)". The analysis of the publications was carried out using the Citation Network Explorer, VOSviewer and CiteSpace software. RESULTS 1086 publications and 2348 citation networks were found. 2020 was the year with the highest number of publications, with a total of 351 publications and 115 citation networks. The most cited publication was "Clinically applicable deep learning for diagnosis and referral in retinal disease", published by De Fauw et al. in 2018, with a citation index of 723. Through the clustering function, three groups were found that cover the main research areas in this field: retinal pathology, anterior segment, and glaucoma. CONCLUSIONS Citation network analysis offers an in-depth view of scientific publications and the adoption of new topics and fields of research. The results of an exhaustive analysis of citation networks in artificial intelligence in ophthalmology and optometry are presented, spanning the period since the publication of the first article in 1977.
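The citation index reported here is essentially a paper's in-degree within the citation network; a toy sketch with invented paper IDs (not the study's dataset):

```python
# Illustrative toy citation network: each key is a paper, each value the
# list of earlier papers it cites.
citations = {
    "paperA": ["paperC"],
    "paperB": ["paperC", "paperA"],
    "paperD": ["paperC", "paperB"],
}

# In-degree = how often each paper is cited across the whole network.
counts: dict[str, int] = {}
for refs in citations.values():
    for cited in refs:
        counts[cited] = counts.get(cited, 0) + 1

most_cited = max(counts, key=counts.get)
print(most_cited, counts[most_cited])  # paperC 3
```

Tools such as VOSviewer and CiteSpace build on the same network but add co-citation weighting and clustering to surface research areas.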
Affiliation(s)
- Clara Martinez-Perez: ISEC LISBOA, Instituto Superior de Educação e Ciências, Lisboa 1750-179, Portugal
- Cesar Villa-Collar: Universidad Europea de Madrid, Faculty of Biomedical and Health Science, Spain
- Miguel Ángel Sánchez-Tena: ISEC LISBOA, Instituto Superior de Educação e Ciências, Lisboa 1750-179, Portugal; Universidad Complutense de Madrid, Department of Optometry and Vision, Faculty of Optics and Optometry, Madrid 28037, Spain
12
Ashraf MN, Hussain M, Habib Z. Review of Various Tasks Performed in the Preprocessing Phase of a Diabetic Retinopathy Diagnosis System. Curr Med Imaging 2021; 16:397-426. PMID: 32410541; DOI: 10.2174/1573405615666190219102427.
Abstract
Diabetic Retinopathy (DR) is a major cause of blindness in diabetic patients. The increasing population of diabetic patients and difficulty to diagnose it at an early stage are limiting the screening capabilities of manual diagnosis by ophthalmologists. Color fundus images are widely used to detect DR lesions due to their comfortable, cost-effective and non-invasive acquisition procedure. Computer Aided Diagnosis (CAD) of DR based on these images can assist ophthalmologists and help in saving many sight years of diabetic patients. In a CAD system, preprocessing is a crucial phase, which significantly affects its performance. Commonly used preprocessing operations are the enhancement of poor contrast, balancing the illumination imbalance due to the spherical shape of a retina, noise reduction, image resizing to support multi-resolution, color normalization, extraction of a field of view (FOV), etc. Also, the presence of blood vessels and optic discs makes the lesion detection more challenging because these two artifacts exhibit specific attributes, which are similar to those of DR lesions. Preprocessing operations can be broadly divided into three categories: 1) fixing the native defects, 2) segmentation of blood vessels, and 3) localization and segmentation of optic discs. This paper presents a review of the state-of-the-art preprocessing techniques related to three categories of operations, highlighting their significant aspects and limitations. The survey is concluded with the most effective preprocessing methods, which have been shown to improve the accuracy and efficiency of the CAD systems.
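Of the preprocessing categories above, illumination balancing can be sketched as background estimation and subtraction; the NumPy stand-in below is a crude version of the smoothing-based correction methods surveyed (synthetic image, hypothetical block size):

```python
import numpy as np

def correct_illumination(green: np.ndarray, block: int = 16) -> np.ndarray:
    """Rough shading correction for a fundus image's green channel:
    estimate the slowly varying background as coarse block means,
    then subtract it. Real pipelines typically use Gaussian or median
    smoothing instead of block averaging."""
    h, w = green.shape
    hc, wc = h - h % block, w - w % block          # crop to whole blocks
    img = green[:hc, :wc].astype(float)
    blocks = img.reshape(hc // block, block, wc // block, block)
    background = blocks.mean(axis=(1, 3))          # coarse background map
    background = np.repeat(np.repeat(background, block, 0), block, 1)
    return img - background                        # flat-field residual

# Synthetic image: uniform retina brightness plus a left-to-right gradient,
# mimicking the illumination imbalance caused by the retina's spherical shape.
gradient = np.linspace(0.0, 50.0, 64)
green = np.tile(gradient, (64, 1)) + 100.0
flat = correct_illumination(green)
print(float(flat.std()) < float(green.std()))  # True: gradient mostly removed
```

The green channel is the usual starting point because it carries the highest vessel-to-background contrast in color fundus images.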
Affiliation(s)
- Muhammad Hussain: Department of Computer Science, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
- Zulfiqar Habib: Department of Computer Science, COMSATS University Islamabad, Lahore, Pakistan
13
Jiang J, Wang L, Fu H, Long E, Sun Y, Li R, Li Z, Zhu M, Liu Z, Chen J, Lin Z, Wu X, Wang D, Liu X, Lin H. Automatic classification of heterogeneous slit-illumination images using an ensemble of cost-sensitive convolutional neural networks. Ann Transl Med 2021; 9:550. PMID: 33987248; DOI: 10.21037/atm-20-6635.
Abstract
Background Lens opacity seriously affects the visual development of infants. Slit-illumination images play an irreplaceable role in lens opacity detection; however, these images exhibit varied phenotypes with severe heterogeneity and complexity, particularly among pediatric cataracts. There is therefore an urgent need for an effective computer-aided method to automatically diagnose heterogeneous lens opacity and provide appropriate treatment recommendations in a timely manner. Methods We integrated three different deep learning networks and a cost-sensitive method into an ensemble learning architecture, and proposed an effective model called CCNN-Ensemble [ensemble of cost-sensitive convolutional neural networks (CNNs)] for automatic lens opacity detection. A total of 470 slit-illumination images of pediatric cataracts were used for training and for comparison between the CCNN-Ensemble model and conventional methods. Finally, we used two external datasets (132 independent test images and 79 Internet-based images) to further evaluate the model's generalizability and effectiveness. Results Experimental results and comparative analyses demonstrated that the proposed method was superior to conventional approaches and provided clinically meaningful performance in terms of three grading indices of lens opacity: area (specificity and sensitivity: 92.00% and 92.31%), density (93.85% and 91.43%) and opacity location (95.25% and 89.29%). Furthermore, the comparable performance on the independent testing dataset and the Internet-based images verified the effectiveness and generalizability of the model. Finally, we developed and implemented a website-based automatic diagnosis software for pediatric cataract grading in ophthalmology clinics. Conclusions The CCNN-Ensemble method demonstrates higher specificity and sensitivity than conventional methods on multi-source datasets. This study provides a practical strategy for heterogeneous lens opacity diagnosis and has the potential to be applied to the analysis of other medical images.
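The combination of cost sensitivity with ensemble soft voting can be sketched as class-weighted averaging of per-model probabilities; the probabilities and weights below are invented, and this is a simplification of the paper's method:

```python
import numpy as np

# Toy softmax outputs from three hypothetical CNNs for 4 images x 3 grades;
# the numbers illustrate the combination scheme, not real model outputs.
model_probs = np.array([
    [[0.6, 0.3, 0.1], [0.2, 0.5, 0.3], [0.1, 0.2, 0.7], [0.4, 0.4, 0.2]],
    [[0.5, 0.4, 0.1], [0.3, 0.4, 0.3], [0.2, 0.2, 0.6], [0.5, 0.3, 0.2]],
    [[0.7, 0.2, 0.1], [0.1, 0.6, 0.3], [0.2, 0.3, 0.5], [0.3, 0.5, 0.2]],
])

# Cost-sensitive weighting: rarer or clinically riskier grades get larger
# weights so the ensemble is less likely to miss them (weights assumed here).
class_weights = np.array([1.0, 1.5, 2.0])

avg = model_probs.mean(axis=0)      # soft voting across the ensemble
weighted = avg * class_weights      # penalize cheap majority-class picks
pred = weighted.argmax(axis=1)
print(pred)
```

In training-time cost-sensitive learning the weights enter the loss function instead; applying them at decision time, as here, is the simpler post-hoc variant.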
Affiliation(s)
- Jiewei Jiang: School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an, China
- Liming Wang: School of Computer Science and Technology, Xidian University, Xi'an, China
- Haoran Fu: School of Computer Science and Technology, Xidian University, Xi'an, China
- Erping Long: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Yibin Sun: School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an, China
- Ruiyang Li: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Zhongwen Li: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Mingmin Zhu: School of Mathematics and Statistics, Xidian University, Xi'an, China
- Zhenzhen Liu: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Jingjing Chen: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Zhuoling Lin: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Xiaohang Wu: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Dongni Wang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Xiyang Liu: School of Computer Science and Technology, Xidian University, Xi'an, China
- Haotian Lin: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
14
|
Li T, Bo W, Hu C, Kang H, Liu H, Wang K, Fu H. Applications of deep learning in fundus images: A review. Med Image Anal 2021; 69:101971. [PMID: 33524824 DOI: 10.1016/j.media.2021.101971] [Citation(s) in RCA: 81] [Impact Index Per Article: 27.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2020] [Accepted: 01/12/2021] [Indexed: 02/06/2023]
Abstract
The use of fundus images for the early screening of eye diseases is of great clinical importance. Owing to its powerful performance, deep learning has become increasingly popular in related applications such as lesion segmentation, biomarker segmentation, disease diagnosis and image synthesis. A review summarizing recent developments in deep learning for fundus images is therefore timely. In this review, we introduce 143 application papers organized in a carefully designed hierarchy. Moreover, 33 publicly available datasets are presented. Summaries and analyses are provided for each task. Finally, limitations common to all tasks are identified and possible solutions are given. We will also release and regularly update the state-of-the-art results and newly released datasets at https://github.com/nkicsl/Fundus_Review to keep pace with the rapid development of this field.
Affiliation(s)
- Tao Li, College of Computer Science, Nankai University, Tianjin 300350, China
- Wang Bo, College of Computer Science, Nankai University, Tianjin 300350, China
- Chunyu Hu, College of Computer Science, Nankai University, Tianjin 300350, China
- Hong Kang, College of Computer Science, Nankai University, Tianjin 300350, China
- Hanruo Liu, Beijing Tongren Hospital, Capital Medical University, Beijing 100730, China
- Kai Wang, College of Computer Science, Nankai University, Tianjin 300350, China
- Huazhu Fu, Inception Institute of Artificial Intelligence (IIAI), Abu Dhabi, UAE
|
15
|
Deep Learning Models for Automated Diagnosis of Retinopathy of Prematurity in Preterm Infants. ELECTRONICS 2020. [DOI: 10.3390/electronics9091444] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
Retinopathy of prematurity (ROP) is a disease that can cause blindness in premature infants. It is characterized by immature vascular growth of the retinal blood vessels. However, early detection and treatment of ROP can significantly improve the visual acuity of high-risk patients. Thus, early diagnosis of ROP is crucial in preventing visual impairment. However, several patients refrain from treatment owing to the lack of medical expertise in diagnosing the disease; this is especially problematic considering that the number of ROP cases is on the rise. To this end, we applied transfer learning to five deep neural network architectures for identifying ROP in preterm infants. Our results showed that the VGG19 model outperformed the other models in determining whether a preterm infant has ROP, with 96% accuracy, 96.6% sensitivity, and 95.2% specificity. We also classified the severity of the disease; the VGG19 model showed 98.82% accuracy in predicting the severity of the disease with a sensitivity and specificity of 100% and 98.41%, respectively. We performed 5-fold cross-validation on the datasets to validate the reliability of the VGG19 model and found that the VGG19 model exhibited high accuracy in predicting ROP. These findings could help promote the development of computer-aided diagnosis.
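The 5-fold cross-validation used to check the VGG19 model's reliability can be sketched in plain Python. This is a generic sketch, not the authors' code; `evaluate` is a hypothetical callback standing in for one train-and-test run on a given split.

```python
def k_fold_indices(n_samples, k=5):
    """Split sample indices into k roughly equal, non-overlapping folds."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_validate(indices_folds, evaluate):
    """For each fold, train on the remaining folds and test on the held-out
    fold; `evaluate(train_idx, test_idx)` returns that split's accuracy."""
    scores = []
    for i, test_idx in enumerate(indices_folds):
        train_idx = [j for f_i, fold in enumerate(indices_folds)
                     if f_i != i for j in fold]
        scores.append(evaluate(train_idx, test_idx))
    return sum(scores) / len(scores)

folds = k_fold_indices(103, k=5)  # e.g., a hypothetical set of 103 fundus images
```

In practice images would be shuffled (and often stratified by class) before splitting; the index bookkeeping is the same.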
|
16
|
Roy Chowdhury A, Banerjee S, Chatterjee T. A cybernetic systems approach to abnormality detection in retina images using case based reasoning. SN APPLIED SCIENCES 2020. [DOI: 10.1007/s42452-020-3187-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022] Open
|
17
|
Noah Akande O, Christiana Abikoye O, Anthonia Kayode A, Lamari Y. Implementation of a Framework for Healthy and Diabetic Retinopathy Retinal Image Recognition. SCIENTIFICA 2020; 2020:4972527. [PMID: 32509373 PMCID: PMC7254094 DOI: 10.1155/2020/4972527] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/29/2019] [Accepted: 04/10/2020] [Indexed: 06/11/2023]
Abstract
The feature extraction stage remains a major component of every biometric recognition system. In most instances, the eventual accuracy of a recognition system depends on the features extracted from the biometric trait and the feature extraction technique adopted. The widely adopted approach trains retina recognition systems on features extracted from healthy retinal images. However, the literature has shown that certain eye diseases such as diabetic retinopathy (DR), hypertensive retinopathy, glaucoma, and cataract can alter the recognition accuracy of a retina recognition system. This implies that a robust retina recognition system should be designed to accommodate both healthy and diseased retinal images. A framework with two different approaches to retina image recognition is presented in this study. The first approach employed structural features for healthy retinal image recognition, while the second employed vascular and lesion-based features for DR retinal image recognition. Each input retinal image was first examined for the presence of DR symptoms before the appropriate feature extraction technique was applied. Recognition rates of 100% and 97.23% were achieved for the healthy and DR retinal images, respectively, along with a false acceptance rate of 0.0444 and a false rejection rate of 0.0133.
Affiliation(s)
- Yema Lamari, Computer Science Department, University of Carthage, Tunis, Tunisia
|
18
|
Tong Y, Lu W, Yu Y, Shen Y. Application of machine learning in ophthalmic imaging modalities. EYE AND VISION 2020; 7:22. [PMID: 32322599 PMCID: PMC7160952 DOI: 10.1186/s40662-020-00183-6] [Citation(s) in RCA: 40] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/16/2019] [Accepted: 03/10/2020] [Indexed: 12/27/2022]
Abstract
In clinical ophthalmology, a variety of image-related diagnostic techniques have begun to offer unprecedented insights into eye diseases based on morphological datasets with millions of data points. Artificial intelligence (AI), inspired by the human multilayered neuronal system, has shown astonishing success within some visual and auditory recognition tasks. In these tasks, AI can analyze digital data in a comprehensive, rapid and non-invasive manner. Bioinformatics has become a focus particularly in the field of medical imaging, where it is driven by enhanced computing power and cloud storage, as well as utilization of novel algorithms and generation of data in massive quantities. Machine learning (ML) is an important branch in the field of AI. The overall potential of ML to automatically pinpoint, identify and grade pathological features in ocular diseases will empower ophthalmologists to provide high-quality diagnosis and facilitate personalized health care in the near future. This review offers perspectives on the origin, development, and applications of ML technology, particularly regarding its applications in ophthalmic imaging modalities.
Affiliation(s)
- Yan Tong, Eye Center, Renmin Hospital of Wuhan University, Wuhan 430060, Hubei, China
- Wei Lu, Eye Center, Renmin Hospital of Wuhan University, Wuhan 430060, Hubei, China
- Yue Yu, Eye Center, Renmin Hospital of Wuhan University, Wuhan 430060, Hubei, China
- Yin Shen, Eye Center, Renmin Hospital of Wuhan University, Wuhan 430060, Hubei, China; Medical Research Institute, Wuhan University, Wuhan, Hubei, China
|
19
|
Motta D, Casaca W, Paiva A. Vessel Optimal Transport for Automated Alignment of Retinal Fundus Images. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2019; 28:6154-6168. [PMID: 31283507 DOI: 10.1109/tip.2019.2925287] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Optimal transport has emerged as a promising and useful tool for supporting modern image processing applications such as medical imaging and scientific visualization. Indeed, optimal transport theory enables great flexibility in modeling problems related to image registration, as different optimization resources can be used successfully, along with a choice of suitable matching models to align the images. In this paper, we introduce an automated framework for fundus image registration which unifies optimal transport theory, image processing tools, and graph matching schemes into a functional and concise methodology. Given two ocular fundus images, we construct representative graphs which embed in their structures spatial and topological information from the eye's blood vessels. The graphs produced are then used as input by our optimal transport model in order to establish a correspondence between their sets of nodes. Finally, geometric transformations are performed between the images so as to accomplish the registration task properly. Our formulation relies on the solid mathematical foundation of optimal transport as a constrained optimization problem, and is also robust when dealing with outliers created during the matching stage. We demonstrate the accuracy and effectiveness of the present framework through a comprehensive set of qualitative and quantitative comparisons against several influential state-of-the-art methods on various fundus image databases.
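For intuition, the node-correspondence step can be reduced to a minimum-cost assignment problem. The brute-force toy below, with a hypothetical cost matrix of pairwise vessel-node distances, illustrates what an optimal-transport or Hungarian solver computes at scale; it is a sketch, not the paper's method.

```python
from itertools import permutations

def min_cost_assignment(cost):
    """Brute-force the assignment of source nodes to target nodes that
    minimizes total matching cost. Feasible only for tiny graphs; real
    systems use an optimal-transport or Hungarian solver instead."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best_cost, best_perm = c, perm
    return best_perm, best_cost

# Hypothetical pairwise distances between vessel-graph nodes of two images.
cost = [
    [0.1, 0.9, 0.8],
    [0.7, 0.2, 0.9],
    [0.8, 0.7, 0.3],
]
match, total = min_cost_assignment(cost)
```

Here the lowest-cost matching pairs node i with node i, as the diagonal entries dominate; an outlier node would simply receive the least-bad pairing, which is why the paper adds robustness handling at the matching stage.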
|
20
|
Sengupta S, Singh A, Leopold HA, Gulati T, Lakshminarayanan V. Ophthalmic diagnosis using deep learning with fundus images - A critical review. Artif Intell Med 2019; 102:101758. [PMID: 31980096 DOI: 10.1016/j.artmed.2019.101758] [Citation(s) in RCA: 67] [Impact Index Per Article: 13.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2019] [Revised: 11/04/2019] [Accepted: 11/05/2019] [Indexed: 12/23/2022]
Abstract
An overview of the applications of deep learning for ophthalmic diagnosis using retinal fundus images is presented. We describe various retinal image datasets that can be used for deep learning purposes. Applications of deep learning for segmentation of optic disk, optic cup, blood vessels as well as detection of lesions are reviewed. Recent deep learning models for classification of diseases such as age-related macular degeneration, glaucoma, and diabetic retinopathy are also discussed. Important critical insights and future research directions are given.
Affiliation(s)
- Sourya Sengupta, Theoretical and Experimental Epistemology Lab, School of Optometry and Vision Science, University of Waterloo, Ontario, Canada; Department of Systems Design Engineering, University of Waterloo, Ontario, Canada
- Amitojdeep Singh, Theoretical and Experimental Epistemology Lab, School of Optometry and Vision Science, University of Waterloo, Ontario, Canada; Department of Systems Design Engineering, University of Waterloo, Ontario, Canada
- Henry A Leopold, Theoretical and Experimental Epistemology Lab, School of Optometry and Vision Science, University of Waterloo, Ontario, Canada; Department of Systems Design Engineering, University of Waterloo, Ontario, Canada
- Tanmay Gulati, Department of Computer Science and Engineering, Manipal Institute of Technology, India
- Vasudevan Lakshminarayanan, Theoretical and Experimental Epistemology Lab, School of Optometry and Vision Science, University of Waterloo, Ontario, Canada; Department of Systems Design Engineering, University of Waterloo, Ontario, Canada
|
21
|
Yang J, Zhu X, Liu Y, Jiang X, Fu J, Ren X, Li K, Qiu W, Li X, Yao J. TMIS: a new image-based software application for the measurement of tear meniscus height. Acta Ophthalmol 2019; 97:e973-e980. [PMID: 31044537 DOI: 10.1111/aos.14107] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2018] [Accepted: 03/14/2019] [Indexed: 12/25/2022]
Abstract
PURPOSE To present a new automated image recognition software for the measurement of tear meniscus height (TMH) and investigate its correlation and efficacy compared with an open-source software (NIH ImageJ) and manual evaluation. METHODS A total of 520 slit lamp photographs, of which 276 were at ×16 magnification and 244 at ×40, captured from 138 eyes of 69 healthy subjects, were assessed for TMH by the new automated Tear Meniscus Identification Software (TMIS), ImageJ and human graders. Image processing in TMIS comprised filtering, recognition and measurement of slit lamp photographs under a dedicated algorithm, which outputs two measurement patterns, TMISMax and TMISMean. TMH measured by ImageJ, taken as the reference value, was obtained by a masked observer, while four masked ophthalmologists performed the manual evaluation. RESULTS At both magnifications, TMH measured by TMISMean showed values similar to ImageJ, whereas manual evaluation underestimated TMH, and a strong correlation was detected between TMIS and ImageJ. In ×16 photographs, manually obtained TMH showed a higher correlation with ImageJ, whereas a notably stronger correlation of TMIS with ImageJ was observed in ×40 photographs. Correspondingly, the accuracy of both TMISMax and TMISMean was lower than that of most doctors in ×16 slit lamp images, in contrast to the better precision of TMISMean in ×40 images. CONCLUSION The new software displayed high accuracy and efficacy at ×40 magnification with the TMISMean pattern, suggesting that this automated TMH measurement platform could be a valid tool in dry eye screening and follow-up practice.
Affiliation(s)
- Jiarui Yang, Department of Ophthalmology, Peking University Third Hospital, Beijing, China
- Xingyu Zhu, Research Centre of Multiphase Flow in Porous Media, China University of Petroleum (East China), Qingdao, China
- Yushi Liu, Department of Ophthalmology, Peking University Third Hospital, Beijing, China
- Xiaodan Jiang, Department of Ophthalmology, Peking University Third Hospital, Beijing, China; Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing, China
- Jiayu Fu, Department of Ophthalmology, Peking University Third Hospital, Beijing, China
- Xiaotong Ren, Department of Ophthalmology, Peking University Third Hospital, Beijing, China
- Kaixiu Li, Burns and Plastic Department, Miyun Teaching Hospital of Capital Medical University, Beijing, China
- Weiqiang Qiu, Department of Ophthalmology, Peking University Third Hospital, Beijing, China
- Xuemin Li, Department of Ophthalmology, Peking University Third Hospital, Beijing, China; Beijing Key Laboratory of Restoration of Damaged Ocular Nerve, Beijing, China
- Jun Yao, Research Centre of Multiphase Flow in Porous Media, China University of Petroleum (East China), Qingdao, China
|
22
|
Perdomo O, Rios H, Rodríguez FJ, Otálora S, Meriaudeau F, Müller H, González FA. Classification of diabetes-related retinal diseases using a deep learning approach in optical coherence tomography. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2019; 178:181-189. [PMID: 31416547 DOI: 10.1016/j.cmpb.2019.06.016] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/22/2018] [Revised: 03/06/2019] [Accepted: 06/13/2019] [Indexed: 06/10/2023]
Abstract
BACKGROUND AND OBJECTIVES Spectral Domain Optical Coherence Tomography (SD-OCT) is a volumetric imaging technique that allows measuring patterns between layers, such as small amounts of fluid. Since 2012, the performance of automatic medical image analysis has steadily increased through the use of deep learning models that automatically learn relevant features for specific tasks, instead of relying on manually designed visual features. Nevertheless, providing insight into and interpretation of a model's predictions remains a challenge. This paper describes a deep learning model able to detect medically interpretable information in relevant images from a volume in order to classify diabetes-related retinal diseases. METHODS This article presents a new deep learning model, OCT-NET, a customized convolutional neural network for processing scans extracted from optical coherence tomography volumes. OCT-NET is applied to the classification of three conditions seen in SD-OCT volumes. Additionally, the proposed model includes a feedback stage that highlights areas of the scans to support interpretation of the results. This information is potentially useful to a medical specialist assessing the prediction produced by the model. RESULTS The proposed model was tested on the public SERI-CUHK and A2A SD-OCT datasets containing healthy eyes as well as cases of diabetic retinopathy, diabetic macular edema and age-related macular degeneration. The experimental evaluation shows that the proposed method outperforms conventional convolutional deep learning models from the state of the art reported on these datasets, achieving a precision of 93% and an area under the ROC curve (AUC) of 0.99. CONCLUSIONS The proposed method classifies the three studied retinal diseases with high accuracy. One advantage of the method is its ability to produce interpretable clinical information by highlighting the regions of the image that contribute most to the classifier's decision.
Affiliation(s)
- Oscar Perdomo, MindLab Research Group, Universidad Nacional de Colombia, Edificio 453, Laboratorio 207, Bogotá, Colombia
- Hernán Rios, Fundación Oftalmológica Nacional, Bogotá, Colombia
- Sebastián Otálora, University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland; University of Geneva, Geneva, Switzerland
- Henning Müller, University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland; University of Geneva, Geneva, Switzerland
- Fabio A González, MindLab Research Group, Universidad Nacional de Colombia, Edificio 453, Laboratorio 207, Bogotá, Colombia. https://sites.google.com/a/unal.edu.co/mindlab/
|
23
|
Comparison of Diagnostic Power of Optic Nerve Head and Posterior Sclera Configuration Parameters on Myopic Normal Tension Glaucoma. J Glaucoma 2019; 28:834-842. [PMID: 31306361 DOI: 10.1097/ijg.0000000000001328] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Abstract
PURPOSE The aim of this study was to compare the diagnostic power of optic nerve head and posterior scleral configuration parameters, obtained with swept-source optical coherence tomography (SSOCT), for myopic normal-tension glaucoma (NTG). MATERIALS AND METHODS A total of 203 eyes of 203 participants with myopia diagnosed at Seoul Saint Mary's Hospital between September 2016 and February 2018 were divided into a myopic NTG group (n=113) and a nonglaucomatous myopia group (n=90). Established optic nerve head (ONH) parameters such as disc torsion, horizontal tilt, and vertical tilt, as well as novel parameters representing the posterior sclera, were quantified using SSOCT. The posterior sclera was characterized by the relative position of the deepest point of the eye (DPE) from the optic disc, measured as distance, depth, and angle. The mean and the statistical distribution of each index were calculated. Differences in distribution led to another novel marker, the absolute misaligned angle, which represents the displaced direction of the ONH relative to the sclera. The ONH was classified as misaligned when the degree of misalignment exceeded 15 degrees in either direction. Areas under the receiver operating characteristic curves (AUCs) and multivariate logistic regression analysis were used to test diagnostic power for the presence of myopic NTG. RESULTS No significant difference was observed with respect to age, sex, refractive error, axial length, or central corneal thickness between the 2 groups. However, 20 (22.22%) of 90 eyes in the nonglaucomatous group showed misalignment, whereas 60 (53.09%) of 113 eyes in the NTG group had misalignment (odds ratio: 3.962, P<0.001). The absolute misaligned angle (0.696) and the horizontal tilt (0.682) were significantly associated with myopic NTG and significantly exceeded the other parameters in AUC (both P<0.001). The multivariate logistic regression also showed that the absolute misaligned angle (hazard ratio=1.045, 95% confidence interval=1.023-1.068, P<0.001) and the horizontal tilt (hazard ratio=1.061, 95% confidence interval=1.015-1.109, P=0.009) were significantly associated with the presence of NTG. CONCLUSIONS The diagnostic power of the absolute misaligned angle and the horizontal tilt significantly exceeded that of the other parameters for myopic NTG. These parameters may reflect a displaced direction of the ONH relative to the posterior sclera, which can be linked to the altered scleral configuration of myopic NTG subjects.
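The area under the ROC curve reported above can be computed directly as the probability that a randomly chosen case scores higher than a randomly chosen control, with ties counted half. A minimal sketch with made-up angle values (not the study's data):

```python
def auroc(scores_pos, scores_neg):
    """AUC = P(positive score > negative score) + 0.5 * P(tie),
    computed by comparing every positive/negative pair directly."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical misaligned-angle values (degrees) for NTG vs control eyes.
auc = auroc([22.0, 30.0, 18.0, 40.0], [5.0, 12.0, 18.0, 25.0])
```

This pairwise form is O(n*m) but makes the probabilistic meaning of the statistic explicit; production code typically uses a rank-based equivalent.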
|
24
|
Data Driven Approach for Eye Disease Classification with Machine Learning. APPLIED SCIENCES-BASEL 2019. [DOI: 10.3390/app9142789] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/24/2023]
Abstract
Medical health systems have been concentrating on artificial intelligence techniques for speedy diagnosis. However, the recording of health data in a standard form still requires attention so that machine learning can be made more accurate and reliable by considering multiple features. The aim of this study is to develop a general framework for recording diagnostic data in an international standard format to facilitate prediction of disease diagnosis from symptoms using machine learning algorithms. Efforts were made to ensure error-free data entry by developing a user-friendly interface. Furthermore, multiple machine learning algorithms, including Decision Tree, Random Forest, Naïve Bayes and Neural Network algorithms, were used to analyze patient data based on multiple features, including age, illness history and clinical observations. The data were formatted according to structured hierarchies designed by medical experts, and diagnoses were coded per ICD-10 as adopted by the American Academy of Ophthalmology. Furthermore, the system is designed to evolve through self-learning by adding new classifications for both diagnoses and symptoms. The classification results from tree-based methods demonstrated that the proposed framework performs satisfactorily given a sufficient amount of data. Owing to the structured data arrangement, the Random Forest and Decision Tree algorithms achieve prediction rates above 90%, compared with more complex methods such as Neural Networks and the Naïve Bayes algorithm.
|
25
|
Rong Y, Xiang D, Zhu W, Yu K, Shi F, Fan Z, Chen X. Surrogate-Assisted Retinal OCT Image Classification Based on Convolutional Neural Networks. IEEE J Biomed Health Inform 2019; 23:253-263. [DOI: 10.1109/jbhi.2018.2795545] [Citation(s) in RCA: 52] [Impact Index Per Article: 10.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
|
26
|
Yang C, Lu M, Duan Y, Liu B. An efficient optic cup segmentation method decreasing the influences of blood vessels. Biomed Eng Online 2018; 17:130. [PMID: 30257677 PMCID: PMC6158914 DOI: 10.1186/s12938-018-0560-y] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2018] [Accepted: 09/15/2018] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND The optic cup is an important structure in ophthalmologic diagnosis, notably for glaucoma. Automatic optic cup segmentation is also a key issue in computer-aided diagnosis based on digital fundus images. However, current methods do not effectively solve the problem of edge blurring caused by blood vessels around the optic cup. METHODS In this study, an improved Bertalmio-Sapiro-Caselles-Ballester (BSCB) inpainting model was proposed to eliminate the noise induced by blood vessels. First, morphological operations were performed to obtain the enhanced green-channel image. Then blood vessels were extracted and filled in by the improved BSCB model. Finally, a local Chan-Vese model was used to segment the optic cup. A total of 94 samples, comprising 32 glaucoma fundus images and 62 normal fundus images, were used in the experiments. RESULTS The F-score and boundary distance achieved by the proposed method against the experts' results were 0.7955 ± 0.0724 and 11.42 ± 3.61, respectively. Average vertical optic cup-to-disc ratio values for the normal and glaucoma samples achieved by the proposed method were 0.4369 ± 0.1193 and 0.7156 ± 0.0698, which were also close to the experts' values. In addition, 39 glaucoma images from the public dataset RIM-ONE were used for methodology evaluation. CONCLUSIONS The results showed that the proposed method can overcome the influence of blood vessels to some degree and is competitive with other current optic cup segmentation algorithms. This methodology is expected to be used clinically for early glaucoma detection.
Affiliation(s)
- Chunlan Yang, College of Life Science and Bioengineering, Beijing University of Technology, Beijing 100124, China
- Min Lu, College of Life Science and Bioengineering, Beijing University of Technology, Beijing 100124, China
- Yanhua Duan, College of Life Science and Bioengineering, Beijing University of Technology, Beijing 100124, China
- Bing Liu, Department of Ophthalmology, Hospital of Beijing University of Technology, Beijing 100124, China
|
27
|
Eladawi N, Elmogy M, Khalifa F, Ghazal M, Ghazi N, Aboelfetouh A, Riad A, Sandhu H, Schaal S, El-Baz A. Early diabetic retinopathy diagnosis based on local retinal blood vessel analysis in optical coherence tomography angiography (OCTA) images. Med Phys 2018; 45:4582-4599. [DOI: 10.1002/mp.13142] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2018] [Revised: 08/13/2018] [Accepted: 08/15/2018] [Indexed: 11/10/2022] Open
Affiliation(s)
- Nabila Eladawi, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt; Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Mohammed Elmogy, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt; Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Fahmi Khalifa, Electronics and Communications Engineering Department, Mansoura University, Mansoura, Egypt
- Mohammed Ghazal, Electrical and Computer Engineering Department, Abu Dhabi University, Abu Dhabi, UAE
- Nicola Ghazi, Eye Institute at Cleveland Clinic, Abu Dhabi, UAE
- Ahmed Aboelfetouh, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Alaa Riad, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Harpal Sandhu, Ophthalmology and Visual Sciences Department, School of Medicine, University of Louisville, Louisville, KY, USA
- Shlomit Schaal, Department of Ophthalmology and Visual Sciences, University of Massachusetts Medical School, Worcester, MA, USA
- Ayman El-Baz, Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
|
28
|
A Random Forest classifier-based approach in the detection of abnormalities in the retina. Med Biol Eng Comput 2018; 57:193-203. [PMID: 30076537 DOI: 10.1007/s11517-018-1878-0] [Citation(s) in RCA: 27] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2017] [Accepted: 07/21/2018] [Indexed: 10/28/2022]
Abstract
Classification of abnormalities from medical images using computer-based approaches is of growing interest in medical imaging. Timely detection of abnormalities due to diabetic retinopathy and age-related macular degeneration is required in order to prevent disease progression. Computer-aided systems using machine learning are becoming interesting to ophthalmologists and researchers. We present here one such technique, the Random Forest classifier, to aid medical practitioners in accurate diagnosis of these diseases. A computer-aided diagnosis system is proposed for detecting retinal abnormalities, which combines K-means-based segmentation of the retina image, after due preprocessing, with machine learning classifiers using several low-level and statistical features. The abnormalities classified are those caused by age-related macular degeneration and diabetic retinopathy. Performance measures used in the analysis are accuracy, sensitivity, specificity, F-measure, and Matthews correlation coefficient. A comparison with another machine learning technique, the Naïve Bayes classifier, shows that the Random Forest classifier achieves an accuracy of 93.58%, outperforming the Naïve Bayes classifier's 83.63%. Graphical abstract: Random Forest classifier for abnormality detection in retina images.
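All five performance measures named in the abstract derive from the binary confusion matrix. A minimal sketch with hypothetical counts (not the study's data):

```python
import math

def binary_metrics(tp, fp, tn, fn):
    """Derive accuracy, sensitivity, specificity, F-measure and the
    Matthews correlation coefficient from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)          # recall on abnormal retinas
    specificity = tn / (tn + fp)          # recall on normal retinas
    precision = tp / (tp + fp)
    f_measure = 2 * precision * sensitivity / (precision + sensitivity)
    mcc = ((tp * tn - fp * fn) /
           math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "f_measure": f_measure, "mcc": mcc}

# Hypothetical confusion counts for an abnormal-vs-normal retina classifier.
m = binary_metrics(tp=45, fp=5, tn=40, fn=10)
```

MCC is the most informative single number here when classes are imbalanced, which is why it is often reported alongside accuracy.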
|
29
|
Jiang J, Liu X, Liu L, Wang S, Long E, Yang H, Yuan F, Yu D, Zhang K, Wang L, Liu Z, Wang D, Xi C, Lin Z, Wu X, Cui J, Zhu M, Lin H. Predicting the progression of ophthalmic disease based on slit-lamp images using a deep temporal sequence network. PLoS One 2018; 13:e0201142. [PMID: 30063738 PMCID: PMC6067742 DOI: 10.1371/journal.pone.0201142] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2018] [Accepted: 06/12/2018] [Indexed: 11/21/2022] Open
Abstract
Ocular images play an essential role in ophthalmology. Current research mainly focuses on computer-aided diagnosis using slit-lamp images; however, few studies have attempted to predict the progression of ophthalmic disease. An effective prediction approach could help plan treatment strategies and provide early warning for patients. In this study, we present an end-to-end temporal sequence network (TempSeq-Net) to automatically predict the progression of ophthalmic disease, which employs a convolutional neural network (CNN) to extract high-level features from consecutive slit-lamp images and a long short-term memory (LSTM) network to mine the temporal relationships among those features. First, we comprehensively compare six combinations of CNNs with an LSTM (or a plain recurrent neural network) in terms of effectiveness and efficiency to obtain the optimal TempSeq-Net model. Second, we analyze the impact of sequence length on the model's performance, which helps evaluate its stability and validity and determine an appropriate range of sequence lengths. The quantitative results demonstrate that the proposed model offers strong performance, with mean accuracy of 92.22%, sensitivity of 88.55%, specificity of 94.31%, and AUC of 97.18%. Moreover, the model achieves real-time prediction, requiring only 27.6 ms per sequence, and handles sequences of length 3-5. Our study provides a promising strategy for predicting the progression of ophthalmic disease and has the potential to be applied in other medical fields.
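The CNN-then-LSTM idea can be sketched as follows. This is a minimal numpy forward pass, not the TempSeq-Net implementation: the random feature vectors stand in for per-image CNN embeddings, and all sizes are hypothetical.

```python
import numpy as np

def lstm_forward(x_seq, W, U, b):
    """Run a single-layer LSTM over a sequence of feature vectors.
    x_seq: (T, d) features, e.g. one d-dim CNN embedding per slit-lamp image.
    W: (4h, d), U: (4h, h), b: (4h,) stacked gate parameters [i, f, o, g]."""
    h_dim = U.shape[1]
    h = np.zeros(h_dim)
    c = np.zeros(h_dim)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for x in x_seq:
        z = W @ x + U @ h + b
        i, f, o, g = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)   # cell state carries temporal context
        h = o * np.tanh(c)
    return h  # final hidden state, which a classifier head would consume

rng = np.random.default_rng(0)
T, d, hdim = 4, 8, 16                          # hypothetical sizes
feats = rng.standard_normal((T, d))            # stand-in CNN features
W = rng.standard_normal((4 * hdim, d)) * 0.1
U = rng.standard_normal((4 * hdim, hdim)) * 0.1
b = np.zeros(4 * hdim)
h_last = lstm_forward(feats, W, U, b)
```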
Collapse
Affiliation(s)
- Jiewei Jiang
- School of Computer Science and Technology, Xidian University, Xi’an, China
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Xiyang Liu
- School of Computer Science and Technology, Xidian University, Xi’an, China
- School of Software, Xidian University, Xi’an, China
| | - Lin Liu
- School of Computer Science and Technology, Xidian University, Xi’an, China
| | - Shuai Wang
- School of Software, Xidian University, Xi’an, China
| | - Erping Long
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Haoqing Yang
- School of Computer Science and Technology, Xidian University, Xi’an, China
| | - Fuqiang Yuan
- School of Computer Science and Technology, Xidian University, Xi’an, China
| | - Deying Yu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
| | - Kai Zhang
- School of Computer Science and Technology, Xidian University, Xi’an, China
| | - Liming Wang
- School of Computer Science and Technology, Xidian University, Xi’an, China
- School of Software, Xidian University, Xi’an, China
| | - Zhenzhen Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Dongni Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Changzun Xi
- School of Computer Science and Technology, Xidian University, Xi’an, China
| | - Zhuoling Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Xiaohang Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Jiangtao Cui
- School of Computer Science and Technology, Xidian University, Xi’an, China
| | - Mingmin Zhu
- School of Mathematics and Statistics, Xidian University, Xi’an, China
| | - Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| |
Collapse
|
30
|
S V MK, R G. Computer-Aided Diagnosis of Anterior Segment Eye Abnormalities using Visible Wavelength Image Analysis Based Machine Learning. J Med Syst 2018; 42:128. [PMID: 29860586 DOI: 10.1007/s10916-018-0980-z] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2018] [Accepted: 05/18/2018] [Indexed: 11/26/2022]
Abstract
Eye disease is a major health problem among elderly people. Cataract and corneal arcus are the major abnormalities of the anterior segment of the eye in aged people. Computer-aided diagnosis of anterior segment eye abnormalities would therefore be helpful for mass screening and grading in ophthalmology. In this paper, we propose a multiclass computer-aided diagnosis (CAD) system that uses visible wavelength (VW) eye images to diagnose anterior segment eye abnormalities. In the proposed method, the input VW eye images are preprocessed to remove specular reflections, and the iris region is segmented using a circular Hough transform (CHT)-based approach. First-order statistical features and wavelet-based features are extracted from the segmented iris region and used for classification with a Support Vector Machine (SVM) trained by the Sequential Minimal Optimization (SMO) algorithm. In experiments on 228 VW eye images belonging to three classes of anterior segment eye abnormalities, the proposed method achieved a predictive accuracy of 96.96% with 97% sensitivity and 99% specificity. The experimental results show that the proposed method has significant potential for clinical application.
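The circular-Hough voting step used here for iris segmentation can be sketched in numpy: each edge pixel votes for every candidate circle that could pass through it, and the accumulator maximum gives the circle. The toy edge map below is illustrative; the real system operates on preprocessed VW eye images.

```python
import numpy as np

def hough_circle(edge_points, shape, radii):
    """Accumulate votes over candidate centers (cy, cx) and radii r."""
    H, W = shape
    acc = np.zeros((H, W, len(radii)), dtype=int)
    thetas = np.linspace(0, 2 * np.pi, 90, endpoint=False)
    for y, x in edge_points:
        for ri, r in enumerate(radii):
            cy = np.round(y - r * np.sin(thetas)).astype(int)
            cx = np.round(x - r * np.cos(thetas)).astype(int)
            ok = (cy >= 0) & (cy < H) & (cx >= 0) & (cx < W)
            np.add.at(acc, (cy[ok], cx[ok], ri), 1)   # cast votes
    cy, cx, ri = np.unravel_index(acc.argmax(), acc.shape)
    return cy, cx, radii[ri]

# Toy edge map: points on a circle of radius 10 centered at (32, 32)
ts = np.linspace(0, 2 * np.pi, 36, endpoint=False)
pts = [(int(round(32 + 10 * np.sin(t))), int(round(32 + 10 * np.cos(t))))
       for t in ts]
cy, cx, r = hough_circle(pts, (64, 64), radii=[8, 10, 12])
```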
Collapse
Affiliation(s)
- Mahesh Kumar S V
- Department of Electronics and Communication Engineering, Pondicherry Engineering College, Puducherry, India.
| | - Gunasundari R
- Department of Electronics and Communication Engineering, Pondicherry Engineering College, Puducherry, India
| |
Collapse
|
31
|
Jiang J, Liu X, Zhang K, Long E, Wang L, Li W, Liu L, Wang S, Zhu M, Cui J, Liu Z, Lin Z, Li X, Chen J, Cao Q, Li J, Wu X, Wang D, Wang J, Lin H. Automatic diagnosis of imbalanced ophthalmic images using a cost-sensitive deep convolutional neural network. Biomed Eng Online 2017; 16:132. [PMID: 29157240 PMCID: PMC5697161 DOI: 10.1186/s12938-017-0420-1] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2017] [Accepted: 11/07/2017] [Indexed: 11/22/2022] Open
Abstract
BACKGROUND Ocular images play an essential role in ophthalmological diagnosis. Imbalanced datasets are an inevitable issue in automated ocular disease diagnosis: the scarcity of positive samples tends to cause misdiagnosis of severe cases during classification. Exploring effective computer-aided diagnostic methods for imbalanced ophthalmological datasets is therefore crucial. METHODS In this paper, we develop a cost-sensitive deep residual convolutional neural network (CS-ResCNN) classifier to diagnose ophthalmic diseases from retro-illumination images. First, the region of interest (the crystalline lens) is automatically identified via two successive applications of Canny edge detection and the Hough transform. The localized regions are then fed into the CS-ResCNN, which extracts high-level features for automatic diagnosis. Second, the impact of the cost factors on the CS-ResCNN is analyzed using a grid-search procedure to verify that the proposed system is robust and efficient. RESULTS Qualitative analyses and quantitative experiments demonstrate that the proposed method outperforms conventional approaches, with mean accuracy of 92.24%, specificity of 93.19%, sensitivity of 89.66%, and AUC of 97.11%. Moreover, the sensitivity of the CS-ResCNN is more than 13.6% higher than that of the native CNN method. CONCLUSION Our study provides a practical strategy for addressing imbalanced ophthalmological datasets and has the potential to be applied to other medical images. The developed CS-ResCNN could serve as computer-aided diagnosis software for ophthalmologists in clinical applications.
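The cost-sensitive idea can be illustrated as a class-weighted cross-entropy loss, where samples from rare or severe classes are scaled up by a cost factor. This is a simplified stand-in for the CS-ResCNN's loss layer, and the cost values are hypothetical:

```python
import numpy as np

def cost_sensitive_ce(logits, labels, class_costs):
    """Softmax cross-entropy with each sample's loss scaled by the cost
    of its true class, so errors on rare classes weigh more in training."""
    z = logits - logits.max(axis=1, keepdims=True)   # numerically stable softmax
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    nll = -np.log(probs[np.arange(len(labels)), labels])
    return float(np.mean(class_costs[labels] * nll))

logits = np.zeros((2, 2))                 # uniform predictions, two samples
labels = np.array([0, 1])
loss_plain = cost_sensitive_ce(logits, labels, np.array([1.0, 1.0]))
loss_weighted = cost_sensitive_ce(logits, labels, np.array([1.0, 3.0]))
```

With uniform predictions the unweighted loss is ln 2 per sample; tripling the cost of class 1 doubles the mean loss, which is exactly the pressure that pushes the network toward the minority class.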
Collapse
Affiliation(s)
- Jiewei Jiang
- School of Computer Science and Technology, Xidian University, No. 2 South Taibai Rd, Xi’an, 710071 China
| | - Xiyang Liu
- School of Computer Science and Technology, Xidian University, No. 2 South Taibai Rd, Xi’an, 710071 China
- School of Software, Xidian University, No. 2 South Taibai Rd, Xi’an, 710071 China
| | - Kai Zhang
- School of Computer Science and Technology, Xidian University, No. 2 South Taibai Rd, Xi’an, 710071 China
| | - Erping Long
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Xian Lie South Road 54#, Guangzhou, 510060 China
| | - Liming Wang
- School of Computer Science and Technology, Xidian University, No. 2 South Taibai Rd, Xi’an, 710071 China
- School of Software, Xidian University, No. 2 South Taibai Rd, Xi’an, 710071 China
| | - Wangting Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Xian Lie South Road 54#, Guangzhou, 510060 China
| | - Lin Liu
- School of Computer Science and Technology, Xidian University, No. 2 South Taibai Rd, Xi’an, 710071 China
| | - Shuai Wang
- School of Software, Xidian University, No. 2 South Taibai Rd, Xi’an, 710071 China
| | - Mingmin Zhu
- School of Mathematics and Statistics, Xidian University, Xi’an, 710071 China
| | - Jiangtao Cui
- School of Computer Science and Technology, Xidian University, No. 2 South Taibai Rd, Xi’an, 710071 China
| | - Zhenzhen Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Xian Lie South Road 54#, Guangzhou, 510060 China
| | - Zhuoling Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Xian Lie South Road 54#, Guangzhou, 510060 China
| | - Xiaoyan Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Xian Lie South Road 54#, Guangzhou, 510060 China
| | - Jingjing Chen
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Xian Lie South Road 54#, Guangzhou, 510060 China
| | - Qianzhong Cao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Xian Lie South Road 54#, Guangzhou, 510060 China
| | - Jing Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Xian Lie South Road 54#, Guangzhou, 510060 China
| | - Xiaohang Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Xian Lie South Road 54#, Guangzhou, 510060 China
| | - Dongni Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Xian Lie South Road 54#, Guangzhou, 510060 China
| | - Jinghui Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Xian Lie South Road 54#, Guangzhou, 510060 China
| | - Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Xian Lie South Road 54#, Guangzhou, 510060 China
| |
Collapse
|
32
|
Jorjandi S, Rabbani H, Kafieh R, Amini Z. Statistical modeling of Optical Coherence Tomography images by asymmetric Normal Laplace mixture model. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2017; 2017:4399-4402. [PMID: 29060872 DOI: 10.1109/embc.2017.8037831] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
Optical Coherence Tomography (OCT) is a non-invasive, high-resolution imaging modality in ophthalmology. Noise and other factors give the intensities in OCT images a random behavior. In this study, we introduce a new statistical model for the retinal layers in healthy OCT images. This model, the asymmetric normal Laplace (NL) distribution, captures the asymmetry and heavy tails of each layer's intensity distribution. Because of the layered structure of the retina, a mixture model is adopted. Goodness of fit is evaluated with the Kullback-Leibler divergence (KLD) and the chi-square test, alongside visual inspection. The results show that the proposed model fits the data well, except for the 6th and 7th layers, for which a more complex model, e.g. a two-component mixture, appears more appropriate. The parameters fitted on training images can then be estimated for a test image by applying the Expectation-Maximization (EM) algorithm to the mixture model.
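The EM fitting step can be illustrated on a simpler stand-in: a two-component 1-D Gaussian mixture (the paper's components are asymmetric normal Laplace densities; this numpy sketch only shows the E-step/M-step alternation that estimates mixture parameters from intensities):

```python
import numpy as np

def em_mixture_1d(x, k=2, iters=100):
    """EM for a k-component 1-D Gaussian mixture."""
    mu = np.percentile(x, np.linspace(10, 90, k))   # spread initial means
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        dens = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) \
               / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

# Synthetic "two-layer" intensities with well-separated modes
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 400), rng.normal(5, 1, 400)])
pi, mu, var = em_mixture_1d(x)
```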
Collapse
|
33
|
Koh JEW, Ng EYK, Bhandary SV, Laude A, Acharya UR. Automated detection of retinal health using PHOG and SURF features extracted from fundus images. APPL INTELL 2017. [DOI: 10.1007/s10489-017-1048-3] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
34
|
Fang L, Yang L, Li S, Rabbani H, Liu Z, Peng Q, Chen X. Automatic detection and recognition of multiple macular lesions in retinal optical coherence tomography images with multi-instance multilabel learning. JOURNAL OF BIOMEDICAL OPTICS 2017; 22:66014. [PMID: 28655052 DOI: 10.1117/1.jbo.22.6.066014] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/18/2017] [Accepted: 06/02/2017] [Indexed: 06/07/2023]
Abstract
Detection and recognition of macular lesions in optical coherence tomography (OCT) are very important for the diagnosis and treatment of retinal diseases. Because one retinal disease (e.g., diabetic retinopathy) may involve multiple lesions (e.g., edema, exudates, and microaneurysms) and a patient may suffer from multiple retinal diseases, multiple lesions often coexist within one retinal image. A single-lesion detector therefore may not adequately support clinical diagnosis. To address this issue, we propose a multi-instance multilabel-based lesion recognition (MIML-LR) method for the simultaneous detection and recognition of multiple lesions. The proposed MIML-LR method consists of the following steps: (1) segment the regions of interest (ROIs) for the different lesions, (2) compute descriptive instances (features) for each lesion region, (3) construct multilabel detectors, and (4) recognize each ROI with the detectors. The method was tested on 823 clinically labeled OCT images with normal maculae and maculae exhibiting three common lesions: epiretinal membrane, edema, and drusen. For each input OCT image, the MIML-LR method automatically identifies the number of lesions and assigns class labels, achieving an average accuracy of 88.72% on cases with multiple lesions, which better assists macular disease diagnosis and treatment.
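In multi-instance multilabel learning, image-level labels are typically obtained by pooling per-ROI detector scores: an image carries a lesion label if any of its regions supports it. The sketch below uses max-pooling with illustrative scores and threshold, not the paper's trained detectors:

```python
import numpy as np

LESIONS = ["epiretinal membrane", "edema", "drusen"]

def miml_predict(instance_scores, threshold=0.5):
    """instance_scores: (n_rois, n_labels) detector outputs for one image.
    The image receives a label if any ROI supports it (max-pooling)."""
    bag_scores = instance_scores.max(axis=0)
    labels = [name for name, s in zip(LESIONS, bag_scores) if s >= threshold]
    return labels, bag_scores

scores = np.array([[0.9, 0.2, 0.1],   # ROI 1: strong membrane evidence
                   [0.3, 0.7, 0.2],   # ROI 2: edema evidence
                   [0.1, 0.4, 0.3]])  # ROI 3: nothing above threshold
labels, bag = miml_predict(scores)
```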
Collapse
Affiliation(s)
- Leyuan Fang
- Hunan University, College of Electrical and Information Engineering, Changsha, Hunan, China
| | - Liumao Yang
- Hunan University, College of Electrical and Information Engineering, Changsha, Hunan, China
| | - Shutao Li
- Hunan University, College of Electrical and Information Engineering, Changsha, Hunan, China
| | - Hossein Rabbani
- Isfahan University of Medical Sciences, Medical Image and Signal Processing Research Center, Isfahan, Iran
| | - Zhimin Liu
- The First Affiliated Hospital of Hunan University of Chinese Medicine, Department of Ophthalmology, Changsha, Hunan, China
| | - Qinghua Peng
- The First Affiliated Hospital of Hunan University of Chinese Medicine, Department of Ophthalmology, Changsha, Hunan, China
| | - Xiangdong Chen
- The First Affiliated Hospital of Hunan University of Chinese Medicine, Department of Ophthalmology, Changsha, Hunan, China
| |
Collapse
|
35
|
Liu X, Jiang J, Zhang K, Long E, Cui J, Zhu M, An Y, Zhang J, Liu Z, Lin Z, Li X, Chen J, Cao Q, Li J, Wu X, Wang D, Lin H. Localization and diagnosis framework for pediatric cataracts based on slit-lamp images using deep features of a convolutional neural network. PLoS One 2017; 12:e0168606. [PMID: 28306716 PMCID: PMC5356999 DOI: 10.1371/journal.pone.0168606] [Citation(s) in RCA: 46] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2016] [Accepted: 11/11/2016] [Indexed: 12/12/2022] Open
Abstract
Slit-lamp images play an essential role in the diagnosis of pediatric cataracts. We present a computer vision-based framework for the automatic localization and diagnosis of slit-lamp images that identifies the lens region of interest (ROI) and employs a deep convolutional neural network (CNN). First, three grading degrees for slit-lamp images are proposed in conjunction with three leading ophthalmologists. The lens ROI is located automatically in the original image using two successive applications of Canny edge detection and the Hough transform; the resulting regions are cropped, resized to a fixed size, and used to form pediatric cataract datasets. These datasets are fed into the CNN to extract high-level features for automatic classification and grading. To demonstrate the effectiveness of the deep features extracted by the CNN, we combine them with a support vector machine (SVM) and a softmax classifier and compare these with traditional representative methods. The qualitative and quantitative experimental results demonstrate that the proposed method offers exceptional mean accuracy, sensitivity, and specificity for classification (97.07%, 97.28%, and 96.83%) and for three-degree grading of area (89.02%, 86.63%, and 90.75%), density (92.68%, 91.05%, and 93.94%), and location (89.28%, 82.70%, and 93.08%). Finally, we developed and deployed potential automatic diagnostic software implementing the validated model for ophthalmologists and patients in clinical applications.
Collapse
Affiliation(s)
- Xiyang Liu
- School of Computer Science and Technology, Xidian University, Xi’an, China
- School of Software, Xidian University, Xi’an, China
| | - Jiewei Jiang
- School of Computer Science and Technology, Xidian University, Xi’an, China
| | - Kai Zhang
- School of Computer Science and Technology, Xidian University, Xi’an, China
| | - Erping Long
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Jiangtao Cui
- School of Computer Science and Technology, Xidian University, Xi’an, China
| | - Mingmin Zhu
- School of Mathematics and Statistics, Xidian University, Xi’an, China
| | - Yingying An
- School of Computer Science and Technology, Xidian University, Xi’an, China
| | - Jia Zhang
- School of Computer Science and Technology, Xidian University, Xi’an, China
| | - Zhenzhen Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Zhuoling Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Xiaoyan Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Jingjing Chen
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Qianzhong Cao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Jing Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Xiaohang Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Dongni Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| |
Collapse
|
36
|
Wang L, Zhang K, Liu X, Long E, Jiang J, An Y, Zhang J, Liu Z, Lin Z, Li X, Chen J, Cao Q, Li J, Wu X, Wang D, Li W, Lin H. Comparative analysis of image classification methods for automatic diagnosis of ophthalmic images. Sci Rep 2017; 7:41545. [PMID: 28139688 PMCID: PMC5282520 DOI: 10.1038/srep41545] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2016] [Accepted: 12/22/2016] [Indexed: 11/16/2022] Open
Abstract
Many image classification methods exist, but it remains unclear which are most helpful for analyzing and intelligently identifying ophthalmic images. We select representative slit-lamp images, which exhibit the complexity of ocular images, as research material for comparing image classification algorithms for diagnosing ophthalmic diseases. To facilitate this study, several feature extraction algorithms and classifiers are combined to automatically diagnose pediatric cataract on the same dataset, and their performance is compared using multiple criteria. This comparative study reveals the general characteristics of existing methods for the automatic identification of ophthalmic images and provides new insights into their strengths and shortcomings. The best-performing methods (local binary patterns + SVMs, wavelet transformation + SVMs) achieve an average accuracy of 87% and can be adopted in specific situations to aid doctors in preliminary disease screening. Furthermore, methods requiring fewer computational resources and less time could be applied on mobile devices or in remote areas to help individuals monitor their own condition. This comparison may also help accelerate the development of innovative approaches and their application to assist doctors in diagnosing ophthalmic disease.
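One of the best-performing feature extractors mentioned, the local binary pattern, can be sketched in a few lines of numpy. This is the basic 8-neighbor 3x3 variant only, without the uniform-pattern or multi-scale refinements used in practice:

```python
import numpy as np

def lbp8(img):
    """Basic 3x3 local binary pattern: each interior pixel gets an 8-bit
    code, one bit per neighbor whose intensity is >= the center pixel.
    A histogram of these codes serves as the texture feature vector."""
    center = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(center.shape, dtype=np.uint8)
    h, w = img.shape
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]  # shifted neighbor view
        code |= (nb >= center).astype(np.uint8) << bit
    return code

flat = np.full((5, 5), 9)        # uniform region: every neighbor ties the center
codes = lbp8(flat)
```

On a uniform patch every neighbor comparison succeeds, so every interior pixel gets the all-ones code 255; textured regions produce a mix of codes whose histogram discriminates them.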
Collapse
Affiliation(s)
- Liming Wang
- Institute of Software Engineering, Xidian University, Xi'an 710071, China
| | - Kai Zhang
- School of Computer Science and Technology, Xidian University, Xi'an 710071, China
| | - Xiyang Liu
- School of Computer Science and Technology, Xidian University, Xi'an 710071, China.,School of Software, Xidian University, Xi'an 710071, China
| | - Erping Long
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
| | - Jiewei Jiang
- School of Computer Science and Technology, Xidian University, Xi'an 710071, China
| | - Yingying An
- School of Computer Science and Technology, Xidian University, Xi'an 710071, China
| | - Jia Zhang
- School of Computer Science and Technology, Xidian University, Xi'an 710071, China
| | - Zhenzhen Liu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
| | - Zhuoling Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
| | - Xiaoyan Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
| | - Jingjing Chen
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
| | - Qianzhong Cao
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
| | - Jing Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
| | - Xiaohang Wu
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
| | - Dongni Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
| | - Wangting Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
| | - Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510060, China
| |
Collapse
|
37
|
Hybrid Features and Mediods Classification based Robust Segmentation of Blood Vessels. J Med Syst 2015; 39:128. [DOI: 10.1007/s10916-015-0316-1] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2015] [Accepted: 08/05/2015] [Indexed: 11/26/2022]
|