1
Shroff S, Rao DP, Savoy FM, Shruthi S, Hsu CK, Pradhan ZS, Jayasree PV, Sivaraman A, Sengupta S, Shetty R, Rao HL. Agreement of a Novel Artificial Intelligence Software With Optical Coherence Tomography and Manual Grading of the Optic Disc in Glaucoma. J Glaucoma 2023; 32:280-286. [PMID: 36730188 DOI: 10.1097/ijg.0000000000002147]
Abstract
PRCIS The offline artificial intelligence (AI) on a smartphone-based fundus camera shows good agreement and correlation with the vertical cup-to-disc ratio (vCDR) from spectral-domain optical coherence tomography (SD-OCT) and with manual grading by experts. PURPOSE To assess the agreement of vCDR measured by new AI software from optic disc images obtained using a validated smartphone-based imaging device with SD-OCT vCDR measurements and with manual grading by experts on a stereoscopic fundus camera. METHODS In a prospective, cross-sectional study, participants above 18 years of age (glaucoma and normal) underwent a dilated fundus evaluation, followed by optic disc imaging including a 42-degree monoscopic disc-centered image (Remidio NM-FOP-10), a 30-degree stereoscopic disc-centered image (Kowa nonmyd WX-3D desktop fundus camera), and disc analysis (Cirrus SD-OCT). Remidio FOP images were analyzed for vCDR using the new AI software, and Kowa stereoscopic images were manually graded by 3 fellowship-trained glaucoma specialists. RESULTS We included 473 eyes of 244 participants. The vCDR values from the new AI software showed strong agreement with SD-OCT measurements [95% limits of agreement (LoA) = -0.13 to 0.16]. The agreement with SD-OCT was marginally better in eyes with higher vCDR (95% LoA = -0.15 to 0.12 for vCDR > 0.8). The intraclass correlation coefficient (ICC) was 0.90 (95% CI, 0.88-0.91). The vCDR values from the AI software also correlated well with manual segmentation by experts (ICC = 0.89; 95% CI, 0.87-0.91) on stereoscopic images (95% LoA = -0.18 to 0.11), with better agreement for eyes with vCDR > 0.8 (LoA = -0.12 to 0.08). CONCLUSIONS The new AI software's vCDR measurements showed excellent agreement and correlation with both SD-OCT and manual grading. The ability of the Medios AI to work offline, without requiring cloud-based inferencing, is an added advantage.
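The two agreement statistics quoted in the abstract, Bland-Altman 95% limits of agreement and the intraclass correlation coefficient, can be computed from paired vCDR readings in a few lines. The sketch below is a minimal NumPy illustration, not the study's actual analysis code; the function names and the choice of the two-rater, absolute-agreement ICC(A,1) variant are assumptions on my part.

```python
import numpy as np

def limits_of_agreement(a, b):
    """Bland-Altman 95% limits of agreement between two paired measurement series."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()                 # mean difference (systematic offset)
    sd = diff.std(ddof=1)              # SD of the differences
    return bias - 1.96 * sd, bias + 1.96 * sd

def icc_two_way(a, b):
    """Two-way, absolute-agreement, single-measure ICC(A,1) for two raters."""
    x = np.column_stack([a, b]).astype(float)   # n subjects x 2 raters
    n, k = x.shape
    grand = x.mean()
    ms_r = k * x.mean(axis=1).var(ddof=1)       # between-subjects mean square
    ms_c = n * x.mean(axis=0).var(ddof=1)       # between-raters mean square
    sse = ((x - x.mean(axis=1, keepdims=True)
              - x.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    ms_e = sse / ((n - 1) * (k - 1))            # residual mean square
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
```

With identical inputs the limits collapse to zero and the ICC is 1.0; a systematic offset between the two devices widens the limits around a nonzero bias.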
Affiliation(s)
- Sujani Shroff: Department of Glaucoma, Narayana Nethralaya, Rajajinagar
- Divya P Rao: Remidio Innovative Solution Inc., Glen Allen, VA
- Florian M Savoy: Medios Technologies, Remidio Innovative Solutions Pvt Ltd, Singapore
- S Shruthi: Department of Glaucoma, Narayana Nethralaya, Rajajinagar
- Chao-Kai Hsu: Medios Technologies, Remidio Innovative Solutions Pvt Ltd, Singapore
- Zia S Pradhan: Department of Glaucoma, Narayana Nethralaya, Rajajinagar
- P V Jayasree: Department of Glaucoma, Narayana Nethralaya, Rajajinagar
- Anand Sivaraman: Remidio Innovative Solution Pvt Ltd, Bengaluru, Karnataka, India
- Rohit Shetty: Department of Glaucoma, Narayana Nethralaya, Rajajinagar
- Harsha L Rao: Department of Glaucoma, Narayana Nethralaya, Bannerghatta Road
2
An empirical study of preprocessing techniques with convolutional neural networks for accurate detection of chronic ocular diseases using fundus images. Appl Intell 2023; 53:1548-1566. [PMID: 35528131 PMCID: PMC9059700 DOI: 10.1007/s10489-022-03490-8]
Abstract
Chronic Ocular Diseases (COD) such as myopia, diabetic retinopathy, age-related macular degeneration, glaucoma, and cataract can affect the eye and may even lead to severe vision impairment or blindness. According to a recent World Health Organization (WHO) report on vision, at least 2.2 billion individuals worldwide suffer from vision impairment. Often, overt signs of COD do not manifest until the disease has progressed to an advanced stage. If COD is detected early, however, vision impairment can be avoided through early intervention and cost-effective treatment. Ophthalmologists are trained to detect COD by examining minute changes in the retina, such as microaneurysms, macular edema, hemorrhages, and alterations in the blood vessels. The range of eye conditions is diverse, and each requires unique, patient-specific treatment. Convolutional neural networks (CNNs) have demonstrated significant potential in many fields, including the detection of a variety of eye diseases. In this study, we combined several preprocessing approaches with CNNs to accurately detect COD in eye fundus images. To the best of our knowledge, this is the first work to provide a qualitative analysis of preprocessing approaches for COD classification using CNN models. Experimental results demonstrate that CNNs trained on region-of-interest segmented images outperform models trained on the original input images by a substantial margin. Additionally, an ensemble of three preprocessing techniques outperformed other state-of-the-art approaches by 30% and 3% in terms of Kappa and F1 scores, respectively. The developed prototype has been extensively tested and can be evaluated on more comprehensive COD datasets for deployment in clinical settings.
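The ensemble above is compared on Cohen's kappa and F1 score; both fall directly out of a confusion matrix. The NumPy sketch below shows the standard definitions; the function names are illustrative and this is not the paper's own evaluation code.

```python
import numpy as np

def cohens_kappa(cm):
    """Cohen's kappa from a square confusion matrix (rows: true, cols: predicted)."""
    cm = np.asarray(cm, float)
    n = cm.sum()
    po = np.trace(cm) / n                       # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / n**2   # chance agreement from marginals
    return (po - pe) / (1 - pe)

def macro_f1(cm):
    """Macro-averaged F1 from a square confusion matrix."""
    cm = np.asarray(cm, float)
    tp = np.diag(cm)
    prec = tp / np.maximum(cm.sum(0), 1e-12)    # per-class precision
    rec = tp / np.maximum(cm.sum(1), 1e-12)     # per-class recall
    f1 = 2 * prec * rec / np.maximum(prec + rec, 1e-12)
    return f1.mean()
```

For a perfectly diagonal confusion matrix both metrics are 1.0; kappa discounts the agreement expected by chance, which is why it is often reported alongside F1 for imbalanced disease classes.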
3
Kako NA, Abdulazeez AM. Peripapillary Atrophy Segmentation and Classification Methodologies for Glaucoma Image Detection: A Review. Curr Med Imaging 2022; 18:1140-1159. [PMID: 35260060 DOI: 10.2174/1573405618666220308112732]
Abstract
Information-based image processing and computer vision methods are used in many healthcare organizations to diagnose diseases. Irregularities in the visual system are identified in fundus images captured with a fundus camera. Among ophthalmic diseases, glaucoma is one of the most common and can lead to neurodegenerative illness; unsuitable fluid pressure inside the eye is described as its major cause. Glaucoma has no symptoms in the early stages, and if left untreated it may result in total blindness, whereas diagnosing it at an early stage may prevent permanent vision loss. Manual inspection of the human eye may be a solution, but it depends on the skills of the individuals involved. Automated diagnosis of glaucoma, combining computer vision, artificial intelligence, and image processing, can aid in the prevention and detection of the disease. In this review article, we introduce the numerous approaches based on peripapillary atrophy segmentation and classification that can detect the disease, along with details of the publicly available image benchmarks and datasets and of performance measurement. The article covers research on the available study models that objectively diagnose glaucoma via peripapillary atrophy, from the lowest level of feature extraction to the current direction based on deep learning. The advantages and disadvantages of each method are addressed in detail, and tabular descriptions highlight the results of each category. Moreover, the frameworks of each approach and the fundus image datasets are provided. In conclusion, our study aims to point out possible directions for future work on diagnosing glaucoma.
Affiliation(s)
- Najdavan A Kako: Duhok Polytechnic University, Technical Institute of Administration, MIS, Duhok, Iraq
4
Bunod R, Augstburger E, Brasnu E, Labbe A, Baudouin C. [Artificial intelligence and glaucoma: A literature review]. J Fr Ophtalmol 2022; 45:216-232. [PMID: 34991909 DOI: 10.1016/j.jfo.2021.11.002]
Abstract
In recent years, research in artificial intelligence (AI) has experienced an unprecedented surge in the field of ophthalmology, in particular glaucoma. The diagnosis and follow-up of glaucoma are complex and rely on a body of clinical evidence and ancillary tests. This large amount of information from structural and functional testing of the optic nerve and macula makes glaucoma a particularly appropriate field for the application of AI. In this paper, we review work using AI in the field of glaucoma, whether for screening, diagnosis, or detection of progression. Many AI strategies have shown promising results for glaucoma detection using fundus photography, optical coherence tomography, or automated perimetry, and combining these imaging modalities increases the performance of AI algorithms, with results comparable to those of humans. We discuss potential applications as well as obstacles and limitations to the deployment and validation of such models. While there is no doubt that AI has the potential to revolutionize glaucoma management and screening, research in the coming years will need to address unavoidable questions regarding the clinical significance of such results and the explainability of the predictions.
Affiliation(s)
- R Bunod: Service d'ophtalmologie 3, IHU FOReSIGHT, centre hospitalier national des Quinze-Vingts, 28, rue de Charenton, 75012 Paris, France
- E Augstburger: Service d'ophtalmologie 3, IHU FOReSIGHT, centre hospitalier national des Quinze-Vingts, 28, rue de Charenton, 75012 Paris, France
- E Brasnu: Service d'ophtalmologie 3, IHU FOReSIGHT, centre hospitalier national des Quinze-Vingts, 28, rue de Charenton, 75012 Paris, France; CHNO des Quinze-Vingts, IHU FOReSIGHT, INSERM-DGOS CIC 1423, 17, rue Moreau, 75012 Paris, France; Sorbonne universités, INSERM, CNRS, institut de la Vision, 17, rue Moreau, 75012 Paris, France
- A Labbe: Service d'ophtalmologie 3, IHU FOReSIGHT, centre hospitalier national des Quinze-Vingts, 28, rue de Charenton, 75012 Paris, France; CHNO des Quinze-Vingts, IHU FOReSIGHT, INSERM-DGOS CIC 1423, 17, rue Moreau, 75012 Paris, France; Sorbonne universités, INSERM, CNRS, institut de la Vision, 17, rue Moreau, 75012 Paris, France; Service d'ophtalmologie, hôpital Ambroise-Paré, AP-HP, université de Paris Saclay, 9, avenue Charles-de-Gaulle, 92100 Boulogne-Billancourt, France
- C Baudouin: Service d'ophtalmologie 3, IHU FOReSIGHT, centre hospitalier national des Quinze-Vingts, 28, rue de Charenton, 75012 Paris, France; CHNO des Quinze-Vingts, IHU FOReSIGHT, INSERM-DGOS CIC 1423, 17, rue Moreau, 75012 Paris, France; Sorbonne universités, INSERM, CNRS, institut de la Vision, 17, rue Moreau, 75012 Paris, France; Service d'ophtalmologie, hôpital Ambroise-Paré, AP-HP, université de Paris Saclay, 9, avenue Charles-de-Gaulle, 92100 Boulogne-Billancourt, France
5
Addis V, Chen M, Zorger R, Salowe R, Daniel E, Lee R, Pistilli M, Gao J, Maguire MG, Chan L, Gudiseva HV, Zenebe-Gete S, Merriam S, Smith EJ, Martin R, Parker Ostroff C, Gee JC, Cui QN, Miller-Ellis E, O'Brien JM, Sankar PS. A Precise Method to Evaluate 360 Degree Measures of Optic Cup and Disc Morphology in an African American Cohort and Its Genetic Applications. Genes (Basel) 2021; 12:1961. [PMID: 34946910 PMCID: PMC8701339 DOI: 10.3390/genes12121961]
Abstract
(1) Background: The vertical cup-to-disc ratio (VCDR) is an important measure for evaluating damage to the optic nerve head (ONH) in glaucoma patients. However, this measure often does not fully capture the irregular cupping observed in glaucomatous nerves. We developed and evaluated a method to measure the cup-to-disc ratio (CDR) at all 360 degrees of the ONH. (2) Methods: Non-physician graders from the Scheie Reading Center outlined the cup and disc on digital stereo color disc images from African American patients enrolled in the Primary Open-Angle African American Glaucoma Genetics (POAAGG) study. After converting the resultant coordinates into polar representation, the CDR at each degree location around the ONH was obtained. We compared grader VCDR values with clinical VCDR values using Spearman correlation analysis and validated significant genetic associations with clinical VCDR using grader VCDR values. (3) Results: Graders delineated the cup contour and disc boundary twice in each of 1815 stereo disc images. For both cases and controls, the mean CDR was highest at the horizontal bisector, particularly in the temporal region, compared with other degree locations. There was a good correlation between grader CDR at the vertical bisector and clinical VCDR (Spearman correlation, OD: r = 0.78 [95% CI: 0.76-0.79]). An SNP in the MPDZ gene, associated with clinical VCDR in a prior genome-wide association study, showed a significant association with grader VCDR (p = 0.01) and grader CDR area ratio (p = 0.02). (4) Conclusions: The CDR of both glaucomatous and non-glaucomatous eyes varies by degree location, with the highest measurements in the temporal region of the eye. This method can be useful for capturing innate eccentric ONH morphology, tracking disease progression, and identifying genetic associations.
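The polar-representation step in the Methods can be sketched as follows: given outlined cup and disc contours and a shared ONH center, the radius of each contour is interpolated at every degree, and the ratio of the two radii gives a per-degree CDR. The NumPy sketch below is a minimal illustration under those assumptions, not the Scheie Reading Center's actual pipeline; the function names and the linear-interpolation choice are mine.

```python
import numpy as np

def radial_profile(contour, center, n_angles=360):
    """Radius of a closed contour at each of n_angles polar angles about `center`."""
    d = np.asarray(contour, float) - np.asarray(center, float)
    theta = np.arctan2(d[:, 1], d[:, 0]) % (2 * np.pi)   # angle of each contour point
    r = np.hypot(d[:, 0], d[:, 1])                       # radius of each contour point
    order = np.argsort(theta)
    theta, r = theta[order], r[order]
    grid = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    # tile the profile by +/- 2*pi so interpolation is continuous across 0/2*pi
    return np.interp(grid,
                     np.r_[theta - 2 * np.pi, theta, theta + 2 * np.pi],
                     np.r_[r, r, r])

def cdr_by_degree(cup, disc, center, n_angles=360):
    """Cup-to-disc ratio at each degree location around the optic nerve head."""
    return (radial_profile(cup, center, n_angles)
            / radial_profile(disc, center, n_angles))
```

For concentric circular contours the per-degree CDR is constant; for a real glaucomatous cup it varies with angle, which is exactly the eccentric morphology the 360-degree measure is designed to capture.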
Affiliation(s)
- Victoria Addis, Rebecca Salowe, Ebenezer Daniel, Roy Lee, Maxwell Pistilli, Jinpeng Gao, Maureen G. Maguire, Lilian Chan, Harini V. Gudiseva, Selam Zenebe-Gete, Sayaka Merriam, Eli J. Smith, Revell Martin, Candace Parker Ostroff, Qi N. Cui, Eydie Miller-Ellis, Joan M. O'Brien, Prithvi S. Sankar: Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA 19104, USA
- Min Chen, James C. Gee: Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Richard Zorger: Penn Vision Research Center, University of Pennsylvania, Philadelphia, PA 19104, USA
- Correspondence: Joan.O’; Tel.: +1-215-662-8657; Fax: +1-215-662-9676
6
Xu X, Li J, Guan Y, Zhao L, Zhao Q, Zhang L, Li L. GLA-Net: A global-local attention network for automatic cataract classification. J Biomed Inform 2021; 124:103939. [PMID: 34752858 DOI: 10.1016/j.jbi.2021.103939]
Abstract
Cataract is the leading cause of blindness among all ophthalmic diseases. Convenient and cost-effective early cataract screening is urgently needed to reduce the risk of visual loss. To date, many studies have investigated automatic cataract classification based on fundus images. However, existing methods mainly rely on global image information while ignoring various local and subtle features, even though these local features are highly helpful for identifying cataracts of different severities. To address this limitation, we introduce a deep learning technique that learns multilevel feature representations of the fundus image simultaneously. Specifically, a global-local attention network (GLA-Net) is proposed to handle the cataract classification task. It consists of two levels of subnets: the global-level attention subnet attends to the global structure of the fundus image, while the local-level attention subnet focuses on the local discriminative features of specific regions. These two types of subnets extract retinal features at different attention levels, which are then combined for final cataract classification. Our GLA-Net achieves the best performance in all metrics (90.65% detection accuracy, 83.47% grading accuracy, and 81.11% classification accuracy for grades 1 and 2). The experimental results on a real clinical dataset show that the combination of global-level and local-level attention models is effective for cataract screening and holds significant potential for other medical tasks.
Affiliation(s)
- Xi Xu, Jianqiang Li, Yu Guan, Linna Zhao, Qing Zhao: Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Li Zhang: Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Li Li: National Center for Children's Health, Beijing Children's Hospital, Capital Medical University, Beijing, China
7
Mrad Y, Elloumi Y, Akil M, Bedoui MH. A fast and accurate method for glaucoma screening from smartphone-captured fundus images. Ing Rech Biomed 2021. [DOI: 10.1016/j.irbm.2021.06.004]
8
Mursch-Edlmayr AS, Ng WS, Diniz-Filho A, Sousa DC, Arnold L, Schlenker MB, Duenas-Angeles K, Keane PA, Crowston JG, Jayaram H. Artificial Intelligence Algorithms to Diagnose Glaucoma and Detect Glaucoma Progression: Translation to Clinical Practice. Transl Vis Sci Technol 2020; 9:55. [PMID: 33117612 PMCID: PMC7571273 DOI: 10.1167/tvst.9.2.55]
Abstract
Purpose This concise review explores the potential for clinical implementation of artificial intelligence (AI) strategies for detecting glaucoma and monitoring glaucoma progression. Methods Nonsystematic literature review using the search combinations “Artificial Intelligence,” “Deep Learning,” “Machine Learning,” “Neural Networks,” “Bayesian Networks,” “Glaucoma Diagnosis,” and “Glaucoma Progression.” Information on sensitivity and specificity regarding glaucoma diagnosis and progression analysis, as well as methodological details, was extracted. Results Numerous AI strategies provide promising levels of specificity and sensitivity for the structural (e.g., optical coherence tomography [OCT] imaging, fundus photography) and functional (visual field [VF] testing) test modalities used for the detection of glaucoma. Area under the receiver operating characteristic curve (AROC) values of >0.90 were achieved with every modality, and combining structural and functional inputs has been shown to further improve diagnostic ability. Regarding glaucoma progression, AI strategies can detect progression earlier than conventional methods, potentially from a single VF test. Conclusions AI algorithms applied to fundus photographs for screening purposes may provide good results using a simple and widely accessible test, but for patients who are likely to have glaucoma, more sophisticated methods should be used, including data from OCT and perimetry. Outputs may serve as an adjunct to clinical decision making while also enhancing the efficiency, productivity, and quality of glaucoma care delivery. Patients with diagnosed glaucoma may benefit from future algorithms that evaluate their risk of progression. Challenges remain to be overcome, including the external validity of AI strategies, a move from a “black box” toward “explainable AI,” and likely regulatory hurdles. However, it is clear that AI can enhance the role of specialist clinicians and will inevitably shape the future delivery of glaucoma care to the next generation. Translational Relevance The promising levels of diagnostic accuracy reported by AI strategies across the modalities used in clinical practice for glaucoma detection can pave the way for reliable models appropriate for translation into clinical practice. Future incorporation of AI into healthcare models may help address the current limitations of access and timely management of patients with glaucoma across the world.
Affiliation(s)
- Wai Siene Ng: Cardiff Eye Unit, University Hospital of Wales, Cardiff, UK
- Alberto Diniz-Filho: Department of Ophthalmology and Otorhinolaryngology, Federal University of Minas Gerais, Belo Horizonte, Brazil
- David C Sousa: Department of Ophthalmology, Hospital de Santa Maria, Lisbon, Portugal
- Louis Arnold: Department of Ophthalmology, University Hospital, Dijon, France
- Matthew B Schlenker: Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Canada
- Karla Duenas-Angeles: Department of Ophthalmology, Universidad Nacional Autónoma de Mexico, Mexico City, Mexico
- Pearse A Keane: NIHR Biomedical Research Centre for Ophthalmology, UCL Institute of Ophthalmology & Moorfields Eye Hospital, London, UK
- Jonathan G Crowston: Centre for Vision Research, Duke-NUS Medical School, Singapore; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Hari Jayaram: NIHR Biomedical Research Centre for Ophthalmology, UCL Institute of Ophthalmology & Moorfields Eye Hospital, London, UK
9
Girard MJA, Schmetterer L. Artificial intelligence and deep learning in glaucoma: Current state and future prospects. Prog Brain Res 2020; 257:37-64. [PMID: 32988472 DOI: 10.1016/bs.pbr.2020.07.002]
Abstract
Over the past few years, there has been unprecedented and tremendous excitement for artificial intelligence (AI) research in the field of ophthalmology; this has naturally been translated to glaucoma, a progressive optic neuropathy characterized by retinal ganglion cell axon loss and associated visual field defects. In this review, we discuss how AI may have a unique opportunity to tackle the many challenges faced in the glaucoma clinic. Glaucoma remains poorly understood, with difficulties in providing early diagnosis and prognosis accurately and in a timely fashion. In the short term, AI could also become a game changer by paving the way for the first cost-effective glaucoma screening campaigns. While there are undeniable technical and clinical challenges ahead, more so than for other ophthalmic disorders where AI is already booming, we strongly believe that glaucoma specialists should embrace AI as a companion to their practice. Finally, this review also reminds us that glaucoma is a complex group of disorders with a multitude of physiological manifestations that cannot yet be observed clinically. AI in glaucoma is here to stay, but it will not be the only tool to solve glaucoma.
Affiliation(s)
- Michaël J A Girard: Ophthalmic Engineering & Innovation Laboratory (OEIL), Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore
- Leopold Schmetterer: Ocular Imaging, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore, Singapore; School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore, Singapore; SERI-NTU Advanced Ocular Engineering (STANCE), Singapore, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, Singapore, Singapore; Department of Clinical Pharmacology, Medical University of Vienna, Vienna, Austria; Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria; Institute of Clinical and Experimental Ophthalmology, Basel, Switzerland
10
Zhou Y, Li G, Li H. Automatic Cataract Classification Using Deep Neural Network With Discrete State Transition. IEEE Trans Med Imaging 2020; 39:436-446. [PMID: 31295110 DOI: 10.1109/tmi.2019.2928229]
Abstract
Cataract, the clouding of the lens, affects vision and is the leading cause of blindness in the world's population. Accurate and convenient cataract detection and severity evaluation would improve this situation. Automatic cataract detection and grading methods are proposed in this paper. With prior knowledge, improved Haar features and visible structure features are combined as features, and multilayer perceptrons with discrete state transition (DST-MLP) or exponential DST (EDST-MLP) are designed as classifiers. Without prior knowledge, residual neural networks with DST (DST-ResNet) or EDST (EDST-ResNet) are proposed. With or without prior knowledge, the proposed DST and EDST strategies prevent overfitting and reduce storage memory during network training and implementation, and neural networks with these strategies achieve state-of-the-art accuracy in cataract detection and grading. The experimental results indicate that combined features always achieve better performance than a single type of feature, and that classification methods with feature extraction based on prior knowledge are more suitable for complicated medical image classification tasks. These analyses can provide constructive advice for other medical image processing applications.
11
Automated detection of optic disc contours in fundus images using decision tree classifier. Biocybern Biomed Eng 2020. [DOI: 10.1016/j.bbe.2019.11.003]
12
Devalla SK, Liang Z, Pham TH, Boote C, Strouthidis NG, Thiery AH, Girard MJA. Glaucoma management in the era of artificial intelligence. Br J Ophthalmol 2019; 104:301-311. [DOI: 10.1136/bjophthalmol-2019-315016]
Abstract
Glaucoma is a result of irreversible damage to the retinal ganglion cells. While an early intervention could minimise the risk of vision loss in glaucoma, its asymptomatic nature makes it difficult to diagnose until a late stage. The diagnosis of glaucoma is a complicated and expensive effort that is heavily dependent on the experience and expertise of a clinician. The application of artificial intelligence (AI) algorithms in ophthalmology has improved our understanding of many retinal, macular, choroidal and corneal pathologies. With the advent of deep learning, a number of tools for the classification, segmentation and enhancement of ocular images have been developed. Over the years, several AI techniques have been proposed to help detect glaucoma by analysis of functional and/or structural evaluations of the eye. Moreover, the use of AI has also been explored to improve the reliability of ascribing disease prognosis. This review summarises the role of AI in the diagnosis and prognosis of glaucoma, discusses the advantages and challenges of using AI systems in clinics and predicts likely areas of future progress.
|
13
|
Optic Disc Localization in Complicated Environment of Retinal Image Using Circular-Like Estimation. ARABIAN JOURNAL FOR SCIENCE AND ENGINEERING 2019. [DOI: 10.1007/s13369-019-03756-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
|
14
|
Cheng J, Li Z, Gu Z, Fu H, Wong DWK, Liu J. Structure-Preserving Guided Retinal Image Filtering and Its Application for Optic Disk Analysis. IEEE TRANSACTIONS ON MEDICAL IMAGING 2018; 37:2536-2546. [PMID: 29994522 DOI: 10.1109/tmi.2018.2838550] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Retinal fundus photographs have been used in the diagnosis of many ocular diseases such as glaucoma, pathological myopia, age-related macular degeneration, and diabetic retinopathy. With the development of computer science, computer-aided diagnosis has been developed to process and analyze retinal images automatically. One of the challenges in this analysis is that the quality of the retinal image is often degraded. For example, a cataract in the human lens attenuates and scatters light, much as a cloudy camera lens reduces the quality of a photograph. This often obscures details in the retinal images and poses challenges for retinal image processing and analysis tasks. In this paper, we approximate the degradation of retinal images as a combination of human-lens attenuation and scattering. A novel structure-preserving guided retinal image filtering (SGRIF) method is then proposed to restore images based on this attenuation and scattering model. The proposed SGRIF consists of a global structure transferring step and a global edge-preserving smoothing step. Our results show that the proposed SGRIF method is able to improve the contrast of retinal images, as measured by the histogram flatness measure, histogram spread, and variability of local luminosity. In addition, we further explored the benefits of SGRIF for subsequent retinal image processing and analysis tasks. In two applications, deep learning-based optic cup segmentation and sparse learning-based cup-to-disc ratio (CDR) computation, our results show that we are able to achieve more accurate optic cup segmentation and CDR measurements from images processed by SGRIF.
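Two of the contrast measures named above, histogram spread and histogram flatness, are easy to compute directly from an intensity histogram. A sketch under common definitions (exact conventions, e.g. the handling of empty bins in the flatness measure, vary between papers):

```python
import math

def histogram_spread(pixels, levels=256):
    """Histogram spread: distance between the 1st and 3rd quartile
    intensities of the image CDF, divided by the full intensity range.
    Larger values indicate better-spread (higher-contrast) histograms."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    cum, q1, q3 = 0, None, None
    for value, count in enumerate(hist):
        cum += count
        if q1 is None and cum >= 0.25 * total:
            q1 = value
        if q3 is None and cum >= 0.75 * total:
            q3 = value
    return (q3 - q1) / (levels - 1)

def histogram_flatness(pixels, levels=256):
    """Histogram flatness: geometric mean over arithmetic mean of the
    (non-empty) histogram bin counts; closer to 1 means flatter."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    counts = [c for c in hist if c > 0]
    geo = math.exp(sum(math.log(c) for c in counts) / len(counts))
    ari = sum(counts) / len(counts)
    return geo / ari
```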
|
15
|
Septiarini A, Harjoko A, Pulungan R, Ekantini R. Automatic detection of peripapillary atrophy in retinal fundus images using statistical features. Biomed Signal Process Control 2018. [DOI: 10.1016/j.bspc.2018.05.028] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]
|
16
|
Ghassabi Z, Shanbehzadeh J, Nouri-Mahdavi K. A Unified Optic Nerve Head and Optic Cup Segmentation Using Unsupervised Neural Networks for Glaucoma Screening. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2018; 2018:5942-5945. [PMID: 30441689 DOI: 10.1109/embc.2018.8513573] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Segmentation of retinal anatomical features such as the optic nerve head (ONH) and the optic cup, the brightest area in the center of the ONH which is devoid of neural elements, is a prerequisite for computer-aided diagnosis and follow-up of glaucoma. ONH segmentation methods that impose shape and intensity constraints are unable to identify ONH and optic cup boundaries at the same time. On the other hand, recent efficient supervised learning-based methods, which provide a unified system, require a combination of many informative features as their inputs, as well as ground truth for the training phase. This paper uses a saliency map including color, intensity and orientation contrasts as the input of a winner-take-all neural network, and color visual features as the input of a self-organizing map neural network, to segment the ONH and optic cup simultaneously. Our method is evaluated on a database of 205 ocular fundus images provided by local eye hospitals and the publicly available image databases RIM-ONE and DIARETDB0, comprising 60 non-glaucomatous and 145 glaucomatous images. The ground truth is provided by two expert ophthalmologists. The method attained an average overlapping error of 9.6% and 25.1% for ONH and cup segmentation, respectively. The cup-to-disc area ratio (CDR) is computed for glaucoma assessment. The mean and standard deviation of the CDR differences between our method and the ground truth over all images are 0.11 and 0.09, respectively.
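The overlapping error used here for evaluation is one minus the Jaccard overlap of the segmented and ground-truth regions, and the CDR comparison reduces to the mean and standard deviation of per-image differences. A minimal sketch, representing regions as sets of pixel indices (an illustrative choice):

```python
def overlapping_error(seg, ref):
    """Overlapping error between two pixel regions:
    E = 1 - |seg & ref| / |seg | ref|  (1 minus the Jaccard overlap)."""
    seg, ref = set(seg), set(ref)
    return 1.0 - len(seg & ref) / len(seg | ref)

def mean_std(diffs):
    """Mean and (population) standard deviation of per-image CDR
    differences against the ground truth."""
    m = sum(diffs) / len(diffs)
    var = sum((d - m) ** 2 for d in diffs) / len(diffs)
    return m, var ** 0.5
```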
|
17
|
Yang X, Yang JD, Hwang HP, Yu HC, Ahn S, Kim BW, You H. Segmentation of liver and vessels from CT images and classification of liver segments for preoperative liver surgical planning in living donor liver transplantation. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2018; 158:41-52. [PMID: 29544789 DOI: 10.1016/j.cmpb.2017.12.008] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/16/2017] [Revised: 11/13/2017] [Accepted: 12/11/2017] [Indexed: 06/08/2023]
Abstract
BACKGROUND AND OBJECTIVE The present study developed an effective surgical planning method consisting of a liver extraction stage, a vessel extraction stage, and a liver segment classification stage based on abdominal computerized tomography (CT) images. METHODS An automatic seed point identification method, customized level set methods, and an automated thresholding method were applied in this study to extraction of the liver, portal vein (PV), and hepatic vein (HV) from CT images. Then, a semi-automatic method was developed to separate PV and HV. Lastly, a local searching method was proposed for identification of PV branches and the nearest neighbor approximation method was applied to classifying liver segments. RESULTS Onsite evaluation of liver segmentation provided by the SLIVER07 website showed that the liver segmentation method achieved an average volumetric overlap accuracy of 95.2%. An expert radiologist evaluation of vessel segmentation showed no false positive errors or misconnections between PV and HV in the extracted vessel trees. Clinical evaluation of liver segment classification using 43 CT datasets from two medical centers showed that the proposed method achieved high accuracy in liver graft volumetry (absolute error, AE = 45.2 ± 20.9 ml; percentage of AE, %AE = 6.8% ± 3.2%; percentage of %AE > 10% = 16.3%; percentage of %AE > 20% = none) and the classified segment boundaries agreed with the intraoperative surgical cutting boundaries by visual inspection. CONCLUSIONS The method in this study is effective in segmentation of liver and vessels and classification of liver segments and can be applied to preoperative liver surgical planning in living donor liver transplantation.
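The volumetry statistics reported in the results (absolute error, percentage absolute error, and the fraction of cases exceeding a %AE cutoff) reduce to simple arithmetic over paired volumes. A sketch with hypothetical values:

```python
def volumetry_errors(estimated_ml, reference_ml, cutoff_pct=10.0):
    """Per-case absolute error (ml), percentage absolute error (%), and
    the fraction of cases whose %AE exceeds a cutoff."""
    ae = [abs(e - r) for e, r in zip(estimated_ml, reference_ml)]
    pae = [100.0 * a / r for a, r in zip(ae, reference_ml)]
    frac_over = sum(p > cutoff_pct for p in pae) / len(pae)
    return ae, pae, frac_over
```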
Affiliation(s)
- Xiaopeng Yang
- Department of Industrial Management and Engineering, Pohang University of Science and Technology, Pohang, 37673, South Korea
- Jae Do Yang
- Department of Surgery, Chonbuk National University Medical School, Jeonju, 54907, South Korea; Research Institute of Clinical Medicine of Chonbuk National University-Biomedical Research Institute of Chonbuk University Hospital, Jeonju, 54907, South Korea; Research Institute for Endocrine Sciences, Chonbuk National University, Jeonju, 54907, South Korea
- Hong Pil Hwang
- Department of Surgery, Chonbuk National University Medical School, Jeonju, 54907, South Korea; Research Institute of Clinical Medicine of Chonbuk National University-Biomedical Research Institute of Chonbuk University Hospital, Jeonju, 54907, South Korea; Research Institute for Endocrine Sciences, Chonbuk National University, Jeonju, 54907, South Korea
- Hee Chul Yu
- Department of Surgery, Chonbuk National University Medical School, Jeonju, 54907, South Korea; Research Institute of Clinical Medicine of Chonbuk National University-Biomedical Research Institute of Chonbuk University Hospital, Jeonju, 54907, South Korea; Research Institute for Endocrine Sciences, Chonbuk National University, Jeonju, 54907, South Korea
- Sungwoo Ahn
- Department of Surgery, Chonbuk National University Medical School, Jeonju, 54907, South Korea; Research Institute of Clinical Medicine of Chonbuk National University-Biomedical Research Institute of Chonbuk University Hospital, Jeonju, 54907, South Korea; Research Institute for Endocrine Sciences, Chonbuk National University, Jeonju, 54907, South Korea
- Bong-Wan Kim
- Department of Liver Transplantation and Hepatobiliary Surgery, Ajou University School of Medicine, Suwon, 16499, South Korea
- Heecheon You
- Department of Industrial Management and Engineering, Pohang University of Science and Technology, Pohang, 37673, South Korea
|
18
|
Panda R, Puhan NB, Panda G. Mean curvature and texture constrained composite weighted random walk algorithm for optic disc segmentation towards glaucoma screening. Healthc Technol Lett 2018. [PMID: 29515814 PMCID: PMC5830943 DOI: 10.1049/htl.2017.0043] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022] Open
Abstract
Accurate optic disc (OD) segmentation is an important step in obtaining cup-to-disc ratio-based glaucoma screening using fundus imaging. It is a challenging task because of the subtle OD boundary, blood vessel occlusion and intensity inhomogeneity. In this Letter, the authors propose an improved version of the random walk algorithm for OD segmentation to tackle such challenges. The algorithm incorporates the mean curvature and Gabor texture energy features to define the new composite weight function to compute the edge weights. Unlike the deformable model-based OD segmentation techniques, the proposed algorithm remains unaffected by curve initialisation and local energy minima problem. The effectiveness of the proposed method is verified with DRIVE, DIARETDB1, DRISHTI-GS and MESSIDOR database images using the performance measures such as mean absolute distance, overlapping ratio, dice coefficient, sensitivity, specificity and precision. The obtained OD segmentation results and quantitative performance measures show robustness and superiority of the proposed algorithm in handling the complex challenges in OD segmentation.
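The composite weight function combines mean-curvature and Gabor texture energy dissimilarities with the usual intensity term when building the random-walk graph. A sketch of one plausible form, with hypothetical mixing coefficients `alpha` and scale `beta` (the Letter's exact formulation may differ):

```python
import math

def composite_weight(i, j, intensity, curvature, texture,
                     beta=10.0, alpha=(0.5, 0.3, 0.2)):
    """Hypothetical composite edge weight for a random-walk graph:
    a convex combination of intensity, mean-curvature, and Gabor-texture
    dissimilarities between pixels i and j, mapped through a Gaussian
    kernel so that similar pixels get weights near 1."""
    d = (alpha[0] * (intensity[i] - intensity[j]) ** 2
         + alpha[1] * (curvature[i] - curvature[j]) ** 2
         + alpha[2] * (texture[i] - texture[j]) ** 2)
    return math.exp(-beta * d)
```

Weights built this way are symmetric and lie in (0, 1], so they slot directly into the standard random-walker linear system.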
Affiliation(s)
- Rashmi Panda
- School of Electrical Sciences, Indian Institute of Technology Bhubaneswar, Bhubaneswar, Odisha 752050, India
- N B Puhan
- School of Electrical Sciences, Indian Institute of Technology Bhubaneswar, Bhubaneswar, Odisha 752050, India
- Ganapati Panda
- School of Electrical Sciences, Indian Institute of Technology Bhubaneswar, Bhubaneswar, Odisha 752050, India
|
19
|
Septiarini A, Khairina DM, Kridalaksana AH, Hamdani H. Automatic Glaucoma Detection Method Applying a Statistical Approach to Fundus Images. Healthc Inform Res 2018; 24:53-60. [PMID: 29503753 PMCID: PMC5820087 DOI: 10.4258/hir.2018.24.1.53] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2017] [Revised: 10/27/2017] [Accepted: 11/04/2017] [Indexed: 11/23/2022] Open
Abstract
OBJECTIVES Glaucoma is an incurable eye disease and the second leading cause of blindness in the world. The number of patients with this disease was estimated to increase through 2020. This paper proposes a glaucoma detection method using statistical features and the k-nearest neighbor algorithm as the classifier. METHODS We propose three statistical features, namely, the mean, smoothness and 3rd moment, which are extracted from images of the optic nerve head. These three features are obtained through feature extraction followed by feature selection using the correlation feature selection method. To classify these features, we apply the k-nearest neighbor algorithm as a classifier to perform glaucoma detection on fundus images. RESULTS To evaluate the performance of the proposed method, 84 fundus images were used as experimental data, consisting of 41 glaucoma images and 43 normal images. The performance of our proposed method was measured in terms of accuracy, and the overall accuracy achieved in this work was 95.24%. CONCLUSIONS This research showed that the proposed method using three statistical features achieves good performance for glaucoma detection.
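The three statistical features and the k-NN classifier described above can be sketched as follows, assuming the standard first-order texture definitions of mean, smoothness R = 1 - 1/(1 + variance), and third moment (the paper's exact normalization may differ):

```python
from collections import Counter

def statistical_features(region):
    """Mean, smoothness R = 1 - 1/(1 + variance), and third moment of an
    intensity region -- first-order descriptors matching the three
    feature names used in the paper."""
    n = len(region)
    m = sum(region) / n
    var = sum((x - m) ** 2 for x in region) / n
    mu3 = sum((x - m) ** 3 for x in region) / n
    return (m, 1.0 - 1.0 / (1.0 + var), mu3)

def knn_classify(features, train, k=3):
    """k-nearest-neighbour majority vote over labelled feature vectors;
    `train` is a list of (feature_tuple, label) pairs."""
    ranked = sorted(train, key=lambda t: sum((a - b) ** 2
                    for a, b in zip(features, t[0])))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]
```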
Affiliation(s)
- Anindita Septiarini
- Department of Computer Science, Faculty of Computer Science and Information Technology, Mulawarman University, Samarinda, Indonesia
- Dyna M. Khairina
- Department of Computer Science, Faculty of Computer Science and Information Technology, Mulawarman University, Samarinda, Indonesia
- Awang H. Kridalaksana
- Department of Computer Science, Faculty of Computer Science and Information Technology, Mulawarman University, Samarinda, Indonesia
- Hamdani Hamdani
- Department of Computer Science, Faculty of Computer Science and Information Technology, Mulawarman University, Samarinda, Indonesia
|
20
|
Hatanaka Y, Tajima M, Kawasaki R, Saito K, Ogohara K, Muramatsu C, Sunayama W, Fujita H. Retinal biometrics based on Iterative Closest Point algorithm. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2017; 2017:373-376. [PMID: 29059888 DOI: 10.1109/embc.2017.8036840] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
The pattern of blood vessels in the eye is unique to each person and rarely changes over time; it is therefore well known that retinal blood vessels are useful for biometrics. This paper describes a biometrics method using the Jaccard similarity coefficient (JSC) based on blood vessel regions in retinal image pairs. The retinal image pairs were roughly matched by the centers of their optic discs. The image pairs were then aligned using the Iterative Closest Point algorithm based on detailed blood vessel skeletons. For registration, a perspective transform was applied to the retinal images. Finally, the pairs were classified as either correct or incorrect using the JSC of the blood vessel regions in the image pairs. The proposed method was applied to temporal retinal images obtained in 2009 (695 images) and 2013 (87 images). The 87 images acquired in 2013 were all from persons already examined in 2009. The accuracy of the proposed method reached 100%.
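The Jaccard similarity coefficient on aligned vessel masks, and the resulting correct/incorrect pair decision, can be sketched as below; the decision threshold is hypothetical, and the ICP alignment step is assumed to have been done already:

```python
def jaccard(mask_a, mask_b):
    """Jaccard similarity coefficient between two vessel masks given as
    sets of (row, col) pixel coordinates: |A & B| / |A | B|."""
    a, b = set(mask_a), set(mask_b)
    return len(a & b) / len(a | b)

def same_person(mask_a, mask_b, threshold=0.5):
    """Classify an aligned retinal image pair as a match when the
    vessel-region JSC exceeds a decision threshold (value hypothetical)."""
    return jaccard(mask_a, mask_b) >= threshold
```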
|
21
|
Robust and accurate optic disk localization using vessel symmetry line measure in fundus images. Biocybern Biomed Eng 2017. [DOI: 10.1016/j.bbe.2017.05.008] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
|
22
|
Hatanaka Y, Tachiki H, Ogohara K, Muramatsu C, Okumura S, Fujita H. Artery and vein diameter ratio measurement based on improvement of arteries and veins segmentation on retinal images. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2016; 2016:1336-1339. [PMID: 28268572 DOI: 10.1109/embc.2016.7590954] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
Retinal arteriolar narrowing is assessed using the artery and vein diameter ratio (AVR). Previous methods segmented blood vessels and classified arteries and veins by the color of pixels on the centerlines of blood vessels. AVR was then determined by measuring artery and vein diameters. However, this approach was not sufficient for cases with close contact between the artery of interest and an adjacent vein. Here, an algorithm for AVR measurement based on an improved classification of arteries and veins is proposed. In this algorithm, additional steps for accurate segmentation of arteries and veins, which were not identified by the previous method, have been added to better identify major veins in the red channel of a color image. To identify major arteries, a decision tree with three features was used. As a result, all major veins and 90.9% of major arteries were correctly identified, and the absolute mean error in AVRs was 0.12. The proposed method will require further testing with a greater number of images showing arteriolar narrowing before clinical application.
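The AVR itself is a ratio of measured diameters. A sketch using one simple convention (mean artery diameter over mean vein diameter; the paper may pair specific vessels instead), plus the absolute mean error used for evaluation:

```python
def avr(artery_diameters, vein_diameters):
    """Artery-to-vein diameter ratio: mean artery diameter divided by
    mean vein diameter (one simple convention among several)."""
    mean_a = sum(artery_diameters) / len(artery_diameters)
    mean_v = sum(vein_diameters) / len(vein_diameters)
    return mean_a / mean_v

def mean_absolute_error(measured, reference):
    """Absolute mean error between measured and reference AVR values."""
    return sum(abs(m - r) for m, r in zip(measured, reference)) / len(measured)
```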
|
23
|
Sarathi MP, Dutta MK, Singh A, Travieso CM. Blood vessel inpainting based technique for efficient localization and segmentation of optic disc in digital fundus images. Biomed Signal Process Control 2016. [DOI: 10.1016/j.bspc.2015.10.012] [Citation(s) in RCA: 47] [Impact Index Per Article: 5.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
|
24
|
Wu M, Leng T, de Sisternes L, Rubin DL, Chen Q. Automated segmentation of optic disc in SD-OCT images and cup-to-disc ratios quantification by patch searching-based neural canal opening detection. OPTICS EXPRESS 2015; 23:31216-31229. [PMID: 26698750 DOI: 10.1364/oe.23.031216] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
Glaucoma is one of the most common causes of blindness worldwide. Early detection of glaucoma is traditionally based on assessment of the cup-to-disc (C/D) ratio, an important indicator of structural changes to the optic nerve head. Here, we present an automated optic disc segmentation algorithm in 3-D spectral domain optical coherence tomography (SD-OCT) volumes to quantify this ratio. The proposed algorithm utilizes a two-stage strategy. First, it detects the neural canal opening (NCO) by finding the points with maximum curvature on the retinal pigment epithelium (RPE) boundary with a spatial correlation smoothness constraint on consecutive B-scans, and it approximately locates the coarse disc margin in the projection image using convex hull fitting. Then, a patch searching procedure using a probabilistic support vector machine (SVM) classifier finds the most likely patch with the NCO in its center in order to refine the segmentation result. Thus, a reference plane can be determined to calculate the C/D ratio. Experimental results on 42 SD-OCT volumes from 17 glaucoma patients demonstrate that the proposed algorithm can achieve high segmentation accuracy and a low C/D ratio evaluation error. The unsigned border error for optic disc segmentation and the evaluation error for the C/D ratio, compared with manual segmentation, are 2.216 ± 1.406 pixels (0.067 ± 0.042 mm) and 0.045 ± 0.033, respectively.
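The unsigned border error reported above is the average distance from each segmented-boundary point to the nearest manual-boundary point. A minimal sketch on point lists (a symmetric, two-way average is also common):

```python
def unsigned_border_error(boundary, reference):
    """Mean unsigned border error: average Euclidean distance from each
    point on the segmented boundary to its closest point on the
    reference (manual) boundary."""
    def closest(p):
        return min(((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
                   for q in reference)
    return sum(closest(p) for p in boundary) / len(boundary)
```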
|
25
|
Issac A, Partha Sarathi M, Dutta MK. An adaptive threshold based image processing technique for improved glaucoma detection and classification. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2015; 122:229-244. [PMID: 26321351 DOI: 10.1016/j.cmpb.2015.08.002] [Citation(s) in RCA: 65] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/05/2015] [Revised: 07/14/2015] [Accepted: 08/03/2015] [Indexed: 06/04/2023]
Abstract
Glaucoma is an optic neuropathy which is one of the main causes of permanent blindness worldwide. This paper presents an automatic image processing based method for detection of glaucoma from digital fundus images. In this proposed work, discriminatory parameters of glaucoma, such as the cup-to-disc ratio (CDR), neuroretinal rim (NRR) area and blood vessels in different regions of the optic disc, have been used as features and fed as inputs to learning algorithms for glaucoma diagnosis. These features, which show discriminatory changes with the occurrence of glaucoma, are strategically used for training the classifiers to improve the accuracy of identification. The segmentation of the optic disc and cup is based on an adaptive threshold of the pixel intensities lying in the optic nerve head region. Unlike existing methods, the proposed algorithm is based on an adaptive threshold that uses local features from the fundus image for segmentation of the optic cup and optic disc, making it invariant to image quality and noise content, which may lead to wider acceptability. The experimental results indicate that such features are more significant in comparison to the statistical or textural features considered in existing works. The proposed work achieves an accuracy of 94.11% with a sensitivity of 100%. A comparison of the proposed work with existing methods indicates that the proposed approach has improved accuracy in classifying glaucoma from a digital fundus image, which may be considered clinically significant.
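One simple local adaptive threshold of the kind described, computed from statistics of the optic-nerve-head region of interest, might look like this (the mean-plus-k-standard-deviations form and the value of k are illustrative assumptions, not the paper's exact rule):

```python
def adaptive_threshold(roi, k=0.5):
    """Threshold an optic-nerve-head ROI (flat list of intensities) at
    T = mean + k * std, computed locally from the ROI itself, so the
    cut adapts to image quality and noise. Returns a boolean mask of
    pixels brighter than T (candidate disc/cup pixels)."""
    n = len(roi)
    m = sum(roi) / n
    std = (sum((x - m) ** 2 for x in roi) / n) ** 0.5
    t = m + k * std
    return [x > t for x in roi]
```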
Affiliation(s)
- Ashish Issac
- Department of Electronics & Communication Engineering, Amity University, Noida, India
- M Partha Sarathi
- Department of Electronics & Communication Engineering, Amity University, Noida, India
- Malay Kishore Dutta
- Department of Electronics & Communication Engineering, Amity University, Noida, India
|
26
|
Cheng J, Liu J, Yin F, Lee BH, Wong DWK, Aung T, Cheng CY, Wong TY. Self-assessment for optic disc segmentation. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2015; 2013:5861-4. [PMID: 24111072 DOI: 10.1109/embc.2013.6610885] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Optic disc segmentation from retinal fundus images is a fundamental but important step in many applications such as automated glaucoma diagnosis. Very often, one method might work well on many images but fail on some others, and it is difficult for a single method or model to cover all scenarios. Therefore, it is important to combine results from several methods to minimize the risk of failure. For this purpose, this paper computes confidence scores for three methods and combines their results into an optimal one. The experimental results show that the combined result from the three methods is better than the result of any individual method. It reduces the mean overlapping error by a relative 7.4% compared with the best individual method. Simultaneously, the number of failed cases with large overlapping errors is also greatly reduced. This is important for enhancing the clinical deployment of automated disc segmentation.
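Read as winner-take-all selection, the combination step reduces to picking the candidate segmentation with the highest self-assessed confidence; a confidence-weighted fusion of all three outputs is an alternative reading of the paper's idea. A sketch:

```python
def combine_segmentations(results):
    """Select the optic disc segmentation whose self-assessed confidence
    score is highest; `results` is a list of (confidence, segmentation)
    pairs from the individual methods."""
    return max(results, key=lambda r: r[0])[1]
```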
|
27
|
Sidibé D, Sadek I, Mériaudeau F. Discrimination of retinal images containing bright lesions using sparse coded features and SVM. Comput Biol Med 2015; 62:175-84. [PMID: 25935125 DOI: 10.1016/j.compbiomed.2015.04.026] [Citation(s) in RCA: 39] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2014] [Revised: 04/03/2015] [Accepted: 04/14/2015] [Indexed: 11/17/2022]
Affiliation(s)
- Désiré Sidibé
- Université de Bourgogne - LE2I, CNRS, UMR 6306, 12 rue de la fonderie, 71200 Le Creusot, France
- Ibrahim Sadek
- Université de Bourgogne - LE2I, CNRS, UMR 6306, 12 rue de la fonderie, 71200 Le Creusot, France
- Fabrice Mériaudeau
- Université de Bourgogne - LE2I, CNRS, UMR 6306, 12 rue de la fonderie, 71200 Le Creusot, France
|
28
|
Guo L, Yang JJ, Peng L, Li J, Liang Q. A computer-aided healthcare system for cataract classification and grading based on fundus image analysis. COMPUT IND 2015. [DOI: 10.1016/j.compind.2014.09.005] [Citation(s) in RCA: 33] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
|
29
|
Chen MF, Chui TYP, Alhadeff P, Rosen RB, Ritch R, Dubra A, Hood DC. Adaptive optics imaging of healthy and abnormal regions of retinal nerve fiber bundles of patients with glaucoma. Invest Ophthalmol Vis Sci 2015; 56:674-81. [PMID: 25574048 DOI: 10.1167/iovs.14-15936] [Citation(s) in RCA: 43] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
PURPOSE To better understand the nature of glaucomatous damage of the macula, especially the structural changes seen between relatively healthy and clearly abnormal (AB) retinal regions, using an adaptive optics scanning light ophthalmoscope (AO-SLO). METHODS Adaptive optics SLO images and optical coherence tomography (OCT) vertical line scans were obtained on one eye of seven glaucoma patients, with relatively deep local arcuate defects on the 10-2 visual field test in one (six eyes) or both hemifields (one eye). Based on the OCT images, the retinal nerve fiber (RNF) layer was divided into two regions: (1) within normal limits (WNL), relative RNF layer thickness within mean control values ±2 SD; and (2) AB, relative thickness less than -2 SD value. RESULTS As seen on AO-SLO, the pattern of AB RNF bundles near the border of the WNL and AB regions differed across eyes. There were normal-appearing bundles in the WNL region of all eyes and AB-appearing bundles near the border with the AB region. This region with AB bundles ranged in extent from a few bundles to the entire AB region in the case of one eye. All other eyes had a large AB region without bundles. However, in two of these eyes, a few bundles were seen within this region of otherwise missing bundles. CONCLUSIONS The AO-SLO images revealed details of glaucomatous damage that are difficult, if not impossible, to see with current OCT technology. Adaptive optics SLO may prove useful in following progression in clinical trials, or in disease management, if AO-SLO becomes widely available and easy to use.
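The WNL/AB split described in the methods is a simple thresholding rule on relative RNFL thickness: a location is labeled abnormal when its relative thickness falls below -2 SD of the control distribution. A sketch:

```python
def classify_rnfl(relative_thickness, control_sd):
    """Label a retinal-nerve-fiber-layer location 'AB' (abnormal) when
    its relative thickness falls below -2 standard deviations of the
    control mean, otherwise 'WNL' (within normal limits) -- the
    two-region split used to partition the OCT scans."""
    return 'AB' if relative_thickness < -2 * control_sd else 'WNL'
```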
Affiliation(s)
- Monica F Chen
- Department of Psychology, Columbia University, New York, New York, United States
- Toco Y P Chui
- New York Eye and Ear Infirmary of Mount Sinai, New York, New York, United States; Department of Ophthalmology, Icahn School of Medicine at Mount Sinai, New York, New York, United States
- Paula Alhadeff
- New York Eye and Ear Infirmary of Mount Sinai, New York, New York, United States
- Richard B Rosen
- New York Eye and Ear Infirmary of Mount Sinai, New York, New York, United States; Department of Ophthalmology, Icahn School of Medicine at Mount Sinai, New York, New York, United States
- Robert Ritch
- New York Eye and Ear Infirmary of Mount Sinai, New York, New York, United States
- Alfredo Dubra
- Medical College of Wisconsin, Milwaukee, Wisconsin, United States
- Donald C Hood
- Department of Psychology, Columbia University, New York, New York, United States; Department of Ophthalmology, Columbia University, New York, New York, United States
|
30
|
Santhi D, Manimegalai D, Karkuzhali S. DIAGNOSIS OF DIABETIC RETINOPATHY BY EXUDATES DETECTION USING CLUSTERING TECHNIQUES. BIOMEDICAL ENGINEERING: APPLICATIONS, BASIS AND COMMUNICATIONS 2014. [DOI: 10.4015/s101623721450077x] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Diabetes is a highly prevalent disease that affects the retina and can lead to blindness without early symptoms. An adverse change in retinal blood vessels that leads to vision loss is called diabetic retinopathy (DR). DR is one of the leading causes of blindness worldwide. There is increasing interest in designing medical systems for the screening and diagnosis of DR. Segmentation of exudates is essential for diagnostic purposes. In this regard, the optic disc (OD) center is detected by a template matching technique and then masked to avoid misclassification in the results of exudates detection. In this paper, we propose a novel K-means nearest neighbor algorithm, combining K-means with morphology and fuzzy clustering, to segment exudates. The main advantage of the proposed approach is that it does not depend upon manually selected parameters. The performance of these algorithms is compared with existing algorithms such as Fuzzy C-Means (FCM) and Spatially Weighted Fuzzy C-Means (SWFCM). These different segmentation algorithms are applied to the publicly available STARE data set, and the mean sensitivity, specificity and accuracy values for the fuzzy algorithm are found to be 91%, 94% and 93%, respectively, considerably higher than those of the other algorithms.
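The reported sensitivity, specificity and accuracy follow directly from the confusion counts of the per-pixel or per-image decisions. A sketch on paired boolean labels:

```python
def screening_metrics(predicted, actual):
    """Sensitivity, specificity, and accuracy from paired boolean labels
    (True = exudates present)."""
    tp = sum(p and a for p, a in zip(predicted, actual))
    tn = sum((not p) and (not a) for p, a in zip(predicted, actual))
    fp = sum(p and (not a) for p, a in zip(predicted, actual))
    fn = sum((not p) and a for p, a in zip(predicted, actual))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / len(actual)
    return sensitivity, specificity, accuracy
```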
Affiliation(s)
- D. Santhi
- Department of Electronics and Instrumentation Engineering, National Engineering College, Kovilpatti, Tamilnadu, India
- D. Manimegalai
- Department of Information Technology, National Engineering College, Kovilpatti, Tamilnadu, India
- S. Karkuzhali
- Department of Information Technology, National Engineering College, Kovilpatti, Tamilnadu, India
|
31
|
MacGillivray TJ, Trucco E, Cameron JR, Dhillon B, Houston JG, van Beek EJR. Retinal imaging as a source of biomarkers for diagnosis, characterization and prognosis of chronic illness or long-term conditions. Br J Radiol 2014; 87:20130832. [PMID: 24936979 PMCID: PMC4112401 DOI: 10.1259/bjr.20130832] [Citation(s) in RCA: 75] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2013] [Revised: 05/09/2014] [Accepted: 06/16/2014] [Indexed: 11/05/2022] Open
Abstract
The black void behind the pupil was optically impenetrable before the invention of the ophthalmoscope by von Helmholtz over 150 years ago. Advances in retinal imaging and image processing, especially over the past decade, have opened a route to another unexplored landscape, the retinal neurovascular architecture and the retinal ganglion pathways linking to the central nervous system beyond. Exploiting these research opportunities requires multidisciplinary teams to explore the interface sitting at the border between ophthalmology, neurology and computing science. It is from the detail and depth of retinal phenotyping that novel metrics and candidate biomarkers are likely to emerge. Confirmation that in vivo retinal neurovascular measures are predictive of microvascular change in the brain and other organs is likely to be a major area of research activity over the next decade. Unlocking this hidden potential within the retina requires integration of structural and functional data sets, that is, multimodal mapping and longitudinal studies spanning the natural history of the disease process. And with further advances in imaging, it is likely that this area of retinal research will remain active and clinically relevant for many years to come. Accordingly, this review looks at state-of-the-art retinal imaging and its application to diagnosis, characterization and prognosis of chronic illness or long-term conditions.
Affiliation(s)
- T J MacGillivray
- Vampire Project, Clinical Research Imaging Centre, University of Edinburgh, Edinburgh, UK
32
Fuente-Arriaga JADL, Felipe-Riverón EM, Garduño-Calderón E. Application of vascular bundle displacement in the optic disc for glaucoma detection using fundus images. Comput Biol Med 2014; 47:27-35. [DOI: 10.1016/j.compbiomed.2014.01.005]
33
Hatanaka Y, Nagahata Y, Muramatsu C, Okumura S, Ogohara K, Sawada A, Ishida K, Yamamoto T, Fujita H. Improved automated optic cup segmentation based on detection of blood vessel bends in retinal fundus images. Annu Int Conf IEEE Eng Med Biol Soc 2014; 2014:126-129. [PMID: 25569913] [DOI: 10.1109/embc.2014.6943545]
Abstract
Glaucoma is a leading cause of permanent blindness, and retinal imaging is useful for its early detection. To evaluate the presence of glaucoma, ophthalmologists may determine the cup and disc areas and diagnose glaucoma using the vertical optic cup-to-disc (C/D) ratio and the rim-to-disc (R/D) ratio. We previously proposed a method to determine the cup edge by analyzing a vertical profile of pixel values, but it produced a cup edge smaller than that drawn by an ophthalmologist. This paper describes an improved method that uses the locations of blood vessel bends. The blood vessels were detected by a concentration feature determined from the density gradient, and the vessel bends were detected by tracking the blood vessels from the disc edge to the primary cup edge determined by our previous method. Lastly, the vertical C/D ratio and the R/D ratio were calculated. On 44 images, including 32 glaucoma images, the proposed method achieved AUCs of 0.966 for the vertical C/D ratio and 0.936 for the R/D ratio.
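The final step of this pipeline, computing the vertical C/D ratio from the segmented regions, reduces to a ratio of vertical extents. A minimal NumPy sketch, with synthetic circular masks standing in for the paper's vessel-bend-based segmentation:

```python
import numpy as np

def vertical_cdr(disc_mask: np.ndarray, cup_mask: np.ndarray) -> float:
    """Vertical cup-to-disc ratio: vertical extent of the cup over that of the disc."""
    disc_rows = np.where(disc_mask.any(axis=1))[0]
    cup_rows = np.where(cup_mask.any(axis=1))[0]
    disc_h = disc_rows[-1] - disc_rows[0] + 1
    cup_h = (cup_rows[-1] - cup_rows[0] + 1) if cup_rows.size else 0
    return cup_h / disc_h

# toy masks: a disc of radius 20 px with a concentric cup of radius 10 px
yy, xx = np.mgrid[:64, :64]
disc = (yy - 32) ** 2 + (xx - 32) ** 2 <= 20 ** 2
cup = (yy - 32) ** 2 + (xx - 32) ** 2 <= 10 ** 2
print(round(vertical_cdr(disc, cup), 2))  # 21-row cup over 41-row disc → 0.51
```

The R/D ratio used alongside it would additionally require the rim width, which depends on how the rim is measured in the original method.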
34
Yang X, Yu HC, Choi Y, Lee W, Wang B, Yang J, Hwang H, Kim JH, Song J, Cho BH, You H. A hybrid semi-automatic method for liver segmentation based on level-set methods using multiple seed points. Comput Methods Programs Biomed 2013; 113:69-79. [PMID: 24113421] [DOI: 10.1016/j.cmpb.2013.08.019]
Abstract
The present study developed a hybrid semi-automatic method to extract the liver from abdominal computerized tomography (CT) images. The proposed method consists of a customized fast-marching level-set method, which detects an optimal initial liver region from multiple user-selected seed points, and a threshold-based level-set method, which extracts the actual liver region from that initial region. Its performance was compared with the 2D region growing method implemented in OsiriX on abdominal CT datasets of 15 patients. The hybrid method showed significantly higher accuracy in liver extraction (similarity index, SI = 97.6 ± 0.5%; false positive error, FPE = 2.2 ± 0.7%; false negative error, FNE = 2.5 ± 0.8%; average symmetric surface distance, ASD = 1.4 ± 0.5 mm) than the 2D region growing method (SI = 94.0 ± 1.9%; FPE = 5.3 ± 1.1%; FNE = 6.5 ± 3.7%; ASD = 6.7 ± 3.8 mm). The total liver extraction time per CT dataset was significantly shorter for the hybrid method (77 ± 10 s) than for 2D region growing (575 ± 136 s), as was the user-computer interaction time per dataset (28 ± 4 s vs. 484 ± 126 s). The proposed hybrid method is therefore preferable for liver segmentation in preoperative virtual liver surgery planning.
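The overlap metrics quoted here (SI, FPE, FNE) have standard set-based forms, though exact normalizations vary between papers. A sketch of one common set of definitions, with SI as the Dice coefficient and both error rates normalized by the reference region:

```python
import numpy as np

def seg_metrics(pred: np.ndarray, ref: np.ndarray):
    """Similarity index (Dice), false positive error, false negative error, in percent.

    One common set of definitions; the paper may normalize differently.
    """
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = (pred & ref).sum()
    si = 200.0 * inter / (pred.sum() + ref.sum())
    fpe = 100.0 * (pred & ~ref).sum() / ref.sum()
    fne = 100.0 * (~pred & ref).sum() / ref.sum()
    return float(si), float(fpe), float(fne)

# a perfect segmentation scores SI = 100%, FPE = 0%, FNE = 0%
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True
print(seg_metrics(mask, mask))  # → (100.0, 0.0, 0.0)
```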
Affiliation(s)
- Xiaopeng Yang
- Pohang University of Science and Technology, Pohang 790-784, South Korea
35
Cheng J, Liu J, Xu Y, Yin F, Wong DWK, Lee BH, Cheung C, Aung T, Wong TY. Superpixel classification for initialization in model based optic disc segmentation. Annu Int Conf IEEE Eng Med Biol Soc 2013; 2012:1450-3. [PMID: 23366174] [DOI: 10.1109/embc.2012.6346213]
Abstract
Optic disc segmentation in retinal fundus images is important in ocular image analysis and computer-aided diagnosis. Because peripapillary atrophy affects the deformation, a good initialization is important in deformable-model-based optic disc segmentation. In this paper, a superpixel classification based method is proposed for the initialization. It uses histograms of superpixels from the contrast-enhanced image as features. In training, bootstrapping is adopted to handle the unbalanced cluster issue caused by peripapillary atrophy. A self-assessment reliability score is computed to evaluate the quality of the initialization and the segmentation. The proposed method has been tested on a database of 650 images with optic disc boundaries marked manually by trained professionals. The experimental results show a mean overlapping error of 10.0% with a standard deviation of 7.5% in the best scenario, and an increase in overlapping error as the reliability score decreases, which supports the effectiveness of the self-assessment. The method can be used for optic disc boundary initialization and segmentation in computer-aided diagnosis systems, with the self-assessment serving as an indicator of cases with large errors.
Affiliation(s)
- Jun Cheng
- Institute for Infocomm Research, A*Star, Singapore
36
Cheng J, Liu J, Xu Y, Yin F, Wong DWK, Tan NM, Tao D, Cheng CY, Aung T, Wong TY. Superpixel classification based optic disc and optic cup segmentation for glaucoma screening. IEEE Trans Med Imaging 2013; 32:1019-32. [PMID: 23434609] [DOI: 10.1109/tmi.2013.2247770]
Abstract
Glaucoma is a chronic eye disease that leads to vision loss. As it cannot be cured, detecting the disease in time is important. Current tests using intraocular pressure (IOP) are not sensitive enough for population-based glaucoma screening; optic nerve head assessment in retinal fundus images is more promising. This paper proposes optic disc and optic cup segmentation using superpixel classification for glaucoma screening. In optic disc segmentation, histograms and center surround statistics are used to classify each superpixel as disc or non-disc, and a self-assessment reliability score is computed to evaluate the quality of the automated optic disc segmentation. For optic cup segmentation, location information is included in the feature space, in addition to the histograms and center surround statistics, to boost performance. The proposed segmentation methods have been evaluated on a database of 650 images with optic disc and optic cup boundaries manually marked by trained professionals. Experimental results show average overlapping errors of 9.5% and 24.1% for optic disc and optic cup segmentation, respectively, and an increase in overlapping error as the reliability score decreases, which supports the effectiveness of the self-assessment. The segmented optic disc and optic cup are then used to compute the cup-to-disc ratio for glaucoma screening; the proposed method achieves areas under the curve of 0.800 and 0.822 on two data sets, higher than other methods. The methods can be used for segmentation and glaucoma screening, with the self-assessment flagging cases with large errors to support clinical deployment of the automatic segmentation and screening.
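The "overlapping error" reported by this and the preceding entry is typically the complement of the Jaccard overlap between automatic and manual regions. A minimal sketch under that assumption:

```python
import numpy as np

def overlap_error(seg: np.ndarray, ref: np.ndarray) -> float:
    """1 - |A∩B|/|A∪B|: one common definition of the reported 'overlapping error'."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    return 1.0 - (seg & ref).sum() / (seg | ref).sum()

a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True  # 36-px square region
b = np.zeros((10, 10), dtype=bool); b[3:9, 3:9] = True  # same square shifted by 1 px
print(round(overlap_error(a, b), 3))  # 25-px overlap, 47-px union → 0.468
```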
Affiliation(s)
- Jun Cheng
- iMED Ocular Imaging Programme in Institute for Infocomm Research, Agency for Science, Technology and Research, Singapore.
37
Trucco E, Ruggeri A, Karnowski T, Giancardo L, Chaum E, Hubschman JP, Al-Diri B, Cheung CY, Wong D, Abràmoff M, Lim G, Kumar D, Burlina P, Bressler NM, Jelinek HF, Meriaudeau F, Quellec G, Macgillivray T, Dhillon B. Validating retinal fundus image analysis algorithms: issues and a proposal. Invest Ophthalmol Vis Sci 2013; 54:3546-59. [PMID: 23794433] [DOI: 10.1167/iovs.12-10347]
Abstract
This paper concerns the validation of automatic retinal image analysis (ARIA) algorithms. For reasons of space and consistency, we concentrate on the validation of algorithms processing color fundus camera images, currently the largest section of the ARIA literature. We sketch the context (imaging instruments and target tasks) of ARIA validation, summarizing the main image analysis and validation techniques. We then present a list of recommendations focusing on the creation of large repositories of test data created by international consortia, easily accessible via moderated Web sites, including multicenter annotations by multiple experts, specific to clinical tasks, and capable of running submitted software automatically on the data stored, with clear and widely agreed-on performance criteria, to provide a fair comparison.
Affiliation(s)
- Emanuele Trucco
- VAMPIRE project, School of Computing, University of Dundee, Dundee, United Kingdom.
38
Detection of Human Retina Images Suspect of Glaucoma through the Vascular Bundle Displacement in the Optic Disc. 2013. [DOI: 10.1007/978-3-642-45114-0_41]
39
Alayon S, Gonzalez de la Rosa M, Fumero FJ, Sigut Saavedra JF, Sanchez JL. Variability between experts in defining the edge and area of the optic nerve head. 2012; 88:168-73. [PMID: 23623016] [DOI: 10.1016/j.oftal.2012.07.008]
Abstract
OBJECTIVE To estimate the error rate in the subjective determination of the optic nerve head edge and area. METHODS 1) 169 optic nerve head images were evaluated by five experts, who defined the edges at 8 positions (every 45°). 2) The estimated areas of 26 cases were compared with measurements from Cirrus optical coherence tomography (OCT-Cirrus). RESULTS 1) The mean variation of the estimated radius was ±5.2%, with no significant differences between sectors. Specific differences were found between the 5 experts (P < .001), each compared with the others. 2) The disc area measured by OCT-Cirrus was 1.78 mm² (SD = 0.27). The results of the experts who detected smaller areas correlated better with the OCT-Cirrus area (r = 0.77-0.88) than those of the experts who detected larger areas (r = 0.61-0.69) (P < .05 in extreme cases). CONCLUSIONS Each expert shows specific patterns in defining the disc edges, which involve a 20% variation in the estimated optic nerve area. The experts who detected smaller areas agreed more closely with the objective method. A web tool is proposed for self-assessment and training in this task.
Affiliation(s)
- S Alayon
- Departamento de Ingeniería de Sistemas y Automática y Arquitectura y Tecnología de Computadores, Universidad de La Laguna, La Laguna, Spain.
40
Yin F, Liu J, Ong SH, Sun Y, Wong DWK, Tan NM, Cheung C, Baskaran M, Aung T, Wong TY. Model-based optic nerve head segmentation on retinal fundus images. Annu Int Conf IEEE Eng Med Biol Soc 2012; 2011:2626-9. [PMID: 22254880] [DOI: 10.1109/iembs.2011.6090724]
Abstract
The optic nerve head (optic disc) plays an important role in the diagnosis of retinal diseases, and automatic localization and segmentation of the optic disc are critical to a good computer-aided diagnosis (CAD) system. In this paper, we propose a method that combines edge detection, the circular Hough transform and a statistical deformable model to detect the optic disc in retinal fundus images. The algorithm was evaluated on a data set of 325 digital color fundus images, including both normal images and images with various pathologies. The average error in area overlap is 11.3% and the average absolute area error is 10.8%, which outperforms existing methods. The results correlate highly with the ground truth segmentation, demonstrating good potential for this system to be integrated with other retinal CAD systems.
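The circular Hough transform step can be sketched in a few lines: each edge pixel votes for candidate circle centres lying one radius away from it. A minimal single-radius accumulator (a sketch only; the paper combines this with edge detection and a statistical deformable model):

```python
import numpy as np

def hough_circle(edge_mask: np.ndarray, radius: float) -> np.ndarray:
    """Vote accumulator for circle centres at one fixed radius."""
    h, w = edge_mask.shape
    acc = np.zeros((h, w), dtype=np.int32)
    ys, xs = np.nonzero(edge_mask)
    for t in np.linspace(0.0, 2.0 * np.pi, 90, endpoint=False):
        cy = np.round(ys - radius * np.sin(t)).astype(int)
        cx = np.round(xs - radius * np.cos(t)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)  # unbuffered accumulation of votes
    return acc

# synthetic circular edge of radius 15 centred at (40, 40)
yy, xx = np.mgrid[:80, :80]
edges = np.abs(np.hypot(yy - 40, xx - 40) - 15) < 0.5
acc = hough_circle(edges, 15)
print(np.unravel_index(acc.argmax(), acc.shape))  # accumulator peak at/near the true centre
```

In practice the accumulator is built over a range of radii and the global peak gives both centre and disc radius.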
Affiliation(s)
- Fengshou Yin
- Institute for Infocomm Research, A*STAR, Singapore. fyin@i2r.a-star.edu.sg
41
Hatanaka Y, Noudo A, Muramatsu C, Sawada A, Hara T, Yamamoto T, Fujita H. Automatic measurement of cup to disc ratio based on line profile analysis in retinal images. Annu Int Conf IEEE Eng Med Biol Soc 2012; 2011:3387-90. [PMID: 22255066] [DOI: 10.1109/iembs.2011.6090917]
Abstract
Retinal image examination is useful for early detection of glaucoma, a leading cause of permanent blindness. To evaluate the presence of glaucoma, ophthalmologists may determine the cup and disc areas and diagnose glaucoma using the vertical cup-to-disc ratio. However, determining the cup area algorithmically is very difficult, so we propose a method to measure the cup-to-disc ratio using a vertical profile on the optic disc. The edge of the optic disc was detected with a Canny edge detection filter, and the profile was obtained around the center of the optic disc. The edges of the cup area were then determined by classifying the profiles with a zero-crossing method. Lastly, the vertical cup-to-disc ratio was calculated. On 45 images, including 23 glaucoma images, the method achieved an AUC of 0.947.
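The profile analysis described here amounts to finding where the derivative of a smoothed vertical brightness profile changes sign. A minimal sketch, with a synthetic profile standing in for real fundus data (the paper's classification step is more elaborate than this):

```python
import numpy as np

def zero_crossings(profile: np.ndarray, smooth: int = 5) -> np.ndarray:
    """Indices where the sign of the smoothed first derivative changes."""
    kernel = np.ones(smooth) / smooth          # simple moving-average smoothing
    p = np.convolve(profile, kernel, mode="same")
    sign = np.sign(np.diff(p))
    return np.where(np.diff(sign) != 0)[0] + 1

# synthetic vertical brightness profile: darker rim rising to a bright cup plateau
profile = np.concatenate([
    np.linspace(50, 200, 30),   # rim brightening toward the cup
    np.full(20, 200.0),         # bright cup plateau
    np.linspace(200, 50, 30),   # rim darkening again
])
print(zero_crossings(profile))  # two candidate cup-edge indices near 30 and 50
```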
Affiliation(s)
- Yuji Hatanaka
- Department of Electronic Systems Engineering, School of Engineering, the University of Shiga Prefecture, Hassaka-cho 2500, Hikone-shi, Shiga 522-8533, Japan.
42
Muramatsu C, Hatanaka Y, Sawada A, Yamamoto T, Fujita H. Computerized detection of peripapillary chorioretinal atrophy by texture analysis. Annu Int Conf IEEE Eng Med Biol Soc 2012; 2011:5947-50. [PMID: 22255694] [DOI: 10.1109/iembs.2011.6091470]
Abstract
The presence of peripapillary chorioretinal atrophy (PPA) is considered one of the risk factors for glaucoma. PPA appears as bright regions in retinal fundus images and can therefore be incorrectly included as part of the optic disc region by automated disc detection schemes. For potential risk assessment and for improving optic disc segmentation, computerized detection of PPA was investigated. Using texture analysis, the sensitivity for detecting moderate to severe PPA regions in the test dataset was 73%, with a specificity of 95%. The proposed method may be useful for identifying cases with PPA in retinal fundus images.
Affiliation(s)
- Chisako Muramatsu
- Department of Intelligent Image Information, Graduate School of Medicine, Gifu University, Gifu 501-1194, Japan.
43
Muramatsu C, Nakagawa T, Sawada A, Hatanaka Y, Yamamoto T, Fujita H. Automated determination of cup-to-disc ratio for classification of glaucomatous and normal eyes on stereo retinal fundus images. J Biomed Opt 2011; 16:096009. [PMID: 21950923] [DOI: 10.1117/1.3622755]
Abstract
Early diagnosis of glaucoma, the second leading cause of blindness in the world, can halt or slow the progression of the disease. We propose an automated method for analyzing the optic disc and measuring the cup-to-disc ratio (CDR) on stereo retinal fundus images, to improve ophthalmologists' diagnostic efficiency and potentially reduce variation in CDR measurement. The method was developed using 80 retinal fundus image pairs, including 25 glaucomatous and 55 nonglaucomatous eyes, obtained at our institution. The disc region was segmented using the active contour method with brightness and edge information; the cup region was segmented using a depth map of the optic disc reconstructed from the stereo disparity. The CDRs were measured and compared with those determined from manual segmentation by an expert ophthalmologist. The method was then applied to a new database of 98 stereo image pairs, including 60 pairs with and 30 pairs without signs of glaucoma. Using the CDRs, an area under the receiver operating characteristic curve of 0.90 was obtained for classification of glaucomatous and nonglaucomatous eyes, indicating the potential usefulness of automated CDR determination for the diagnosis of glaucoma.
Affiliation(s)
- Chisako Muramatsu
- Gifu University, Graduate School of Medicine, Department of Intelligent Image Information 1-1 Yanagido, Gifu 501-1194, Japan
44
Muramatsu C, Hatanaka Y, Iwase T, Hara T, Fujita H. Automated selection of major arteries and veins for measurement of arteriolar-to-venular diameter ratio on retinal fundus images. Comput Med Imaging Graph 2011; 35:472-80. [PMID: 21489750] [DOI: 10.1016/j.compmedimag.2011.03.002]
Abstract
An automated method for measurement of arteriolar-to-venular diameter ratio (AVR) is presented. The method includes optic disc segmentation for the determination of the AVR measurement zone, retinal vessel segmentation, vessel classification into arteries and veins, selection of major vessel pairs, and measurement of AVRs. The sensitivity for the major vessels in the measurement zone was 87%, while 93% of them were classified correctly into arteries or veins. In 36 out of 40 vessel pairs, at least parts of the paired vessels were correctly identified. Although the average error in the AVRs with respect to those based on the manual vessel segmentation results was 0.11, the average error in vessel diameter was less than 1 pixel. The proposed method may be useful for objective evaluation of AVRs and has a potential for detecting focal arteriolar narrowing on macula-centered screening fundus images.
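An AVR is the ratio of a summary arteriolar calibre to a summary venular calibre. As an illustration only (using the pairing constants of Knudtson's revised formulas, which is a common convention but not necessarily the measure used in this paper), a single widest pair of branch widths can be combined into a trunk estimate:

```python
import math

def combine_pair(w1: float, w2: float, k: float) -> float:
    """Combine two branch widths into a parent-trunk estimate: k * sqrt(w1^2 + w2^2)."""
    return k * math.hypot(w1, w2)

# hypothetical branch widths in pixels for the widest artery and vein pairs
crae = combine_pair(12.0, 10.0, 0.88)  # arteriolar constant (Knudtson revised)
crve = combine_pair(15.0, 13.0, 0.95)  # venular constant
print(round(crae / crve, 3))  # → 0.729
```

With the full set of six major arteries and veins, the pairing is repeated iteratively until one summary calibre remains for each vessel type.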
Affiliation(s)
- Chisako Muramatsu
- Department of Intelligent Image Information, Graduate School of Medicine, Gifu University, 1-1 Yanagido, Gifu 501-1194, Japan.