1
AlShawabkeh M, AlRyalat SA, Al Bdour M, Alni’mat A, Al-Akhras M. The utilization of artificial intelligence in glaucoma: diagnosis versus screening. Front Ophthalmol 2024; 4:1368081. [PMID: 38984126] [PMCID: PMC11182276] [DOI: 10.3389/fopht.2024.1368081]
Abstract
With advancements in the implementation of artificial intelligence (AI) across ophthalmology disciplines, AI continues to have a significant impact on glaucoma diagnosis and screening. This article explores the distinct roles of AI in specialized ophthalmology clinics and general practice, highlighting the critical balance between sensitivity and specificity in diagnostic and screening models. Screening models prioritize sensitivity to detect potential glaucoma cases efficiently, while diagnostic models emphasize specificity to confirm disease with high accuracy. AI applications, primarily using machine learning (ML) and deep learning (DL), have been successful in detecting glaucomatous optic neuropathy from color fundus photographs and other retinal imaging modalities. Diagnostic models integrate data extracted from various modalities (including tests that assess structural optic nerve damage as well as those evaluating functional damage) to provide a more nuanced, accurate and thorough approach to diagnosing glaucoma. As AI continues to evolve, the collaboration between technology and clinical expertise should focus on improving the specificity of glaucoma diagnostic models to assist ophthalmologists, revolutionize glaucoma diagnosis and improve patient care.
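A minimal sketch of the sensitivity-versus-specificity trade-off discussed in this abstract: the same classifier score can be cut at a screening threshold (high sensitivity) or a diagnostic threshold (high specificity). The scores below are synthetic and the 95% targets are illustrative assumptions, not values from the article.

# Illustrative only: choosing screening vs. diagnostic operating points.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
y_true = np.concatenate([np.zeros(500), np.ones(100)])        # 0 = healthy, 1 = glaucoma
scores = np.concatenate([rng.normal(0.3, 0.15, 500),          # healthy score distribution
                         rng.normal(0.7, 0.15, 100)])         # glaucoma score distribution

fpr, tpr, thr = roc_curve(y_true, scores)

screen_thr = thr[np.argmax(tpr >= 0.95)]   # screening: first threshold reaching >= 95% sensitivity
diag_thr = thr[fpr <= 0.05][-1]            # diagnosis: lowest threshold keeping specificity >= 95%

print(f"screening threshold {screen_thr:.2f}, diagnostic threshold {diag_thr:.2f}")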
Affiliation(s)
- Saif Aldeen AlRyalat: Department of Ophthalmology, The University of Jordan, Amman, Jordan; Department of Ophthalmology, Houston Methodist Hospital, Houston, TX, United States
- Muawyah Al Bdour: Department of Ophthalmology, The University of Jordan, Amman, Jordan
- Ayat Alni’mat: Department of Ophthalmology, Al Taif Eye Center, Amman, Jordan
- Mousa Al-Akhras: Department of Computer Information Systems, School of Information Technology and Systems, The University of Jordan, Amman, Jordan
2
Abdushkour H, Soomro TA, Ali A, Ali Jandan F, Jelinek H, Memon F, Althobiani F, Mohammed Ghonaim S, Irfan M. Enhancing fine retinal vessel segmentation: Morphological reconstruction and double thresholds filtering strategy. PLoS One 2023; 18:e0288792. [PMID: 37467245] [DOI: 10.1371/journal.pone.0288792]
Abstract
Eye diseases such as diabetic retinopathy are progressive, producing various changes in the retinal vessels that are difficult to analyze for treatment planning. Many computerized algorithms have been implemented for retinal vessel segmentation, but tiny vessels are often dropped, degrading overall performance. This work combines image processing techniques such as enhancement filters, coherence filters and binary thresholding to handle the various problems of color retinal fundus images and achieve a well-segmented vessel image, and the proposed algorithm improves on existing work. The developed technique incorporates morphological operations to address the central light reflex issue. To resolve insufficient and varying contrast, it employs homomorphic methods and Wiener filtering. Coherence filters address the coherence of the retinal vessels, and a double thresholding technique is then applied with image reconstruction to obtain a correctly segmented vessel image. The technique was evaluated on the STARE and DRIVE datasets, achieving an accuracy of about 0.96 and a sensitivity of 0.81. This performance demonstrates the method's potential for use by ophthalmology experts to diagnose ocular abnormalities and recommend further treatment.
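A rough sketch of a pipeline in the spirit described above (green-channel extraction, Wiener denoising, morphological background suppression, and a double-threshold step implemented here as hysteresis thresholding). The file path, structuring-element size and percentile thresholds are placeholder assumptions, not the authors' settings.

# Illustrative preprocessing and double-threshold segmentation sketch.
import numpy as np
from scipy.signal import wiener
from skimage import io, img_as_float
from skimage.filters import apply_hysteresis_threshold
from skimage.morphology import black_tophat, disk

fundus = img_as_float(io.imread("fundus.png"))     # hypothetical RGB fundus image
green = fundus[..., 1]                             # vessel contrast is best in the green channel

denoised = wiener(green, mysize=5)                 # Wiener filtering for noise and varying contrast
vessels_enh = black_tophat(denoised, disk(8))      # enhance dark vessels on a bright background

# Double thresholds: weak responses are kept only when connected to strong ones.
low, high = np.percentile(vessels_enh, [90, 97])
vessel_mask = apply_hysteresis_threshold(vessels_enh, low, high)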
Affiliation(s)
- Hesham Abdushkour: Nautical Science Department, Faculty of Maritime, King Abdul Aziz University, Jeddah, Saudi Arabia
- Toufique A Soomro: Department of Electronic Engineering, Quaid-e-Awam University of Engineering, Science and Technology, Larkana Campus, Sukkur, Pakistan
- Ahmed Ali: Electrical Engineering Department, Sukkur IBA University, Sukkur, Pakistan
- Fayyaz Ali Jandan: Electrical Engineering Department, Quaid-e-Awam University of Engineering, Science and Technology, Larkana Campus, Sukkur, Pakistan
- Herbert Jelinek: Health Engineering Innovation Center and Biotechnology Center, Khalifa University, Abu Dhabi, UAE
- Farida Memon: Department of Electronic Engineering, Mehran University, Jamshoro, Pakistan
- Faisal Althobiani: Marine Engineering Department, Faculty of Maritime, King Abdul Aziz University, Jeddah, Saudi Arabia
- Saleh Mohammed Ghonaim: Marine Engineering Department, Faculty of Maritime, King Abdul Aziz University, Jeddah, Saudi Arabia
- Muhammad Irfan: Electrical Engineering Department, College of Engineering, Najran University, Najran, Saudi Arabia
3
Calabrèse A, Fournet V, Dours S, Matonti F, Castet E, Kornprobst P. A New Vessel-Based Method to Estimate Automatically the Position of the Nonfunctional Fovea on Altered Retinography From Maculopathies. Transl Vis Sci Technol 2023; 12:9. [PMID: 37418249] [PMCID: PMC10337789] [DOI: 10.1167/tvst.12.7.9]
Abstract
Purpose: The purpose of this study was to validate a new automated method to locate the fovea on normal and pathological fundus images. Compared to normative anatomic measures (NAMs), our vessel-based fovea localization (VBFL) approach relies on the retina's vessel structure to make predictions. Methods: The spatial relationship between the fovea location and vessel characteristics is learnt from healthy fundus images and then used to predict fovea location in new images. We evaluate the VBFL method on three categories of fundus images: healthy images acquired with different head orientations and fixation locations, healthy images with simulated macular lesions, and pathological images from age-related macular degeneration (AMD). Results: For healthy images taken with the head tilted to the side, the NAM estimation error increases significantly, by a factor of 4, whereas VBFL shows no significant increase, representing a 73% reduction in prediction error. With simulated lesions, VBFL performance decreases significantly as lesion size increases and remains better than NAM until lesion size reaches 200 square degrees. For pathological images, average prediction error was 2.8 degrees, with 64% of the images yielding an error of 2.5 degrees or less. VBFL was not robust for images showing darker regions and/or incomplete representation of the optic disk. Conclusions: The vascular structure provides enough information to precisely locate the fovea in fundus images in a way that is robust to head tilt, eccentric fixation location, missing vessels, and actual macular lesions. Translational Relevance: The VBFL method should allow researchers and clinicians to assess automatically the eccentricity of a newly developed area of fixation in fundus images with macular lesions.
Affiliation(s)
- Aurélie Calabrèse: Aix-Marseille Univ, CNRS, LPC, Marseille, France; Université Côte d'Azur, Inria, France
- Frédéric Matonti: Centre Monticelli Paradis d'Ophtalmologie, Marseille, France; Aix-Marseille Univ, CNRS, INT, Marseille, France; Groupe Almaviva Santé, Clinique Juge, Marseille, France
- Eric Castet: Aix-Marseille Univ, CNRS, LPC, Marseille, France
4
Wang CY, Mukundan A, Liu YS, Tsao YM, Lin FC, Fan WS, Wang HC. Optical Identification of Diabetic Retinopathy Using Hyperspectral Imaging. J Pers Med 2023; 13:939. [PMID: 37373927] [DOI: 10.3390/jpm13060939]
Abstract
The severity of diabetic retinopathy (DR) is directly correlated with changes in both the oxygen utilization rate of retinal tissue and the blood oxygen saturation of arteries and veins. The current stage of DR in a patient can therefore be identified by analyzing the oxygen content of blood vessels in fundus images, enabling medical professionals to make accurate and prompt judgments about the patient's condition. To apply this method as a supplementary diagnostic tool, blood vessels in the fundus images must first be detected, and arteries and veins then differentiated from one another. The study was accordingly split into three parts. First, the background of the fundus images was removed with image processing and the blood vessels were separated from it. Second, hyperspectral imaging (HSI) was used to construct the spectral data, and the HSI algorithm was used to analyze and simulate the overall reflection spectrum of the retinal image. Third, principal component analysis (PCA) was performed to simplify the data and obtain the principal-component score plots for retinopathy in arteries and veins at all stages; the arteries and veins in the original fundus images were then separated using the score plots for each stage. As retinopathy progresses, the difference in reflectance between arteries and veins gradually decreases, making the PCA results harder to differentiate in later stages and lowering precision and sensitivity. Consequently, the precision and sensitivity of the HSI method are highest in the normal stage and lowest in the proliferative DR (PDR) stage, while the values are comparable between the background DR (BDR) and pre-proliferative DR (PPDR) stages because both exhibit similar clinical-pathological severity. The sensitivity values for arteries are 82.4%, 77.5%, 78.1%, and 72.9% in the normal, BDR, PPDR, and PDR stages, respectively, and for veins 88.5%, 85.4%, 81.4%, and 75.1%.
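A hedged sketch of the PCA step this abstract describes: per-vessel reflectance spectra are projected onto two principal components whose score plot can be inspected for artery/vein separation. The wavelength grid and spectra below are synthetic stand-ins, not hyperspectral data from the study.

# Illustrative PCA on synthetic reflectance spectra.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
wavelengths = np.linspace(450, 700, 60)                        # nm, hypothetical band sampling

arteries = 0.55 + 0.10 * np.sin(wavelengths / 40) + rng.normal(0, 0.02, (80, 60))
veins = 0.45 + 0.06 * np.sin(wavelengths / 40) + rng.normal(0, 0.02, (80, 60))
spectra = np.vstack([arteries, veins])

pca = PCA(n_components=2)
scores = pca.fit_transform(spectra)                            # principal-component score plot axes
print("explained variance ratio:", pca.explained_variance_ratio_)
# scores[:80] vs. scores[80:] would be plotted to inspect artery/vein separation.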
Affiliation(s)
- Ching-Yu Wang: Department of Ophthalmology, Dalin Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Chiayi 62247, Taiwan
- Arvind Mukundan: Department of Mechanical Engineering, National Chung Cheng University, Chiayi 62102, Taiwan
- Yu-Sin Liu: Department of Mechanical Engineering, National Chung Cheng University, Chiayi 62102, Taiwan
- Yu-Ming Tsao: Department of Mechanical Engineering, National Chung Cheng University, Chiayi 62102, Taiwan
- Fen-Chi Lin: Department of Ophthalmology, Kaohsiung Armed Forces General Hospital, Kaohsiung 80284, Taiwan
- Wen-Shuang Fan: Department of Ophthalmology, Dalin Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Chiayi 62247, Taiwan
- Hsiang-Chen Wang: Department of Mechanical Engineering, National Chung Cheng University, Chiayi 62102, Taiwan; Director of Technology Development, Hitspectra Intelligent Technology Co., Ltd., Kaohsiung 80661, Taiwan
5
Artificial intelligence using deep learning to predict the anatomical outcome of rhegmatogenous retinal detachment surgery: a pilot study. Graefes Arch Clin Exp Ophthalmol 2023; 261:715-721. [PMID: 36303063] [DOI: 10.1007/s00417-022-05884-3]
Abstract
PURPOSE To develop and evaluate an automated deep learning model to predict the anatomical outcome of rhegmatogenous retinal detachment (RRD) surgery. METHODS Six thousand six hundred and sixty-one digital images of RRD treated by vitrectomy and internal tamponade were collected from the British and Eire Association of Vitreoretinal Surgeons database. Each image was classified as a primary surgical success or a primary surgical failure. The synthetic minority over-sampling technique was used to address class imbalance. We adopted the state-of-the-art deep convolutional neural network architecture Inception v3 to train, validate, and test deep learning models to predict the anatomical outcome of RRD surgery. The area under the curve (AUC), sensitivity, and specificity for predicting the outcome of RRD surgery were calculated for the best predictive deep learning model. RESULTS The deep learning model was able to predict the anatomical outcome of RRD surgery with an AUC of 0.94, a sensitivity of 73.3% and a specificity of 96%. CONCLUSION A deep learning model is capable of accurately predicting the anatomical outcome of RRD surgery. This fully automated model has potential application in the surgical care of patients with RRD.
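An illustrative sketch, not the authors' code, of an Inception v3 backbone adapted for this kind of binary outcome prediction and tracked with an AUC metric. The input size, head layers and training settings are assumptions; class-imbalance handling (e.g. SMOTE on extracted features, or class weights) is only indicated in a comment.

# Illustrative transfer-learning setup with Inception v3.
import tensorflow as tf

base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3))
base.trainable = False                                  # freeze the pretrained backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),     # P(primary surgical success)
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
# model.fit(...) would follow; class imbalance could be addressed with
# oversampling (e.g. SMOTE on extracted features) or with class weights.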
6
Celard P, Iglesias EL, Sorribes-Fdez JM, Romero R, Vieira AS, Borrajo L. A survey on deep learning applied to medical images: from simple artificial neural networks to generative models. Neural Comput Appl 2022; 35:2291-2323. [PMID: 36373133] [PMCID: PMC9638354] [DOI: 10.1007/s00521-022-07953-4]
Abstract
Deep learning techniques, in particular generative models, have taken on great importance in medical image analysis. This paper surveys fundamental deep learning concepts related to medical image generation. It provides concise overviews of recent studies that apply state-of-the-art models to medical images of different injured or diseased body areas and organs (e.g., brain tumors and COVID-19 lung pneumonia). The motivation for this study is to offer a comprehensive overview of artificial neural networks (NNs) and deep generative models in medical imaging, so that more groups and authors who are not familiar with deep learning consider its use in medical work. We review the use of generative models, such as generative adversarial networks and variational autoencoders, as techniques to achieve semantic segmentation, data augmentation, and better classification algorithms, among other purposes. In addition, a collection of widely used public medical datasets containing magnetic resonance (MR) images, computed tomography (CT) scans, and common pictures is presented. Finally, we summarize the current state of generative models in medical imaging, including key features, current challenges, and future research paths.
Affiliation(s)
- P. Celard, E. L. Iglesias, J. M. Sorribes-Fdez, R. Romero, A. Seara Vieira, L. Borrajo: Computer Science Department, Universidade de Vigo, Escuela Superior de Ingeniería Informática, Campus Universitario As Lagoas, 32004 Ourense, Spain; CINBIO - Biomedical Research Centre, Universidade de Vigo, Campus Universitario Lagoas-Marcosende, 36310 Vigo, Spain; SING Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), SERGAS-UVIGO, Vigo, Spain
7
Biswas S, Khan MIA, Hossain MT, Biswas A, Nakai T, Rohdin J. Which Color Channel Is Better for Diagnosing Retinal Diseases Automatically in Color Fundus Photographs? Life (Basel) 2022; 12:973. [PMID: 35888063] [PMCID: PMC9321111] [DOI: 10.3390/life12070973]
Abstract
Color fundus photographs are the most common type of image used for automatic diagnosis of retinal diseases and abnormalities. Like all color photographs, these images contain information about three primary colors, i.e., red, green, and blue, in three separate color channels. This work aims to understand the impact of each channel on the automatic diagnosis of retinal diseases and abnormalities. To this end, existing works are surveyed extensively to explore which color channel is used most commonly for automatically detecting four leading causes of blindness and one retinal abnormality, along with segmenting three retinal landmarks. From this survey, it is clear that all channels together are typically used for neural network-based systems, whereas for non-neural-network-based systems the green channel is most commonly used. However, no conclusion can be drawn from previous works regarding the importance of the different channels. Therefore, systematic experiments are conducted to analyse this: a well-known U-shaped deep neural network (U-Net) is used to investigate which color channel is best for segmenting one retinal abnormality and three retinal landmarks.
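A minimal sketch of the channel-splitting step implied above, assuming an RGB fundus photograph on disk: each channel is isolated and given a trailing channel axis so it can be fed on its own to a segmentation network such as U-Net. The file name is a placeholder.

# Illustrative per-channel preparation of a fundus photograph.
import numpy as np
from skimage import io, img_as_float

rgb = img_as_float(io.imread("fundus.png"))            # hypothetical RGB fundus image
red, green, blue = rgb[..., 0], rgb[..., 1], rgb[..., 2]

# Shape (H, W, 1) single-channel inputs; the green channel is the usual choice
# for non-neural-network pipelines because vessel contrast is highest there.
inputs = {name: ch[..., np.newaxis]
          for name, ch in zip(("red", "green", "blue"), (red, green, blue))}
print({k: v.shape for k, v in inputs.items()})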
Affiliation(s)
- Sangeeta Biswas: Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Md. Iqbal Aziz Khan: Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Md. Tanvir Hossain: Faculty of Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Angkan Biswas: CAPM Company Limited, Bonani, Dhaka 1213, Bangladesh
- Takayoshi Nakai: Faculty of Engineering, Shizuoka University, Hamamatsu 432-8561, Japan
- Johan Rohdin: Faculty of Information Technology, Brno University of Technology, 61200 Brno, Czech Republic
8
Surendiran J, Theetchenya S, Benson Mansingh PM, Sekar G, Dhipa M, Yuvaraj N, Arulkarthick VJ, Suresh C, Sriram A, Srihari K, Alene A. Segmentation of Optic Disc and Cup Using Modified Recurrent Neural Network. Biomed Res Int 2022; 2022:6799184. [PMID: 35547359] [PMCID: PMC9085314] [DOI: 10.1155/2022/6799184]
Abstract
Glaucoma is one of the leading causes of vision loss, in which patients tend to lose vision quickly. Examination of the cup-to-disc ratio is considered essential in diagnosing glaucoma, so segmentation of the optic disc and cup is useful for computing this ratio. In this paper, we develop an extraction and segmentation method for the optic disc and cup from an input eye image using a modified recurrent neural network (mRNN). The mRNN combines a recurrent neural network (RNN) with a fully convolutional network (FCN) to exploit intra- and interslice contexts. The FCN extracts content from the input image by constructing a feature map for the intra- and interslice contexts to capture the relevant information, while the RNN concentrates more on the interslice context. A simulation was conducted to test the efficacy of the model, which integrates contextual information for optimal segmentation of the optic cup and disc. The results show that the proposed mRNN improves segmentation over other deep learning models evaluated on datasets such as DRIVE, STARE, MESSIDOR, ORIGA, and DIARETDB.
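A hedged sketch of the downstream measurement that motivates this segmentation: computing a vertical cup-to-disc ratio from binary optic-disc and optic-cup masks. The function and toy masks below are our own illustration, not part of the cited method.

# Illustrative vertical cup-to-disc ratio from two binary masks.
import numpy as np

def vertical_cdr(disc_mask: np.ndarray, cup_mask: np.ndarray) -> float:
    """Vertical cup-to-disc ratio from two boolean masks of the same shape."""
    disc_rows = np.where(disc_mask.any(axis=1))[0]
    cup_rows = np.where(cup_mask.any(axis=1))[0]
    if disc_rows.size == 0 or cup_rows.size == 0:
        raise ValueError("empty mask")
    disc_height = disc_rows[-1] - disc_rows[0] + 1
    cup_height = cup_rows[-1] - cup_rows[0] + 1
    return cup_height / disc_height

# Toy check: a 10-row disc containing a 4-row cup gives CDR = 0.4.
disc = np.zeros((100, 100), bool); disc[40:50, 40:60] = True
cup = np.zeros((100, 100), bool); cup[43:47, 45:55] = True
print(vertical_cdr(disc, cup))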
Affiliation(s)
- J. Surendiran: Department of Electronics and Communication Engineering, HKBK College of Engineering, India
- S. Theetchenya: Department of Computer Science and Engineering, Sona College of Technology, India
- P. M. Benson Mansingh: Department of Electronics and Communication Engineering, Sri Ramakrishna Institute of Technology, India
- G. Sekar: Department of Electronics and Communication Engineering, Sri Ramakrishna Institute of Technology, India
- M. Dhipa: Department of Electronics and Communication Engineering, Erode Sengunthar Engineering College, India
- N. Yuvaraj: Research and Development, ICT Academy, IIT Madras Research Park, India
- V. J. Arulkarthick: Department of Electronics and Communication Engineering, Karpagam Institute of Technology, Coimbatore 641105, India
- C. Suresh: Department of Computer Science Engineering, Sri Ranganathar Institute of Engineering and Technology, Coimbatore, India
- Arram Sriram: Department of Information Technology, Anurag University, Hyderabad, India
- K. Srihari: Department of Computer Science and Engineering, SNS College of Technology, India
- Assefa Alene: Department of Chemical Engineering, College of Biological and Chemical Engineering, Addis Ababa Science and Technology University, Ethiopia
9
Mahmood MT, Lee IH. Optic Disc Localization in Fundus Images through Accumulated Directional and Radial Blur Analysis. Comput Med Imaging Graph 2022; 98:102058. [DOI: 10.1016/j.compmedimag.2022.102058]
10
Bunod R, Augstburger E, Brasnu E, Labbe A, Baudouin C. [Artificial intelligence and glaucoma: A literature review]. J Fr Ophtalmol 2022; 45:216-232. [PMID: 34991909] [DOI: 10.1016/j.jfo.2021.11.002]
Abstract
In recent years, research in artificial intelligence (AI) has experienced an unprecedented surge in the field of ophthalmology, in particular glaucoma. The diagnosis and follow-up of glaucoma is complex and relies on a body of clinical evidence and ancillary tests. This large amount of information from structural and functional testing of the optic nerve and macula makes glaucoma a particularly appropriate field for the application of AI. In this paper, we will review work using AI in the field of glaucoma, whether for screening, diagnosis or detection of progression. Many AI strategies have shown promising results for glaucoma detection using fundus photography, optical coherence tomography, or automated perimetry. The combination of these imaging modalities increases the performance of AI algorithms, with results comparable to those of humans. We will discuss potential applications as well as obstacles and limitations to the deployment and validation of such models. While there is no doubt that AI has the potential to revolutionize glaucoma management and screening, research in the coming years will need to address unavoidable questions regarding the clinical significance of such results and the explicability of the predictions.
Affiliation(s)
- R Bunod: Service d'ophtalmologie 3, IHU FOReSIGHT, centre hospitalier national des Quinze-Vingts, 28, rue de Charenton, 75012 Paris, France
- E Augstburger: Service d'ophtalmologie 3, IHU FOReSIGHT, centre hospitalier national des Quinze-Vingts, 28, rue de Charenton, 75012 Paris, France
- E Brasnu: Service d'ophtalmologie 3, IHU FOReSIGHT, centre hospitalier national des Quinze-Vingts, 28, rue de Charenton, 75012 Paris, France; CHNO des Quinze-Vingts, IHU FOReSIGHT, INSERM-DGOS CIC 1423, 17, rue Moreau, 75012 Paris, France; Sorbonne universités, INSERM, CNRS, institut de la Vision, 17, rue Moreau, 75012 Paris, France
- A Labbe: Service d'ophtalmologie 3, IHU FOReSIGHT, centre hospitalier national des Quinze-Vingts, 28, rue de Charenton, 75012 Paris, France; CHNO des Quinze-Vingts, IHU FOReSIGHT, INSERM-DGOS CIC 1423, 17, rue Moreau, 75012 Paris, France; Sorbonne universités, INSERM, CNRS, institut de la Vision, 17, rue Moreau, 75012 Paris, France; Service d'ophtalmologie, hôpital Ambroise-Paré, AP-HP, université de Paris Saclay, 9, avenue Charles-de-Gaulle, 92100 Boulogne-Billancourt, France
- C Baudouin: Service d'ophtalmologie 3, IHU FOReSIGHT, centre hospitalier national des Quinze-Vingts, 28, rue de Charenton, 75012 Paris, France; CHNO des Quinze-Vingts, IHU FOReSIGHT, INSERM-DGOS CIC 1423, 17, rue Moreau, 75012 Paris, France; Sorbonne universités, INSERM, CNRS, institut de la Vision, 17, rue Moreau, 75012 Paris, France; Service d'ophtalmologie, hôpital Ambroise-Paré, AP-HP, université de Paris Saclay, 9, avenue Charles-de-Gaulle, 92100 Boulogne-Billancourt, France
11
Feng Z, Wang G, Xia H, Li M, Liang G, Dong T, Xiao P, Yuan J. Macular Vascular Geometry Changes With Sex and Age in Healthy Subjects: A Fundus Photography Study. Front Med (Lausanne) 2021; 8:778346. [PMID: 34977079] [PMCID: PMC8714757] [DOI: 10.3389/fmed.2021.778346]
Abstract
Purpose: To characterize the sex- and age-related alterations of macular vascular geometry in a population of healthy eyes using fundus photography. Methods: A cross-sectional study was conducted with 610 eyes from 305 healthy subjects (136 men, 169 women) who underwent fundus photography and were divided into four age groups (G1, age ≤ 25 years; G2, age 26–35 years; G3, age 36–45 years; G4, age ≥ 46 years). A self-developed automated retinal vasculature analysis system allowed segmentation and separate multiparametric quantification of the macular vascular network according to the Early Treatment Diabetic Retinopathy Study (ETDRS). Vessel fractal dimension (Df), vessel area rate (VAR), average vessel diameter (Dm), and vessel tortuosity (τn) were acquired and compared between sex and age groups. Results: There was no significant difference between the mean ages of male and female subjects (32.706 ± 10.372 and 33.494 ± 10.620 years, respectively, p > 0.05) or between the mean ages of both sexes in each age group (p > 0.05). The Df, VAR, and Dm of the inner ring, the Df of the outer ring, and the Df and VAR of the whole macula were significantly greater in men than women (p < 0.001, p < 0.001, p < 0.05, respectively). There was no significant difference in τn between males and females (p > 0.05). The Df, VAR, and Dm of the whole macula and of the inner and outer rings were negatively associated with age (p < 0.001), whereas τn showed no significant association with age (p > 0.05). Comparison between age groups showed that Df started to decrease from G2 compared with G1 in the inner ring (p < 0.05), and Df, VAR, and Dm all decreased from G3 compared with the younger groups in the whole macula and the inner and outer rings (p < 0.05). Conclusion: In healthy subjects, macular vascular geometric parameters obtained from fundus photography showed that Df, VAR, and Dm are related to sex and age, while τn is not. Baseline values of macular vascular geometry were also acquired for both sexes and all age groups.
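An illustrative sketch, under our own assumptions, of how one of the listed parameters, the vessel fractal dimension Df, can be estimated from a binary vessel mask by box counting. The box sizes and toy mask are placeholders, not the study's implementation.

# Illustrative box-counting estimate of a fractal dimension.
import numpy as np

def box_counting_dimension(mask: np.ndarray, sizes=(2, 4, 8, 16, 32, 64)) -> float:
    """Estimate Df as the slope of log(box count) vs. log(1/box size)."""
    counts = []
    for s in sizes:
        # Trim so the image tiles exactly into s x s boxes, then count boxes
        # containing at least one vessel pixel.
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        boxes = mask[:h, :w].reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a filled square behaves like a 2-D object (Df close to 2).
square = np.zeros((256, 256), bool); square[64:192, 64:192] = True
print(round(box_counting_dimension(square), 2))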
Affiliation(s)
- Ziqing Feng, Gengyuan Wang, Meng Li, Peng Xiao, Jin Yuan: State Key Laboratory of Ophthalmology, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Honghui Xia, Guoxia Liang, Tingting Dong: Department of Ophthalmology, Zhaoqing Gaoyao People's Hospital, Zhaoqing, China
- Correspondence: Peng Xiao
12
Al-Mukhtar M, Morad AH, Albadri M, Islam MDS. Weakly Supervised Sensitive Heatmap framework to classify and localize diabetic retinopathy lesions. Sci Rep 2021; 11:23631. [PMID: 34880311] [PMCID: PMC8655092] [DOI: 10.1038/s41598-021-02834-7]
Abstract
Vision loss occurs in the severe stages of diabetic retinopathy (DR), so an automatic detection method that diagnoses DR at an earlier phase may help medical doctors make better decisions. DR is considered one of the main risks leading to blindness. Computer-aided diagnosis systems play an essential role in detecting features in fundus images, which may include blood vessels, exudates, microaneurysms, hemorrhages, and neovascularization. In this paper, our model combines automatic classification of diabetic retinopathy with lesion localization based on weakly supervised learning. The model has four stages. In stage one, various preprocessing techniques are applied to smooth the data set. In stage two, the network segments the optic disk to eliminate false exudate predictions, because exudates have pixel colors similar to the optic disk. In stage three, the network is trained on labeled data to classify each image. Finally, the layers of the convolutional neural network are adapted and used to localize the impact of DR on the patient's eye. The framework thus combines two essential concepts: the classification problem relies on supervised learning, while localization is obtained by a weakly supervised method. An additional layer, the weakly supervised sensitive heatmap (WSSH), detects the region of interest of the lesion at a test accuracy of 98.65%, compared with 0.954 for a Class Activation Map-based weakly supervised approach. The main purpose is to learn a representation that captures the central localization of discriminative features in a retina image. The CNN-WSSH model is able to highlight decisive features in a single forward pass for the best detection of lesions.
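A hedged sketch of the class-activation-map (CAM) style of weakly supervised localization that the abstract compares against: convolutional feature maps are weighted by the classifier's final-layer weights to form a lesion heatmap. The backbone choice, layer names and shapes are assumptions for illustration, not the authors' WSSH layer.

# Illustrative CAM-style heatmap on top of an arbitrary backbone.
import tensorflow as tf

inputs = tf.keras.Input((224, 224, 3))
backbone = tf.keras.applications.MobileNetV2(
    include_top=False, weights=None, input_shape=(224, 224, 3))
feats = backbone(inputs)                                   # (N, 7, 7, 1280) feature maps
gap = tf.keras.layers.GlobalAveragePooling2D()(feats)
probs = tf.keras.layers.Dense(1, activation="sigmoid", name="clf")(gap)

model = tf.keras.Model(inputs, probs)                      # supervised classifier
feat_model = tf.keras.Model(inputs, feats)                 # exposes the conv feature maps

def class_activation_map(img_batch):
    """Lesion heatmap = conv feature maps weighted by the classifier weights."""
    f = feat_model(img_batch)                              # (N, h, w, C)
    w = model.get_layer("clf").get_weights()[0][:, 0]      # (C,)
    cam = tf.tensordot(f, tf.constant(w), axes=[[3], [0]]) # (N, h, w)
    cam = tf.nn.relu(cam)
    return cam / (tf.reduce_max(cam) + 1e-8)

print(class_activation_map(tf.zeros((1, 224, 224, 3))).shape)   # (1, 7, 7)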
Affiliation(s)
- M D Samiul Islam: Department of Computing Science, University of Alberta, Edmonton, Canada
13
Zhang J, Zhang Y, Qiu H, Xie W, Yao Z, Yuan H, Jia Q, Wang T, Shi Y, Huang M, Zhuang J, Xu X. Pyramid-Net: Intra-layer Pyramid-Scale Feature Aggregation Network for Retinal Vessel Segmentation. Front Med (Lausanne) 2021; 8:761050. [PMID: 34950679] [PMCID: PMC8688400] [DOI: 10.3389/fmed.2021.761050]
Abstract
Retinal vessel segmentation plays an important role in the diagnosis of eye-related diseases and biomarker discovery. Existing works perform multi-scale feature aggregation in an inter-layer manner (inter-layer feature aggregation). However, such an approach only fuses features at either a lower scale or a higher scale, which may limit segmentation performance, especially on thin vessels. This observation motivates us to fuse multi-scale features within each layer (intra-layer feature aggregation) to mitigate the problem. Therefore, in this paper, we propose Pyramid-Net for accurate retinal vessel segmentation, which features intra-layer pyramid-scale aggregation blocks (IPABs). At each layer, an IPAB generates two associated branches at a higher scale and a lower scale, respectively, and these two branches, together with the main branch at the current scale, operate in a pyramid-scale manner. Three further enhancements, pyramid input enhancement, deep pyramid supervision, and pyramid skip connections, are proposed to boost performance. We have evaluated Pyramid-Net on three public retinal fundus photography datasets (DRIVE, STARE, and CHASE-DB1). The experimental results show that Pyramid-Net effectively improves segmentation performance, especially on thin vessels, and outperforms current state-of-the-art methods on all three datasets. In addition, our method is more efficient than existing methods, with a large reduction in computational cost. We have released the source code at https://github.com/JerRuy/Pyramid-Net.
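A rough sketch, under our own assumptions, of what an intra-layer pyramid-scale aggregation block could look like: the main branch at the current scale is fused with a downsampled and an upsampled branch inside a single layer. This is illustrative only; the authors' released implementation is linked above.

# Illustrative intra-layer pyramid-scale aggregation block.
import tensorflow as tf
from tensorflow.keras import layers

def pyramid_scale_block(x, filters):
    main = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

    low = layers.AveragePooling2D(2)(x)                         # lower-scale branch
    low = layers.Conv2D(filters, 3, padding="same", activation="relu")(low)
    low = layers.UpSampling2D(2, interpolation="bilinear")(low)

    high = layers.UpSampling2D(2, interpolation="bilinear")(x)  # higher-scale branch
    high = layers.Conv2D(filters, 3, padding="same", activation="relu")(high)
    high = layers.AveragePooling2D(2)(high)

    return layers.Concatenate()([main, low, high])              # fuse the three scales

inp = tf.keras.Input((64, 64, 1))
out = layers.Conv2D(1, 1, activation="sigmoid")(pyramid_scale_block(inp, 16))
model = tf.keras.Model(inp, out)
model.summary()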
Affiliation(s)
- Jiawei Zhang: Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital, Guangdong Cardiovascular Institute, Guangdong Academy of Medical Sciences, Guangzhou, China; Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University, Shanghai, China; Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN, United States; Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Wenzhou, China
- Yanchun Zhang: Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Wenzhou, China; Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou, China; College of Engineering and Science, Victoria University, Melbourne, VIC, Australia
- Hailong Qiu, Wen Xie, Zeyang Yao, Haiyun Yuan, Qianjun Jia, Meiping Huang, Jian Zhuang, Xiaowei Xu: Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital, Guangdong Cardiovascular Institute, Guangdong Academy of Medical Sciences, Guangzhou, China
- Tianchen Wang, Yiyu Shi: Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN, United States
14
A Hybrid Method to Enhance Thick and Thin Vessels for Blood Vessel Segmentation. Diagnostics (Basel) 2021; 11:2017. [PMID: 34829365] [PMCID: PMC8621384] [DOI: 10.3390/diagnostics11112017]
Abstract
Retinal blood vessels have been shown to provide evidence of tortuosity, branching angles, or changes in diameter resulting from ophthalmic disease. Although many enhancement filters are extensively utilized, the Jerman filter responds quite effectively at vessels, edges, and bifurcations and improves the visualization of structures. The curvelet transform, in contrast, is specifically designed to associate scale with orientation and can be used to recover structures from noisy data by curvelet shrinkage. This paper describes a method to further improve the performance of the curvelet transform: a distinctive fusion of the curvelet transform and the Jerman filter is presented for retinal blood vessel segmentation, with Mean-C thresholding employed for the segmentation step. The suggested method achieves average accuracies of 0.9600 and 0.9559 on DRIVE and CHASE_DB1, respectively. Simulation results establish better performance and faster implementation of the suggested scheme in comparison with similar approaches in the literature.
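A hedged sketch of the Mean-C thresholding step applied to a vesselness-enhanced image. scikit-image does not ship a Jerman filter, so the Frangi vesselness filter stands in for the enhancement stage, and the curvelet step is omitted; the file path, block size and constant C are placeholder values.

# Illustrative Mean-C (local mean minus constant) thresholding.
from skimage import io, img_as_float
from skimage.filters import frangi, threshold_local

green = img_as_float(io.imread("fundus.png"))[..., 1]    # hypothetical fundus image
enhanced = frangi(green)                                  # vesselness enhancement (stand-in)

# Each pixel is compared against its local neighbourhood mean minus a constant C.
block_size, C = 51, 0.002
thresh = threshold_local(enhanced, block_size, method="mean", offset=C)
vessel_mask = enhanced > thresh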
15
Shi Z, Wang T, Huang Z, Xie F, Liu Z, Wang B, Xu J. MD-Net: A multi-scale dense network for retinal vessel segmentation. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102977]
16
Simultaneous segmentation and classification of the retinal arteries and veins from color fundus images. Artif Intell Med 2021; 118:102116. [PMID: 34412839] [DOI: 10.1016/j.artmed.2021.102116]
Abstract
BACKGROUND AND OBJECTIVES The study of the retinal vasculature represents a fundamental stage in the screening and diagnosis of many high-incidence diseases, both systemic and ophthalmic. A complete retinal vascular analysis requires the segmentation of the vascular tree along with the classification of the blood vessels into arteries and veins. Early automatic methods approached these complementary segmentation and classification tasks in two sequential stages. Currently, however, the two tasks are approached as a joint semantic segmentation, because the classification results highly depend on the effectiveness of the vessel segmentation. In that regard, we propose a novel approach for the simultaneous segmentation and classification of the retinal arteries and veins from eye fundus images. METHODS We propose a novel method that, unlike previous approaches, and thanks to a novel loss, decomposes the joint task into three segmentation problems targeting arteries, veins and the whole vascular tree. This configuration allows vessel crossings to be handled intuitively and directly provides accurate segmentation masks of the different target vascular trees. RESULTS The ablation study on the public Retinal Images vessel Tree Extraction (RITE) dataset demonstrates that the proposed method provides satisfactory performance, particularly in the segmentation of the different structures. Furthermore, the comparison with the state of the art shows that our method achieves highly competitive results in artery/vein classification while significantly improving vascular segmentation. CONCLUSIONS The proposed multi-segmentation method detects more vessels and better segments the different structures, while achieving competitive classification performance; in these terms, our approach outperforms various reference works. Moreover, in contrast with previous approaches, the proposed method directly detects vessel crossings and preserves the continuity of both arteries and veins at these complex locations.
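A sketch, as our own illustration rather than the paper's released code, of the loss decomposition the abstract describes: three binary segmentation targets (arteries, veins, and the whole vascular tree), each with its own binary cross-entropy term.

# Illustrative three-target segmentation loss.
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy()

def three_target_loss(y_true, y_pred):
    """y_true / y_pred: (N, H, W, 3) maps ordered [artery, vein, whole vessel tree]."""
    artery = bce(y_true[..., 0:1], y_pred[..., 0:1])
    vein = bce(y_true[..., 1:2], y_pred[..., 1:2])
    tree = bce(y_true[..., 2:3], y_pred[..., 2:3])
    return artery + vein + tree    # vessel crossings may be positive in both artery and vein maps

# Any encoder-decoder with a 3-channel sigmoid output could be compiled with it:
# model.compile(optimizer="adam", loss=three_target_loss)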
17
Ashraf MN, Hussain M, Habib Z. Review of Various Tasks Performed in the Preprocessing Phase of a Diabetic Retinopathy Diagnosis System. Curr Med Imaging 2021; 16:397-426. [PMID: 32410541] [DOI: 10.2174/1573405615666190219102427]
Abstract
Diabetic retinopathy (DR) is a major cause of blindness in diabetic patients. The growing population of diabetic patients and the difficulty of diagnosing DR at an early stage are limiting the screening capabilities of manual diagnosis by ophthalmologists. Color fundus images are widely used to detect DR lesions due to their comfortable, cost-effective and non-invasive acquisition procedure. Computer-aided diagnosis (CAD) of DR based on these images can assist ophthalmologists and help save many sight-years for diabetic patients. In a CAD system, preprocessing is a crucial phase that significantly affects performance. Commonly used preprocessing operations are enhancement of poor contrast, balancing of the illumination imbalance due to the spherical shape of the retina, noise reduction, image resizing to support multi-resolution, color normalization, extraction of the field of view (FOV), etc. The presence of blood vessels and the optic disc also makes lesion detection more challenging, because these two structures exhibit attributes similar to those of DR lesions. Preprocessing operations can be broadly divided into three categories: 1) fixing the native defects, 2) segmentation of blood vessels, and 3) localization and segmentation of the optic disc. This paper presents a review of state-of-the-art preprocessing techniques related to these three categories of operations, highlighting their significant aspects and limitations. The survey concludes with the most effective preprocessing methods, which have been shown to improve the accuracy and efficiency of CAD systems.
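A minimal sketch of two of the preprocessing operations listed above, contrast enhancement with CLAHE and rough illumination flattening, assuming an RGB fundus image on disk; the blur sigma and clip limit are tunable assumptions.

# Illustrative illumination correction and CLAHE.
from scipy.ndimage import gaussian_filter
from skimage import io, img_as_float
from skimage.exposure import equalize_adapthist

green = img_as_float(io.imread("fundus.png"))[..., 1]    # hypothetical fundus image

# Illumination correction: estimate the slowly varying background and remove it.
background = gaussian_filter(green, sigma=40)
flattened = green - background + background.mean()

# CLAHE for local contrast enhancement.
enhanced = equalize_adapthist(flattened.clip(0, 1), clip_limit=0.02)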
Affiliation(s)
- Muhammad Hussain: Department of Computer Science, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
- Zulfiqar Habib: Department of Computer Science, COMSATS University Islamabad, Lahore, Pakistan
18
Lian S, Li L, Lian G, Xiao X, Luo Z, Li S. A Global and Local Enhanced Residual U-Net for Accurate Retinal Vessel Segmentation. IEEE/ACM Trans Comput Biol Bioinform 2021; 18:852-862. [PMID: 31095493] [DOI: 10.1109/tcbb.2019.2917188]
Abstract
Retinal vessel segmentation is a critical procedure towards the accurate visualization, diagnosis, early treatment, and surgery planning of ocular diseases. Recent deep learning-based approaches have achieved impressive performance in retinal vessel segmentation. However, they usually apply global image pre-processing and take the whole retinal images as input during network training, which have two drawbacks for accurate retinal vessel segmentation. First, these methods lack the utilization of the local patch information. Second, they overlook the geometric constraint that retina only occurs in a specific area within the whole image or the extracted patch. As a consequence, these global-based methods suffer in handling details, such as recognizing the small thin vessels, discriminating the optic disk, etc. To address these drawbacks, this study proposes a Global and Local enhanced residual U-nEt (GLUE) for accurate retinal vessel segmentation, which benefits from both the globally and locally enhanced information inside the retinal region. Experimental results on two benchmark datasets demonstrate the effectiveness of the proposed method, which consistently improves the segmentation accuracy over a conventional U-Net and achieves competitive performance compared to the state-of-the-art.
19
Lessons learnt from harnessing deep learning for real-world clinical applications in ophthalmology: detecting diabetic retinopathy from retinal fundus photographs. Artif Intell Med 2021. [DOI: 10.1016/b978-0-12-821259-2.00013-2]
20
Mahmudi T, Kafieh R, Rabbani H, Mehri A, Akhlaghi MR. Evaluation of Asymmetry in Right and Left Eyes of Normal Individuals Using Extracted Features from Optical Coherence Tomography and Fundus Images. J Med Signals Sens 2021; 11:12-23. [PMID: 34026586] [PMCID: PMC8043121] [DOI: 10.4103/jmss.jmss_67_19]
Abstract
BACKGROUND Asymmetry analysis of retinal layers in the right and left eyes can be a valuable tool for early diagnosis of retinal diseases. To determine the limits of normal interocular asymmetry in the retinal layers around the macula, thickness measurements are obtained with optical coherence tomography (OCT). METHODS For this purpose, after segmentation of the intraretinal layers in three-dimensional OCT data and calculation of the midmacular point, the thickness map (TM) of each layer is obtained in 9 sectors in concentric circles around the macula. To compare corresponding sectors in the right and left eyes, the TMs of the left and right images are registered by alignment of the retinal raphe (i.e., the disk-fovea axis). Since the retinal raphe of macular OCTs is not calculable due to the limited region size, the TMs are registered by first aligning the corresponding retinal raphe of fundus images and then registering the OCTs to the aligned fundus images. To analyze the asymmetry in each retinal layer, the mean and standard deviation of thickness in 9 sectors of 11 layers are calculated in 50 normal individuals. RESULTS The results demonstrate that some sectors of retinal layers show significant asymmetry (P < 0.05) in the normal population. On this basis, tolerance limits for normal individuals are calculated. CONCLUSION This article shows that the normal population does not have identical retinal information in both eyes; without considering this reality, normal asymmetry in information gathered from both eyes might be interpreted as a retinal disorder.
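An illustrative sketch (with synthetic numbers, not the study's data) of the per-sector asymmetry analysis described above: paired right/left thickness values are compared with a paired t-test, and approximate normal tolerance limits are derived from the distribution of interocular differences.

# Illustrative interocular asymmetry test for one layer sector.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_subjects = 50
right = rng.normal(300, 12, n_subjects)           # sector thickness, right eyes (um)
left = right + rng.normal(1.5, 4, n_subjects)     # left eyes with a small systematic offset

diff = right - left
t_stat, p_value = stats.ttest_rel(right, left)

# 95% limits approximated as mean +/- 1.96 SD of the interocular differences.
lower = diff.mean() - 1.96 * diff.std(ddof=1)
upper = diff.mean() + 1.96 * diff.std(ddof=1)
print(f"p = {p_value:.3f}, tolerance limits = ({lower:.1f}, {upper:.1f}) um")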
Affiliation(s)
- Tahereh Mahmudi: Department of Medical Physics and Biomedical Engineering, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran; Medical Image and Signal Processing Research Center, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
- Raheleh Kafieh: Department of Medical Physics and Biomedical Engineering, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Hossein Rabbani: Department of Medical Physics and Biomedical Engineering, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Alireza Mehri: Department of Medical Physics and Biomedical Engineering, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Mohammad-Reza Akhlaghi: Department of Ophthalmology, School of Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
21
Xie R, Liu J, Cao R, Qiu CS, Duan J, Garibaldi J, Qiu G. End-to-End Fovea Localisation in Colour Fundus Images With a Hierarchical Deep Regression Network. IEEE Trans Med Imaging 2021; 40:116-128. [PMID: 32915729] [DOI: 10.1109/tmi.2020.3023254]
Abstract
Accurately locating the fovea is a prerequisite for developing computer aided diagnosis (CAD) of retinal diseases. In colour fundus images of the retina, the fovea is a fuzzy region lacking prominent visual features and this makes it difficult to directly locate the fovea. While traditional methods rely on explicitly extracting image features from the surrounding structures such as the optic disc and various vessels to infer the position of the fovea, deep learning based regression technique can implicitly model the relation between the fovea and other nearby anatomical structures to determine the location of the fovea in an end-to-end fashion. Although promising, using deep learning for fovea localisation also has many unsolved challenges. In this paper, we present a new end-to-end fovea localisation method based on a hierarchical coarse-to-fine deep regression neural network. The innovative features of the new method include a multi-scale feature fusion technique and a self-attention technique to exploit location, semantic, and contextual information in an integrated framework, a multi-field-of-view (multi-FOV) feature fusion technique for context-aware feature learning and a Gaussian-shift-cropping method for augmenting effective training data. We present extensive experimental results on two public databases and show that our new method achieved state-of-the-art performances. We also present a comprehensive ablation study and analysis to demonstrate the technical soundness and effectiveness of the overall framework and its various constituent components.
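A sketch of the Gaussian-shift-cropping augmentation as we read it from the abstract: training crops are taken around the annotated fovea with a random, Gaussian-distributed offset so that the target is not always centred. The crop size and sigma are assumptions, not the paper's values.

# Illustrative Gaussian-shift-cropping augmentation.
import numpy as np

def gaussian_shift_crop(image, fovea_xy, crop=256, sigma=32, rng=np.random.default_rng()):
    h, w = image.shape[:2]
    dx, dy = rng.normal(0, sigma, 2)
    cx = int(np.clip(fovea_xy[0] + dx, crop // 2, w - crop // 2))
    cy = int(np.clip(fovea_xy[1] + dy, crop // 2, h - crop // 2))
    patch = image[cy - crop // 2: cy + crop // 2, cx - crop // 2: cx + crop // 2]
    # Regression target: fovea position relative to the crop origin.
    target = (fovea_xy[0] - (cx - crop // 2), fovea_xy[1] - (cy - crop // 2))
    return patch, target

img = np.zeros((1024, 1536, 3), np.float32)        # placeholder fundus image
patch, target = gaussian_shift_crop(img, fovea_xy=(760, 500))
print(patch.shape, target)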
22
Wang D, Haytham A, Pottenburgh J, Saeedi O, Tao Y. Hard Attention Net for Automatic Retinal Vessel Segmentation. IEEE J Biomed Health Inform 2020; 24:3384-3396. [DOI: 10.1109/jbhi.2020.3002985]
23
Romero-Oraá R, García M, Oraá-Pérez J, López MI, Hornero R. A robust method for the automatic location of the optic disc and the fovea in fundus images. Comput Methods Programs Biomed 2020; 196:105599. [PMID: 32574904] [DOI: 10.1016/j.cmpb.2020.105599]
Abstract
BACKGROUND AND OBJECTIVE The location of the optic disc (OD) and the fovea is usually crucial in automatic screening systems for diabetic retinopathy. Previous methods aimed at their location often fail when these structures do not have the standard appearance. The purpose of this work is to propose novel, robust methods for the automatic detection of the OD and the fovea. METHODS The proposed method comprises a preprocessing stage, a method for retinal background extraction, a vasculature segmentation phase and the computation of various novel saliency maps. The main novelty of this work is the combination of the proposed saliency maps, which represent the spatial relationships between some structures of the retina and the visual appearance of the OD and fovea. Another contribution is the method to extract the retinal background, based on region-growing. RESULTS The proposed methods were evaluated over a proprietary database and three public databases: DRIVE, DiaretDB1 and Messidor. For the OD, we achieved 100% accuracy for all databases except Messidor (99.50%). As for the fovea location, we also reached 100% accuracy for all databases except Messidor (99.67%). CONCLUSIONS Our results suggest that the proposed methods are robust and effective to automatically detect the OD and the fovea. This way, they can be useful in automatic screening systems for diabetic retinopathy as well as other retinal diseases.
Affiliation(s)
- Roberto Romero-Oraá: Biomedical Engineering Group, E.T.S. Ingenieros de Telecomunicación, Universidad de Valladolid, Paseo Belén 15, Valladolid 47011, Spain; Centro de Investigación Biomédica en Red de Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Spain
- María García: Biomedical Engineering Group, E.T.S. Ingenieros de Telecomunicación, Universidad de Valladolid, Paseo Belén 15, Valladolid 47011, Spain; Centro de Investigación Biomédica en Red de Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Spain
- Javier Oraá-Pérez: Biomedical Engineering Group, E.T.S. Ingenieros de Telecomunicación, Universidad de Valladolid, Paseo Belén 15, Valladolid 47011, Spain
- María I López: Biomedical Engineering Group, E.T.S. Ingenieros de Telecomunicación, Universidad de Valladolid, Paseo Belén 15, Valladolid 47011, Spain; Centro de Investigación Biomédica en Red de Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Spain; Department of Ophthalmology, Hospital Clínico Universitario de Valladolid, Valladolid 47003, Spain; Instituto Universitario de Oftalmobiología Aplicada (IOBA), Universidad de Valladolid, Valladolid 47011, Spain
- Roberto Hornero: Biomedical Engineering Group, E.T.S. Ingenieros de Telecomunicación, Universidad de Valladolid, Paseo Belén 15, Valladolid 47011, Spain; Centro de Investigación Biomédica en Red de Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Spain; Instituto de Investigación en Matemáticas (IMUVA), Universidad de Valladolid, Valladolid 47011, Spain
Collapse
|
24
|
Mursch-Edlmayr AS, Ng WS, Diniz-Filho A, Sousa DC, Arnold L, Schlenker MB, Duenas-Angeles K, Keane PA, Crowston JG, Jayaram H. Artificial Intelligence Algorithms to Diagnose Glaucoma and Detect Glaucoma Progression: Translation to Clinical Practice. Transl Vis Sci Technol 2020; 9:55. [PMID: 33117612 PMCID: PMC7571273 DOI: 10.1167/tvst.9.2.55] [Citation(s) in RCA: 47] [Impact Index Per Article: 11.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2019] [Accepted: 09/18/2020] [Indexed: 12/11/2022] Open
Abstract
Purpose This concise review aims to explore the potential for the clinical implementation of artificial intelligence (AI) strategies for detecting glaucoma and monitoring glaucoma progression. Methods Nonsystematic literature review using the search combinations “Artificial Intelligence,” “Deep Learning,” “Machine Learning,” “Neural Networks,” “Bayesian Networks,” “Glaucoma Diagnosis,” and “Glaucoma Progression.” Information on sensitivity and specificity regarding glaucoma diagnosis and progression analysis as well as methodological details were extracted. Results Numerous AI strategies provide promising levels of specificity and sensitivity for structural (e.g. optical coherence tomography [OCT] imaging, fundus photography) and functional (visual field [VF] testing) test modalities used for the detection of glaucoma. Area under the receiver operating characteristic curve (AROC) values of >0.90 were achieved with every modality. Combining structural and functional inputs has been shown to further improve diagnostic ability. Regarding glaucoma progression, AI strategies can detect progression earlier than conventional methods, or potentially from a single VF test. Conclusions AI algorithms applied to fundus photographs for screening purposes may provide good results using a simple and widely accessible test. However, for patients who are likely to have glaucoma, more sophisticated methods should be used, including data from OCT and perimetry. Outputs may serve as an adjunct to assist clinical decision making, while also enhancing the efficiency, productivity, and quality of the delivery of glaucoma care. Patients with diagnosed glaucoma may benefit from future algorithms to evaluate their risk of progression. Challenges are yet to be overcome, including the external validity of AI strategies, a move from a “black box” toward “explainable AI,” and likely regulatory hurdles. However, it is clear that AI can enhance the role of specialist clinicians and will inevitably shape the future of the delivery of glaucoma care to the next generation. Translational Relevance The promising levels of diagnostic accuracy reported by AI strategies across the modalities used in clinical practice for glaucoma detection can pave the way for the development of reliable models appropriate for translation into clinical practice. Future incorporation of AI into healthcare models may help address the current limitations of access and timely management of patients with glaucoma across the world.
Collapse
Affiliation(s)
| | - Wai Siene Ng
- Cardiff Eye Unit, University Hospital of Wales, Cardiff, UK
| | - Alberto Diniz-Filho
- Department of Ophthalmology and Otorhinolaryngology, Federal University of Minas Gerais, Belo Horizonte, Brazil
| | - David C Sousa
- Department of Ophthalmology, Hospital de Santa Maria, Lisbon, Portugal
| | - Louis Arnold
- Department of Ophthalmology, University Hospital, Dijon, France
| | - Matthew B Schlenker
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Canada
| | - Karla Duenas-Angeles
- Department of Ophthalmology, Universidad Nacional Autónoma de Mexico, Mexico City, Mexico
| | - Pearse A Keane
- NIHR Biomedical Research Centre for Ophthalmology, UCL Institute of Ophthalmology & Moorfields Eye Hospital, London, UK
| | - Jonathan G Crowston
- Centre for Vision Research, Duke-NUS Medical School, Singapore; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
| | - Hari Jayaram
- NIHR Biomedical Research Centre for Ophthalmology, UCL Institute of Ophthalmology & Moorfields Eye Hospital, London, UK
| |
Collapse
|
25
|
Efficient and robust optic disc detection and fovea localization using region proposal network and cascaded network. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2020.101939] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
|
26
|
Li Q, Li S, Wu Y, Guo W, Qi S, Huang G, Chen S, Liu Z, Chen X. Orientation-independent Feature Matching (OIFM) for Multimodal Retinal Image Registration. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2020.101957] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
|
27
|
Simultaneous segmentation of the optic disc and fovea in retinal images using evolutionary algorithms. Neural Comput Appl 2020. [DOI: 10.1007/s00521-020-05060-w] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
|
28
|
Grey-Wolf-Based Wang's Demons for Retinal Image Registration. ENTROPY 2020; 22:e22060659. [PMID: 33286433 PMCID: PMC7517193 DOI: 10.3390/e22060659] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/07/2020] [Revised: 06/04/2020] [Accepted: 06/06/2020] [Indexed: 11/28/2022]
Abstract
Image registration has an imperative role in medical imaging. In this work, a grey-wolf optimizer (GWO)-based non-rigid demons registration is proposed to support the retinal image registration process. A comparative study of the proposed GWO-based demons registration framework with cuckoo search, firefly algorithm, and particle swarm optimization-based demons registration is conducted. In addition, a comparative analysis of different demons registration methods, such as Wang’s demons, Tang’s demons, and Thirion’s demons, optimized using the proposed GWO, is carried out. The results established the superiority of the GWO-based framework, which achieved a correlation of 0.9977 and faster processing than the other optimization algorithms. Moreover, GWO-based Wang’s demons achieved better accuracy than the Tang’s demons and Thirion’s demons frameworks, and the lowest registration error of 8.36 × 10−5.
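For readers unfamiliar with the grey-wolf optimizer used to tune the demons registration, the standard GWO position-update equations are sketched below on a generic objective function. The fitness function here is only a placeholder; in the paper it would score registration quality (e.g. the correlation between the warped and fixed images).

```python
import numpy as np

def gwo_minimize(fitness, dim, bounds, n_wolves=20, n_iter=100, seed=0):
    """Standard grey-wolf optimizer for a minimization problem.

    fitness : callable mapping a parameter vector to a scalar cost
    dim     : dimensionality of the search space
    bounds  : (low, high) tuple applied to every dimension
    """
    rng = np.random.default_rng(seed)
    low, high = bounds
    wolves = rng.uniform(low, high, size=(n_wolves, dim))

    for t in range(n_iter):
        costs = np.array([fitness(w) for w in wolves])
        order = np.argsort(costs)
        alpha, beta, delta = wolves[order[:3]]    # three best wolves lead the pack

        a = 2.0 - 2.0 * t / n_iter                # linearly decreasing coefficient
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - wolves[i])
                new_pos += leader - A * D
            wolves[i] = np.clip(new_pos / 3.0, low, high)

    costs = np.array([fitness(w) for w in wolves])
    return wolves[np.argmin(costs)], costs.min()

# Example: minimize a toy quadratic standing in for a registration cost.
best, best_cost = gwo_minimize(lambda x: float(np.sum((x - 1.5) ** 2)),
                               dim=4, bounds=(-5.0, 5.0))
```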
Collapse
|
29
|
Mou L, Chen L, Cheng J, Gu Z, Zhao Y, Liu J. Dense Dilated Network With Probability Regularized Walk for Vessel Detection. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:1392-1403. [PMID: 31675323 DOI: 10.1109/tmi.2019.2950051] [Citation(s) in RCA: 47] [Impact Index Per Article: 11.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
The detection of retinal vessels is of great importance in the diagnosis and treatment of many ocular diseases. Many methods have been proposed for vessel detection. However, most of the algorithms neglect the connectivity of the vessels, which plays an important role in diagnosis. In this paper, we propose a novel method for retinal vessel detection. The proposed method includes a dense dilated network to obtain an initial detection of the vessels and a probability regularized walk algorithm to address the fracture issue in the initial detection. The dense dilated network integrates newly proposed dense dilated feature extraction blocks into an encoder-decoder structure to extract and accumulate features at different scales. A multi-scale Dice loss function is adopted to train the network. To improve the connectivity of the segmented vessels, we also introduce a probability regularized walk algorithm to connect the broken vessels. The proposed method has been applied to three public data sets: DRIVE, STARE and CHASE_DB1. The results show that the proposed method outperforms the state-of-the-art methods in accuracy, sensitivity, specificity and area under the receiver operating characteristic curve.
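The multi-scale Dice loss mentioned above can be sketched as follows: the soft Dice loss is evaluated on the full-resolution prediction and on downsampled versions, then averaged. This is a minimal PyTorch sketch of the general idea; the number of scales and the equal weighting are assumptions, not the authors' exact settings.

```python
import torch
import torch.nn.functional as F

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for probability maps of shape (N, 1, H, W)."""
    inter = (pred * target).sum(dim=(1, 2, 3))
    union = pred.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return (1.0 - (2.0 * inter + eps) / (union + eps)).mean()

def multiscale_dice_loss(pred, target, scales=(1.0, 0.5, 0.25)):
    """Average the soft Dice loss over several resolutions (assumed scales)."""
    loss = 0.0
    for s in scales:
        if s == 1.0:
            p, t = pred, target
        else:
            p = F.interpolate(pred, scale_factor=s, mode='bilinear',
                              align_corners=False)
            t = F.interpolate(target, scale_factor=s, mode='bilinear',
                              align_corners=False)
        loss = loss + soft_dice_loss(p, t)
    return loss / len(scales)

# Example with random tensors standing in for network output and ground truth.
pred = torch.rand(2, 1, 128, 128)
gt = (torch.rand(2, 1, 128, 128) > 0.5).float()
print(multiscale_dice_loss(pred, gt).item())
```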
Collapse
|
30
|
Williamson TH. Artificial intelligence in diabetic retinopathy. Eye (Lond) 2020; 35:684. [PMID: 32291403 DOI: 10.1038/s41433-020-0855-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2020] [Revised: 03/19/2020] [Accepted: 03/20/2020] [Indexed: 01/21/2023] Open
|
31
|
Kaur S, Mann KS. Retinal Vessel Segmentation Using an Entropy-Based Optimization Algorithm. INTERNATIONAL JOURNAL OF HEALTHCARE INFORMATION SYSTEMS AND INFORMATICS 2020. [DOI: 10.4018/ijhisi.2020040105] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
This article presents an algorithm for the segmentation of retinal blood vessels for the detection of diabetic retinopathy. This disease occurs in patients whose diabetes has gone untreated for a long time and, because it affects the retina, can eventually lead to vision impairment. The proposed algorithm is a supervised blood vessel segmentation method in which the classification system is trained with features extracted from the images. The proposed system is implemented on images from the DRIVE, STARE and CHASE_DB1 databases. Segmentation is performed by forming clusters from the pattern features. The features are extracted using independent component analysis and classification is performed by support vector machines (SVM). Performance is reported in terms of accuracy, sensitivity, specificity, positive predictive value and false positive rate, and compared with particle swarm optimization (PSO), the firefly optimization algorithm (FA) and the lion optimization algorithm (LOA).
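A minimal sketch of this feature-extraction-plus-classification pipeline: local patches are decomposed with independent component analysis and the resulting features are fed to an SVM that labels each pixel as vessel or background. The patch size, number of components and kernel are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.svm import SVC

def extract_patches(gray, coords, size=9):
    """Return flattened size x size patches centred at the given pixels."""
    r = size // 2
    padded = np.pad(gray, r, mode='reflect')
    return np.array([padded[y:y + size, x:x + size].ravel()
                     for (y, x) in coords])

# Toy training data: pixel coordinates with vessel / background labels.
rng = np.random.default_rng(0)
gray = rng.random((128, 128))
coords = [(int(a), int(b)) for a, b in rng.integers(0, 128, size=(200, 2))]
labels = rng.integers(0, 2, size=200)          # placeholder ground truth

patches = extract_patches(gray, coords)

# ICA projects the raw patches onto statistically independent components.
ica = FastICA(n_components=10, random_state=0)
features = ica.fit_transform(patches)

# The SVM classifies each pixel from its ICA feature vector.
clf = SVC(kernel='rbf', C=1.0).fit(features, labels)
pred = clf.predict(ica.transform(extract_patches(gray, [(64, 64)])))
```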
Collapse
|
32
|
Adapa D, Joseph Raj AN, Alisetti SN, Zhuang Z, K. G, Naik G. A supervised blood vessel segmentation technique for digital Fundus images using Zernike Moment based features. PLoS One 2020; 15:e0229831. [PMID: 32142540 PMCID: PMC7059933 DOI: 10.1371/journal.pone.0229831] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2019] [Accepted: 02/16/2020] [Indexed: 11/18/2022] Open
Abstract
This paper proposes a new supervised method for blood vessel segmentation using Zernike moment-based shape descriptors. The method implements pixel-wise classification by computing an 11-D feature vector comprising both statistical (gray-level) features and shape-based (Zernike moment) features. The feature set also contains optimal Zernike moment coefficients, derived on the basis of the maximum differentiability between blood vessel and background pixels. Manually selected training points obtained from the training set of the DRIVE dataset, covering all possible manifestations, were used for training the ANN-based binary classifier. The method was evaluated on unknown test samples of the DRIVE and STARE databases and returned accuracies of 0.945 and 0.9486 respectively, outperforming other existing supervised learning methods. Further, the segmented outputs were able to cover thinner blood vessels better than previous methods, aiding in the early detection of pathologies.
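The 11-D feature vector described above combines gray-level statistics with Zernike moment magnitudes computed on a patch around each pixel. The sketch below uses the `mahotas` library's Zernike moments and scikit-learn's MLP as the ANN; the particular statistics, patch radius and number of moment coefficients kept are assumptions for illustration, not the authors' selected coefficients.

```python
import numpy as np
import mahotas
from sklearn.neural_network import MLPClassifier

def pixel_features(gray, y, x, radius=8, n_zernike=6):
    """Gray-level statistics plus a few Zernike moment magnitudes (11-D here)."""
    patch = gray[y - radius:y + radius + 1, x - radius:x + radius + 1]
    stats = [patch.mean(), patch.std(), patch.min(), patch.max(),
             float(gray[y, x])]
    zern = mahotas.features.zernike_moments(patch, radius, degree=8)
    return np.array(stats + list(zern[:n_zernike]))   # 5 + 6 = 11 features

rng = np.random.default_rng(1)
gray = rng.random((128, 128))
coords = rng.integers(16, 112, size=(300, 2))
labels = rng.integers(0, 2, size=300)                 # placeholder vessel labels

X = np.array([pixel_features(gray, y, x) for y, x in coords])
ann = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500,
                    random_state=0).fit(X, labels)
print(ann.predict(X[:5]))
```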
Collapse
Affiliation(s)
- Dharmateja Adapa
- Key Laboratory of Digital Signal and Image Processing of Guangdong Province, Department of Electronic Engineering, College of Engineering, Shantou University, Shantou, Guangdong, China
| | - Alex Noel Joseph Raj
- Key Laboratory of Digital Signal and Image Processing of Guangdong Province, Department of Electronic Engineering, College of Engineering, Shantou University, Shantou, Guangdong, China
| | - Sai Nikhil Alisetti
- Key Laboratory of Digital Signal and Image Processing of Guangdong Province, Department of Electronic Engineering, College of Engineering, Shantou University, Shantou, Guangdong, China
| | - Zhemin Zhuang
- Key Laboratory of Digital Signal and Image Processing of Guangdong Province, Department of Electronic Engineering, College of Engineering, Shantou University, Shantou, Guangdong, China
| | - Ganesan K.
- TIFAC-CORE, School of Electronics, Vellore Institute of Technology, Vellore, India
| | - Ganesh Naik
- MARCS Institute, Western Sydney University, Australia
| |
Collapse
|
33
|
Ruamviboonsuk P, Cheung CY, Zhang X, Raman R, Park SJ, Ting DSW. Artificial Intelligence in Ophthalmology: Evolutions in Asia. Asia Pac J Ophthalmol (Phila) 2020; 9:78-84. [PMID: 32349114 DOI: 10.1097/01.apo.0000656980.41190.bf] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022] Open
Abstract
Artificial intelligence (AI) has been studied in ophthalmology since digital information became available in ophthalmic care. The significant turning point was the availability of commercial digital color fundus photography in the late 1990s, which allowed digital screening for diabetic retinopathy (DR) to take off. Automated Retinal Disease Assessment software was then developed using machine learning to detect abnormal lesions in the fundus to screen for DR. The use of this version of AI was not generalized because the specificity of 45% was not high enough, although the sensitivity reached 90%. The recent breakthrough in machine learning is the advent of deep learning, which has accelerated performance to be on par with experts. The first 2 breakthrough studies on deep learning for screening DR were conducted in Asia. The first represented a collaboration of datasets between Asia and the United States for algorithm development, whereas the second represented algorithms developed in Asia but validated in different populations across the world. Both found accuracy for detecting referable DR of >95%. Diversity and variety are unique strengths of Asia for AI studies. Many more AI studies are ongoing in Asia, not only as prospective deployments in DR but also in glaucoma, age-related macular degeneration, cataract, and systemic disease, such as Alzheimer's disease. Some Asian countries have laid out plans for a digital health care system using AI as one of the puzzle pieces for solving blindness. More studies on AI and digital health are expected to come from Asia in this new decade.
Collapse
Affiliation(s)
| | - Carol Y Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong
| | - Xiulan Zhang
- Zhongshan Ophthalmic Center, State Key Laboratory of Ophthalmology, Sun Yat-Sen University, Guangzhou, People's Republic of China
| | - Rajiv Raman
- Shri Bhagwan Mahavir Vitreoretinal Services, Sankara Nethralaya, Chennai, India
| | - Sang Jun Park
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, Seongnam, South Korea
| | - Daniel Shu Wei Ting
- Duke-NUS Medical School Consultant, Vitreo-retinal Department, Singapore National Eye Center, Singapore
| |
Collapse
|
34
|
Ortega-Santana F, Hernández-Morera P, Ruano-Ferrer F, Ortega-Centol A. Infrared Illumination and Subcutaneous Venous Network: Can it be of Help for the Study of CEAP C 1 Limbs? Eur J Vasc Endovasc Surg 2020; 59:625-634. [PMID: 32008931 DOI: 10.1016/j.ejvs.2019.11.034] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2019] [Revised: 10/25/2019] [Accepted: 11/27/2019] [Indexed: 11/25/2022]
Abstract
OBJECTIVE The subcutaneous venous network (SVN) is difficult to see with the naked eye. Near infrared illumination (NIr-I) claims to improve this. The aims of this observational study were to investigate whether there are differences between the different methods; to quantify the length and diameter of SVNs; and to confirm whether they differ between C0A and C1 CEAP limbs. METHODS In total, 4 796 images, half of them from the visible spectrum (VS) and the other half from the near-infrared spectrum (NIrS), belonging to 109 females (C0A: n = 50; C1 CEAP: n = 59) were used to establish the morphological characteristics of the SVN by visual analysis. With Photoshop CS4, SVN diameters and lengths were obtained by digital analysis of 3 052 images, once the images of whole extremities were excluded. RESULTS On NIr-I, the diameters, trajectories, and colouration of SVNs of C1 limbs appeared more irregular than SVNs of C0A limbs. Compared with the VS images, NIr-I allowed visualisation of a greater length of the SVN in both groups (p < .010). This capacity varied from 2.6 ± 0.9 times (C1) to 16.2 ± 11.9 (C0A). While the SVN length seen in the VS images from C1 limbs was greater than that observed in C0A limbs (p < .001), differences between NIr-I images only existed in the lateral part of the lower leg (p = .016). With NIr-I, the median diameter of the C1 CEAP SVN veins was 5.8 mm (interquartile range [IQR] 4.3-7.5 mm), while the median diameter in C0A SVN limbs was 2.6 mm (IQR 2.0-3.6 mm) (p < .001). CONCLUSION NIr-I reveals the characteristics of the SVN better than the naked eye. Further studies are required to determine the significance of the changes in the SVN in C0A and C1 limbs, and the factors causing them.
Collapse
Affiliation(s)
- Francisco Ortega-Santana
- Department of Morphology, University of Las Palmas de Gran Canaria, Edificio Ciencias de la Salud, Las Palmas de Gran Canaria, Spain; CliniVar, Clínica de Varices, Las Palmas de Gran Canaria, Spain.
| | - Pablo Hernández-Morera
- IUMA Information and Communication Systems, University of Las Palmas de Gran Canaria, Edificio Electrónica y Telecomunicación, Las Palmas de Gran Canaria, Spain
| | | | - Aritz Ortega-Centol
- CliniVar, Clínica de Varices, Las Palmas de Gran Canaria, Spain; Universitary Hospital of Bellvitge, L'Hospitalet de Llobregat, Barcelona, Spain
| |
Collapse
|
35
|
Guo X, Wang H, Lu X, Hu X, Che S, Lu Y. Robust Fovea Localization Based on Symmetry Measure. IEEE J Biomed Health Inform 2020; 24:2315-2326. [PMID: 32031956 DOI: 10.1109/jbhi.2020.2971593] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Automatic fovea localization is a challenging issue. In this article, we focus on fovea localization and propose a robust localization method. We propose the concentric circular sectional symmetry measure (CCSSM) for symmetry axis detection and region of interest (ROI) determination, a global feature descriptor robust against local feature changes, to solve the lesion interference issue, i.e., fovea visibility interference from lesions, using both structure features and morphological features. We propose the index of convexity and concavity (ICC) as the convexity-concavity measure of the surface, providing a quantitative tool for ophthalmologists to determine whether a lesion occurs within the ROI. We propose a weighted gradient accumulation map, which is insensitive to local intensity changes and can overcome the influence of noise and contamination, to perform refined localization. The advantages of the proposed method lie in two aspects. First, accuracy and robustness are achieved without the typically sophisticated steps of blood vessel segmentation and parabola fitting. Second, lesion interference is explicitly considered in the fovea localization scheme. The proposed symmetry-based method is innovative in the solution of fovea detection, and it is simple, practical, and controllable. Experimental results show that the proposed method can resist the interference of unbalanced illumination and lesions, and achieves a high accuracy rate on five datasets. Compared with state-of-the-art methods, the high robustness and accuracy of the proposed method guarantee its reliability.
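The symmetry-axis idea underlying CCSSM can be illustrated with a much simpler sketch: for each candidate vertical axis, a strip of the image is mirrored about that axis and the mean absolute difference between the two halves serves as an (inverse) symmetry score. This is only a toy illustration of symmetry-based localisation under assumed parameters, not the CCSSM descriptor itself.

```python
import numpy as np

def symmetry_scores(gray, band=60):
    """Score every candidate vertical axis by mirror-image dissimilarity.

    gray : 2-D grayscale fundus image
    band : half-width (in columns) of the strip compared on each side (assumed)
    Lower score = more symmetric about that column.
    """
    h, w = gray.shape
    scores = np.full(w, np.inf)
    for c in range(band, w - band):
        left = gray[:, c - band:c]
        right = gray[:, c + 1:c + band + 1][:, ::-1]   # mirrored right strip
        scores[c] = np.mean(np.abs(left - right))
    return scores

# Example: the most symmetric column is taken as a coarse horizontal
# estimate of the fovea region's position.
rng = np.random.default_rng(2)
img = rng.random((256, 256))
axis_col = int(np.argmin(symmetry_scores(img)))
```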
Collapse
|
36
|
A region growing and local adaptive thresholding-based optic disc detection. PLoS One 2020; 15:e0227566. [PMID: 31999720 PMCID: PMC6991997 DOI: 10.1371/journal.pone.0227566] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2019] [Accepted: 12/21/2019] [Indexed: 11/23/2022] Open
Abstract
Automatic optic disc (OD) localization and segmentation is not a simple process, as the OD appearance and size may vary significantly from person to person. This paper presents a novel approach for OD localization and segmentation that is fast as well as robust. In the proposed method, the image is first enhanced by de-hazing and then cropped around the OD region. The cropped image is converted to the HSV domain and the V channel is used for OD detection. The vessels are extracted from the green channel in the cropped region by a multi-scale line detector and then removed by the Laplace transform. Local adaptive thresholding and region growing are applied for binarization. Furthermore, two region properties, eccentricity and area, are then used to detect the true OD region. Finally, ellipse fitting is used to fill the region. Several datasets are used for testing the proposed method. Test results show that the accuracy and sensitivity of the proposed method are much higher than those of existing state-of-the-art methods.
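A compact sketch of the binarisation and shape-filtering stage described above, using scikit-image: local adaptive thresholding produces a binary map, connected components are labelled, and eccentricity/area rules select the OD candidate. The block size and the eccentricity/area limits are assumed values, not the paper's tuned parameters.

```python
import numpy as np
from skimage.filters import threshold_local
from skimage.measure import label, regionprops

def detect_od_candidate(v_channel, block_size=101,
                        max_eccentricity=0.8, min_area=500):
    """Locate a bright, roughly circular region as the optic disc candidate.

    v_channel : HSV value channel of the (cropped) fundus image, float in [0, 1]
    Thresholding, eccentricity and area limits are illustrative assumptions.
    """
    thresh = threshold_local(v_channel, block_size, offset=-0.02)
    binary = v_channel > thresh
    labelled = label(binary)

    best = None
    for region in regionprops(labelled):
        if region.eccentricity <= max_eccentricity and region.area >= min_area:
            if best is None or region.area > best.area:
                best = region
    if best is None:
        return None
    # Centroid plus axis lengths describe the fitted ellipse of the OD region.
    return best.centroid, best.major_axis_length, best.minor_axis_length

# Example on a toy image with one bright blob standing in for the OD.
rng = np.random.default_rng(3)
toy = rng.normal(0.3, 0.02, (256, 256))
toy[100:140, 100:140] += 0.6
print(detect_od_candidate(np.clip(toy, 0, 1)))
```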
Collapse
|
37
|
Ajaz A, Aliahmad B, Kumar H, Sarossy M, Kumar DK. Association between Optical Coherence Tomography and Fluorescein Angiography based retinal features in the diagnosis of Macular Edema. Comput Biol Med 2020; 116:103546. [DOI: 10.1016/j.compbiomed.2019.103546] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2019] [Revised: 10/28/2019] [Accepted: 11/14/2019] [Indexed: 01/18/2023]
|
38
|
de Moura J, Novo J, Rouco J, Charlón P, Ortega M. Artery/Vein Vessel Tree Identification in Near-Infrared Reflectance Retinographies. J Digit Imaging 2019; 32:947-962. [PMID: 31144147 PMCID: PMC6841835 DOI: 10.1007/s10278-019-00235-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022] Open
Abstract
An accurate identification of the retinal arteries and veins is a relevant issue in the development of automatic computer-aided diagnosis systems that facilitate the analysis of relevant diseases affecting the vascular system, such as diabetes or hypertension, among others. The proposed method offers a complete analysis of the retinal vascular tree structure through its identification and subsequent classification into arteries and veins using optical coherence tomography (OCT) scans. These scans include the near-infrared reflectance retinography images used in this work, in combination with the corresponding histological sections. The method first segments the vessel tree and identifies its characteristic points. Then, Global Intensity-Based Features (GIBS) are used to measure the differences in the intensity profiles between arteries and veins. A k-means clustering classifier employs these features to evaluate the artery/vein identification potential of the proposed method. Finally, a post-processing stage is applied to correct misclassifications using context information and maximize the performance of the classification process. The methodology was validated using an OCT image dataset retrieved from 46 different patients, in which 2,392 vessel segments and 97,294 vessel points were manually labeled by an expert clinician. The method achieved satisfactory results, reaching a best accuracy of 93.35% in the identification of arteries and veins, making it the first proposal to address this issue in this image modality.
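The GIBS-plus-k-means idea can be sketched generically: each vessel segment is summarised by a few intensity-profile statistics, and a two-cluster k-means separates the brighter group from the darker one. The features used below are placeholders, not the exact GIBS definition, and which cluster corresponds to arteries is an assumption for the toy data.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_intensity_features(profiles):
    """Summarise each vessel segment's intensity profile with simple statistics.

    profiles : list of 1-D arrays, one normalised intensity profile per segment
    Returns an (n_segments, 4) feature matrix (placeholder GIBS-style features).
    """
    return np.array([[p.mean(), p.std(), p.min(), np.median(p)]
                     for p in profiles])

# Toy profiles: one group brighter than the other, as arteries and veins
# tend to differ in near-infrared reflectance intensity.
rng = np.random.default_rng(4)
artery_like = [rng.normal(0.65, 0.05, 40) for _ in range(25)]
vein_like = [rng.normal(0.45, 0.05, 40) for _ in range(25)]
features = segment_intensity_features(artery_like + vein_like)

# Two-cluster k-means assigns every segment to one of the two vessel classes;
# here the brighter cluster is labelled "artery".
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
artery_cluster = int(np.argmax(km.cluster_centers_[:, 0]))
labels = np.where(km.labels_ == artery_cluster, "artery", "vein")
```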
Collapse
Affiliation(s)
- Joaquim de Moura
- Department of Computer Science, University of A Coruña, 15071 A Coruña, Spain
- CITIC - Research Center of Information and Communication Technologies, University of A Coruña, 15071 A Coruña, Spain
| | - Jorge Novo
- Department of Computer Science, University of A Coruña, 15071 A Coruña, Spain
- CITIC - Research Center of Information and Communication Technologies, University of A Coruña, 15071 A Coruña, Spain
| | - José Rouco
- Department of Computer Science, University of A Coruña, 15071 A Coruña, Spain
- CITIC - Research Center of Information and Communication Technologies, University of A Coruña, 15071 A Coruña, Spain
| | - Pablo Charlón
- Instituto Oftalmológico Victoria de Rojas, 15009 A Coruña, Spain
| | - Marcos Ortega
- Department of Computer Science, University of A Coruña, 15071 A Coruña, Spain
- CITIC - Research Center of Information and Communication Technologies, University of A Coruña, 15071 A Coruña, Spain
| |
Collapse
|
39
|
Devalla SK, Liang Z, Pham TH, Boote C, Strouthidis NG, Thiery AH, Girard MJA. Glaucoma management in the era of artificial intelligence. Br J Ophthalmol 2019; 104:301-311. [DOI: 10.1136/bjophthalmol-2019-315016] [Citation(s) in RCA: 36] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2019] [Revised: 09/07/2019] [Accepted: 10/05/2019] [Indexed: 12/20/2022]
Abstract
Glaucoma is a result of irreversible damage to the retinal ganglion cells. While an early intervention could minimise the risk of vision loss in glaucoma, its asymptomatic nature makes it difficult to diagnose until a late stage. The diagnosis of glaucoma is a complicated and expensive effort that is heavily dependent on the experience and expertise of a clinician. The application of artificial intelligence (AI) algorithms in ophthalmology has improved our understanding of many retinal, macular, choroidal and corneal pathologies. With the advent of deep learning, a number of tools for the classification, segmentation and enhancement of ocular images have been developed. Over the years, several AI techniques have been proposed to help detect glaucoma by analysis of functional and/or structural evaluations of the eye. Moreover, the use of AI has also been explored to improve the reliability of ascribing disease prognosis. This review summarises the role of AI in the diagnosis and prognosis of glaucoma, discusses the advantages and challenges of using AI systems in clinics and predicts likely areas of future progress.
Collapse
|
40
|
DCCMED-Net: Densely connected and concatenated multi Encoder-Decoder CNNs for retinal vessel extraction from fundus images. Med Hypotheses 2019; 134:109426. [PMID: 31622926 DOI: 10.1016/j.mehy.2019.109426] [Citation(s) in RCA: 30] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2019] [Accepted: 10/09/2019] [Indexed: 11/22/2022]
Abstract
Recent studies have shown that convolutional neural networks (CNNs) can be more accurate, more efficient and even deeper in training if they include direct connections from the layers close to the input to those close to the output in order to transfer activation maps. Building on this observation, this study introduces a new CNN model, namely the Densely Connected and Concatenated Multi Encoder-Decoder (DCCMED) network. DCCMED contains concatenated multi encoder-decoder CNNs and connects certain layers to the corresponding input of the subsequent encoder-decoder block in a feed-forward fashion, for retinal vessel extraction from fundus images. The DCCMED model has attractive properties such as reducing the vanishing of pixel-level information and encouraging feature reuse. A patch-based data augmentation strategy is also developed for training the proposed DCCMED model, which increases the generalization ability of the network. Experiments are carried out on two publicly available datasets, namely Digital Retinal Images for Vessel Extraction (DRIVE) and Structured Analysis of the Retina (STARE). Evaluation criteria such as sensitivity (Se), specificity (Sp), accuracy (Acc), Dice and area under the receiver operating characteristic curve (AUC) are used to verify the effectiveness of the proposed method. The obtained results are compared with several supervised and unsupervised state-of-the-art methods based on AUC scores and demonstrate that the proposed DCCMED model yields the best performance according to accuracy and AUC scores.
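The patch-based training-data strategy mentioned above can be sketched as follows: random patches are sampled from each fundus image and its vessel mask, optionally flipped and rotated, to multiply the effective number of training examples. The patch size and the augmentation operations are assumptions; this is not the DCCMED authors' exact pipeline.

```python
import numpy as np

def sample_augmented_patches(image, mask, n_patches=32, size=48, rng=None):
    """Randomly crop, flip and rotate image/mask patches for training.

    image : H x W fundus channel (float array); mask : H x W binary vessel map
    Returns two arrays of shape (n_patches, size, size).
    """
    rng = rng or np.random.default_rng()
    h, w = image.shape
    imgs, msks = [], []
    for _ in range(n_patches):
        y = rng.integers(0, h - size)
        x = rng.integers(0, w - size)
        ip, mp = image[y:y + size, x:x + size], mask[y:y + size, x:x + size]

        if rng.random() < 0.5:                 # random horizontal flip
            ip, mp = ip[:, ::-1], mp[:, ::-1]
        k = int(rng.integers(0, 4))            # random 90-degree rotation
        ip, mp = np.rot90(ip, k), np.rot90(mp, k)

        imgs.append(ip.copy())
        msks.append(mp.copy())
    return np.stack(imgs), np.stack(msks)

# Example with a toy image/mask pair.
rng = np.random.default_rng(5)
img = rng.random((256, 256))
vessels = (rng.random((256, 256)) > 0.9).astype(np.float32)
patches, targets = sample_augmented_patches(img, vessels, rng=rng)
```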
Collapse
|
41
|
Reddy S. Automatic Screening of Diabetic Maculopathy Using Image Processing. INTERNATIONAL JOURNAL OF TECHNOLOGY AND HUMAN INTERACTION 2019. [DOI: 10.4018/ijthi.2019100103] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Retinal imaging is a challenging screening method for the detection of retinal abnormalities. Diabetic maculopathy (DM) is a condition that can result from retinopathy. Regular screening for diabetic maculopathy is necessary in order to identify the risk of vision loss. Maculopathy is damage to the macula, the key region responsible for sharp colour vision. Diabetic retinopathy and diabetic maculopathy need regular observation in order to indicate the risk of visual impairment. In this article, the author first presents a brief summary of diabetic maculopathy and its causes. Then, an exhaustive literature review of different automated DM diagnosis systems is offered. It is important for ophthalmologists to have an automated system which detects early symptoms of the disease and yields highly accurate results. A critical assessment of the image processing techniques used for DM feature detection is presented in this paper. Various methods have been proposed to identify and classify DM based on severity level.
Collapse
Affiliation(s)
- Shweta Reddy
- Visvesvaraya Technological University, Gulbarga, India
| |
Collapse
|
42
|
Abdullah AS, Rahebi J, Özok YE, Aljanabi M. A new and effective method for human retina optic disc segmentation with fuzzy clustering method based on active contour model. Med Biol Eng Comput 2019; 58:25-37. [DOI: 10.1007/s11517-019-02032-8] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2018] [Accepted: 08/13/2019] [Indexed: 10/26/2022]
|
43
|
|
44
|
Son J, Park SJ, Jung KH. Towards Accurate Segmentation of Retinal Vessels and the Optic Disc in Fundoscopic Images with Generative Adversarial Networks. J Digit Imaging 2019; 32:499-512. [PMID: 30291477 PMCID: PMC6499859 DOI: 10.1007/s10278-018-0126-3] [Citation(s) in RCA: 71] [Impact Index Per Article: 14.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022] Open
Abstract
Automatic segmentation of the retinal vasculature and the optic disc is a crucial task for accurate geometric analysis and reliable automated diagnosis. In recent years, Convolutional Neural Networks (CNN) have shown outstanding performance compared to the conventional approaches in the segmentation tasks. In this paper, we experimentally measure the performance gain for Generative Adversarial Networks (GAN) framework when applied to the segmentation tasks. We show that GAN achieves statistically significant improvement in area under the receiver operating characteristic (AU-ROC) and area under the precision and recall curve (AU-PR) on two public datasets (DRIVE, STARE) by segmenting fine vessels. Also, we found a model that surpassed the current state-of-the-art method by 0.2 - 1.0% in AU-ROC and 0.8 - 1.2% in AU-PR and 0.5 - 0.7% in dice coefficient. In contrast, significant improvements were not observed in the optic disc segmentation task on DRIONS-DB, RIM-ONE (r3) and Drishti-GS datasets in AU-ROC and AU-PR.
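The GAN framework for segmentation evaluated above can be sketched in PyTorch: a segmentation network plays the generator, a discriminator scores (image, mask) pairs, and the generator is trained with a pixel-wise loss plus an adversarial term. The tiny networks and the loss weighting below are placeholders for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

# Tiny stand-ins for the segmentation network (generator) and discriminator.
gen = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())
disc = nn.Sequential(nn.Conv2d(4, 16, 3, stride=2, padding=1), nn.ReLU(),
                     nn.Conv2d(16, 1, 3, stride=2, padding=1),
                     nn.AdaptiveAvgPool2d(1), nn.Flatten())

bce = nn.BCEWithLogitsLoss()
pixel_loss = nn.BCELoss()
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)

def train_step(image, mask, adv_weight=0.1):
    """One adversarial training step; adv_weight is an assumed balance term."""
    # Discriminator: real (image, mask) pairs vs fake (image, prediction) pairs.
    pred = gen(image).detach()
    d_real = disc(torch.cat([image, mask], dim=1))
    d_fake = disc(torch.cat([image, pred], dim=1))
    loss_d = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator while matching the reference mask.
    pred = gen(image)
    d_fake = disc(torch.cat([image, pred], dim=1))
    loss_g = pixel_loss(pred, mask) + \
             adv_weight * bce(d_fake, torch.ones_like(d_fake))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Example step with random tensors standing in for a fundus image and mask.
img = torch.rand(2, 3, 64, 64)
vessel_mask = (torch.rand(2, 1, 64, 64) > 0.5).float()
print(train_step(img, vessel_mask))
```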
Collapse
Affiliation(s)
- Jaemin Son
- VUNO Inc., 6F, 507, Gangnam-daero, Seocho-gu, Seoul, Republic of Korea
| | - Sang Jun Park
- Department of Ophthalmology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
| | - Kyu-Hwan Jung
- VUNO Inc., 6F, 507, Gangnam-daero, Seocho-gu, Seoul, Republic of Korea
| |
Collapse
|
45
|
Rasta SH, Mohammadi F, Esmaeili M, Javadzadeh A, Tabar HA. The computer based method to diabetic retinopathy assessment in retinal images: a review. ELECTRONIC JOURNAL OF GENERAL MEDICINE 2019. [DOI: 10.29333/ejgm/108619] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022]
|
46
|
Vigneshwaran V, Sands GB, LeGrice IJ, Smaill BH, Smith NP. Reconstruction of coronary circulation networks: A review of methods. Microcirculation 2019; 26:e12542. [DOI: 10.1111/micc.12542] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2018] [Revised: 01/25/2019] [Accepted: 02/27/2019] [Indexed: 12/12/2022]
Affiliation(s)
- Vibujithan Vigneshwaran
- Auckland Bioengineering Institute University of Auckland Auckland New Zealand
- Faculty of Engineering University of Auckland Auckland New Zealand
| | - Gregory B. Sands
- Auckland Bioengineering Institute University of Auckland Auckland New Zealand
| | - Ian J. LeGrice
- Department of Physiology University of Auckland Auckland New Zealand
| | - Bruce H. Smaill
- Auckland Bioengineering Institute University of Auckland Auckland New Zealand
| | - Nicolas P. Smith
- Auckland Bioengineering Institute University of Auckland Auckland New Zealand
- Faculty of Engineering University of Auckland Auckland New Zealand
| |
Collapse
|
47
|
Optic Disc Localization in Complicated Environment of Retinal Image Using Circular-Like Estimation. ARABIAN JOURNAL FOR SCIENCE AND ENGINEERING 2019. [DOI: 10.1007/s13369-019-03756-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
|
48
|
Jeena R, Sukesh Kumar A, Mahadevan K. Stroke diagnosis from retinal fundus images using multi texture analysis. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2019. [DOI: 10.3233/jifs-169914] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Affiliation(s)
- R.S. Jeena
- Department of Electronics and Communication, Research Scholar, College of Engineering Trivandrum, Kerala
| | - A. Sukesh Kumar
- Department of Electronics and Communication, College of Engineering Trivandrum, Kerala
| | - K. Mahadevan
- Department of Ophthalmology, Sree Gokulam Medical College and Research Foundation, Trivandrum, Kerala
| |
Collapse
|
49
|
Balasubramanian K, Ananthamoorthy NP. Analysis of hybrid statistical textural and intensity features to discriminate retinal abnormalities through classifiers. Proc Inst Mech Eng H 2019; 233:506-514. [PMID: 30894077 DOI: 10.1177/0954411919835856] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Retinal image analysis relies on the effectiveness of computational techniques to discriminate various abnormalities of the eye such as diabetic retinopathy, macular degeneration and glaucoma. In the case of glaucoma, the onset of the disease often goes unnoticed, and its effect is felt only at a later stage. Such degenerative diseases warrant early diagnosis and treatment. In this work, the performance of statistical and textural features in retinal vessel segmentation is evaluated through classifiers such as the extreme learning machine, support vector machine and random forest. The fundus images are initially preprocessed for noise reduction, image enhancement and contrast adjustment. Two-dimensional Gabor wavelets and partition clustering are employed on the preprocessed image to extract the blood vessels. Finally, the combined hybrid features, comprising statistical textural, intensity and vessel morphological features extracted from the image, are used to detect glaucomatous abnormality through the classifiers. A crisp decision can be taken depending on the classification rates of the classifiers. The public databases RIM-ONE and high-resolution fundus and a local dataset are used for evaluation with threefold cross-validation. The evaluation is based on the performance metrics of accuracy, sensitivity and specificity. The hybrid features obtained an overall accuracy of 97% when tested using the classifiers. The support vector machine classifier achieves an accuracy of 93.33% on high-resolution fundus, 93.8% on the RIM-ONE dataset and 95.3% on the local dataset. For the extreme learning machine classifier, the accuracy is 95.1% on high-resolution fundus, 97.8% on RIM-ONE and 96.8% on the local dataset. An accuracy of 94.5% on high-resolution fundus, 92.5% on RIM-ONE and 94.2% on the local dataset is obtained for the random forest classifier. Validation of the experimental results indicates that the hybrid features can be deployed in supervised classifiers to discriminate retinal abnormalities effectively.
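The Gabor-wavelet-plus-partition-clustering stage described above can be sketched with scikit-image and scikit-learn: the green channel is filtered with a bank of oriented Gabor kernels, the maximum magnitude over orientations enhances elongated vessel structures, and a two-cluster k-means on the responses separates vessel from background pixels. The frequency, number of orientations and cluster count are assumed values, not the paper's parameters.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.cluster import KMeans

def gabor_vessel_response(green, frequency=0.2, n_orient=8):
    """Maximum Gabor magnitude over a bank of orientations (assumed settings)."""
    responses = []
    for theta in np.linspace(0, np.pi, n_orient, endpoint=False):
        real, imag = gabor(green, frequency=frequency, theta=theta)
        responses.append(np.hypot(real, imag))
    return np.max(responses, axis=0)

def cluster_vessels(response):
    """Two-cluster k-means on the Gabor response; brighter cluster = vessels."""
    flat = response.reshape(-1, 1)
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(flat)
    vessel_cluster = int(np.argmax(km.cluster_centers_.ravel()))
    return (km.labels_ == vessel_cluster).reshape(response.shape)

# Example on a toy green channel.
rng = np.random.default_rng(6)
green = rng.random((128, 128))
vessel_map = cluster_vessels(gabor_vessel_response(green))
```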
Collapse
|
50
|
Wang W, Wang W, Hu Z. Segmenting retinal vessels with revised top-bottom-hat transformation and flattening of minimum circumscribed ellipse. Med Biol Eng Comput 2019; 57:1481-1496. [DOI: 10.1007/s11517-019-01967-2] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2018] [Accepted: 02/23/2019] [Indexed: 11/29/2022]
|