1
Nakayama LF, Restrepo D, Matos J, Ribeiro LZ, Malerbi FK, Celi LA, Regatieri CS. BRSET: A Brazilian Multilabel Ophthalmological Dataset of Retina Fundus Photos. PLoS Digit Health 2024; 3:e0000454. [PMID: 38991014 PMCID: PMC11239107 DOI: 10.1371/journal.pdig.0000454]
Abstract
INTRODUCTION The Brazilian Multilabel Ophthalmological Dataset (BRSET) addresses the scarcity of publicly available ophthalmological datasets in Latin America. BRSET comprises 16,266 color fundus retinal photos from 8,524 Brazilian patients and aims to enhance data representativeness, serving as a research and teaching tool. It contains sociodemographic information, enabling investigations into differential model performance across demographic groups. METHODS Data from three São Paulo outpatient centers yielded demographic and medical information from electronic records, including nationality, age, sex, clinical history, insulin use, and duration of diabetes diagnosis. A retinal specialist labeled images for anatomical features (optic disc, blood vessels, macula), quality control (focus, illumination, image field, artifacts), and pathologies (e.g., diabetic retinopathy). Diabetic retinopathy was graded using the International Clinical Diabetic Retinopathy scale and the Scottish Diabetic Retinopathy Grading scheme. Validation used a ConvNext model trained for 50 epochs with a weighted cross-entropy loss, using 70% training (of which 20% served as validation) and 30% testing subsets. Performance metrics included area under the receiver operating characteristic curve (AUC) and macro F1-score. Saliency maps were calculated for interpretability. RESULTS BRSET comprises 65.1% Canon CR2 and 34.9% Nikon NF5050 images. Of the patients, 61.8% are female, and the average age is 57.6 (± 18.26) years. Diabetic retinopathy affected 15.8% of patients, across a spectrum of disease severity. Anatomically, 20.2% showed abnormal optic discs, 4.9% abnormal blood vessels, and 28.8% abnormal macula.
A ConvNext V2 model was trained and evaluated on BRSET across four prediction tasks: "binary diabetic retinopathy diagnosis (Normal vs Diabetic Retinopathy)" (AUC: 97, F1: 89); "3-class diabetic retinopathy diagnosis (Normal, Proliferative, Non-Proliferative)" (AUC: 97, F1: 82); "diabetes diagnosis" (AUC: 91, F1: 83); and "sex classification" (AUC: 87, F1: 70). DISCUSSION BRSET is the first multilabel ophthalmological dataset in Brazil and Latin America. It provides an opportunity for investigating model biases by evaluating performance across demographic groups. The model performance across these prediction tasks demonstrates the value of the dataset for external validation and for teaching medical computer vision to learners in Latin America using locally relevant data sources.
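The macro F1-score reported above, and the class weighting behind a weighted cross-entropy loss, can be sketched in plain Python as follows; this is a minimal illustration, and the inverse-frequency weighting scheme is an assumption for demonstration, not a detail taken from the BRSET paper:

```python
from collections import Counter

def per_class_f1(y_true, y_pred, cls):
    """F1 for one class, treated one-vs-rest."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1, so rare classes count equally."""
    classes = sorted(set(y_true))
    return sum(per_class_f1(y_true, y_pred, c) for c in classes) / len(classes)

def class_weights(y_true):
    """Inverse-frequency weights of the kind fed to a weighted cross-entropy loss."""
    counts = Counter(y_true)
    n, k = len(y_true), len(counts)
    return {c: n / (k * counts[c]) for c in counts}
```

Macro averaging is what makes the metric sensitive to minority classes such as the rarer retinopathy grades.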
Affiliation(s)
- Luis Filipe Nakayama
- Department of Ophthalmology, São Paulo Federal University, São Paulo, São Paulo, Brazil
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- David Restrepo
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Telematics Department, University of Cauca, Popayán, Cauca, Colombia
- João Matos
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Faculty of Engineering of University of Porto, Porto, Portugal
- Lucas Zago Ribeiro
- Department of Ophthalmology, São Paulo Federal University, São Paulo, São Paulo, Brazil
- Fernando Korn Malerbi
- Department of Ophthalmology, São Paulo Federal University, São Paulo, São Paulo, Brazil
- Leo Anthony Celi
- Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Division of Pulmonary, Critical Care and Sleep Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts, United States of America
- Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, Massachusetts, United States of America
- Caio Saito Regatieri
- Department of Ophthalmology, São Paulo Federal University, São Paulo, São Paulo, Brazil
2
Hughes-Cano JA, Quiroz-Mercado H, Hernández-Zimbrón LF, García-Franco R, Rubio Mijangos JF, López-Star E, García-Roa M, Lansingh VC, Olivares-Pinto U, Thébault SC. Improved predictive diagnosis of diabetic macular edema based on hybrid models: An observational study. Comput Biol Med 2024; 170:107979. [PMID: 38219645 DOI: 10.1016/j.compbiomed.2024.107979]
Abstract
Diabetic Macular Edema (DME) is the most common sight-threatening complication of type 2 diabetes. Optical Coherence Tomography (OCT) is the most useful imaging technique to diagnose, follow up, and evaluate treatments for DME. However, OCT exams and devices are expensive and unavailable in many clinics in low- and middle-income countries. Our primary goal was therefore to develop an alternative method to OCT for DME diagnosis by introducing spectral information derived from spontaneous electroretinogram (ERG) signals, either as a single input or combined with the much more widespread eye fundus. Baseline ERGs were recorded in 233 patients and transformed into scalograms and spectrograms via Wavelet and Fourier transforms, respectively. Using transfer learning, distinct Convolutional Neural Networks (CNNs) were trained as classifiers for DME using OCT, scalogram, spectrogram, and eye fundus images. Input data were randomly split into training and test sets in an 80%-20% proportion, respectively. The top performers for each input type were selected (OpticNet-71 for OCT, DenseNet-201 for eye fundus and non-evoked ERG-derived scalograms) to generate a combined model by assigning different weights to each of the selected models. Model validation was performed using a dataset independent of the models' training phase. None of the models powered by mock ERG-derived input alone performed well. In contrast, hybrid models showed better results; in particular, the model powered by eye fundus combined with mock ERG-derived information achieved a 91% AUC and 86% F1-score, and the model powered by OCT and mock ERG-derived scalogram images a 93% AUC and 89% F1-score. These data show that the spontaneous ERG-derived input adds predictive value to the fundus- and OCT-based models for diagnosing DME, except for the sensitivity of the OCT model, which remains the same.
The inclusion of mock ERG signals, which have recently been shown to take only 5 min to record in daylight conditions, therefore represents a potential improvement over existing OCT-based models, as well as a reliable and cost-effective alternative when combined with the fundus, especially in underserved areas, to predict DME.
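The two steps described above, turning a 1-D signal into a time-frequency image and fusing two models' output probabilities with fixed weights, can be sketched as follows. The window length, hop size, and fusion weights are illustrative assumptions; a real pipeline would use proper wavelet/Fourier libraries rather than this crude DFT:

```python
import math, cmath

def spectrogram(signal, win=8, hop=4):
    """Magnitude spectrum of each windowed frame (a crude STFT)."""
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        frame = signal[start:start + win]
        spectrum = []
        for k in range(win // 2):  # keep the non-redundant frequency bins
            s = sum(x * cmath.exp(-2j * math.pi * k * n / win)
                    for n, x in enumerate(frame))
            spectrum.append(abs(s))
        frames.append(spectrum)
    return frames  # time x frequency matrix, ready to render as an image

def fuse(p_fundus, p_erg, w_fundus=0.6, w_erg=0.4):
    """Weighted average of two models' DME probabilities (weights assumed)."""
    return w_fundus * p_fundus + w_erg * p_erg
```

A pure 2-cycles-per-window sine concentrates its energy in frequency bin k=2, which is what makes such images informative inputs for a CNN.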
Affiliation(s)
- J A Hughes-Cano
- Laboratorio de Investigación Traslacional en Salud Visual, Instituto de Neurobiología, Universidad Nacional Autónoma de México (UNAM), Campus Juriquilla, Querétaro, Mexico
- H Quiroz-Mercado
- Research Department, Asociación Para Evitar la Ceguera, Mexico City, Mexico
- R García-Franco
- Instituto Mexicano de Oftalmología (IMO), I.A.P., Circuito Exterior Estadio Corregidora Sn, Centro Sur, 76090 Santiago de Querétaro, Querétaro, Mexico
- J F Rubio Mijangos
- Instituto Mexicano de Oftalmología (IMO), I.A.P., Circuito Exterior Estadio Corregidora Sn, Centro Sur, 76090 Santiago de Querétaro, Querétaro, Mexico
- E López-Star
- Instituto Mexicano de Oftalmología (IMO), I.A.P., Circuito Exterior Estadio Corregidora Sn, Centro Sur, 76090 Santiago de Querétaro, Querétaro, Mexico
- M García-Roa
- Instituto Mexicano de Oftalmología (IMO), I.A.P., Circuito Exterior Estadio Corregidora Sn, Centro Sur, 76090 Santiago de Querétaro, Querétaro, Mexico
- V C Lansingh
- Instituto Mexicano de Oftalmología (IMO), I.A.P., Circuito Exterior Estadio Corregidora Sn, Centro Sur, 76090 Santiago de Querétaro, Querétaro, Mexico; HelpMeSee, Inc., 20 West 36th Street, Floor 4, New York, NY, 10018-8005, USA
- U Olivares-Pinto
- Escuela Nacional de Estudios Superiores Unidad Juriquilla, Universidad Nacional Autónoma de México (UNAM), Campus Juriquilla, Querétaro, Mexico
- S C Thébault
- Laboratorio de Investigación Traslacional en Salud Visual, Instituto de Neurobiología, Universidad Nacional Autónoma de México (UNAM), Campus Juriquilla, Querétaro, Mexico
3
Gonçalves MB, Nakayama LF, Ferraz D, Faber H, Korot E, Malerbi FK, Regatieri CV, Maia M, Celi LA, Keane PA, Belfort R. Image quality assessment of retinal fundus photographs for diabetic retinopathy in the machine learning era: a review. Eye (Lond) 2024; 38:426-433. [PMID: 37667028 PMCID: PMC10858054 DOI: 10.1038/s41433-023-02717-3]
Abstract
This study aimed to evaluate the image quality assessment (IQA) practices and quality criteria employed in publicly available datasets for diabetic retinopathy (DR). A literature search strategy was used to identify relevant datasets, and 20 datasets were included in the analysis. Of these, 12 datasets mentioned performing IQA, but only eight specified the quality criteria used. The reported quality criteria varied widely across datasets, and accessing the information was often challenging. The findings highlight the importance of IQA for AI model development while emphasizing the need for clear and accessible reporting of IQA information. Given the importance of IQA for developing, validating, and implementing deep learning (DL) algorithms, it is recommended that this information be reported in a clear, specific, and accessible way whenever possible. Automated quality assessments are a valid alternative to the traditional manual labeling process, and quality standards should be determined according to population characteristics, clinical use, and research purpose. In conclusion, image quality assessment is important for AI model development; however, strict data quality standards must not limit data sharing.
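An automated quality gate of the kind the review discusses could, in its simplest form, look like the sketch below. The brightness and contrast thresholds are illustrative assumptions, not criteria from any of the surveyed datasets, and `pixels` is assumed to be a flat list of grayscale values:

```python
def image_quality_ok(pixels, min_mean=30, max_mean=225, min_std=15):
    """Crude automated quality gate: reject images that are too dark,
    too bright (poor illumination), or too flat (poor contrast).
    Thresholds are illustrative, not from any published protocol."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    std = var ** 0.5
    return min_mean <= mean <= max_mean and std >= min_std
```

Population-specific thresholds would need to be tuned, which is exactly the review's point about tying quality standards to population characteristics and intended use.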
Affiliation(s)
- Mariana Batista Gonçalves
- Department of Ophthalmology, Sao Paulo Federal University, São Paulo, SP, Brazil
- Instituto Paulista de Estudos e Pesquisas em Oftalmologia, IPEPO, Vision Institute, São Paulo, SP, Brazil
- NIHR Biomedical Research Centre for Ophthalmology, Moorfield Eye Hospital, NHS Foundation Trust, and UCL Institute of Ophthalmology, London, UK
- Luis Filipe Nakayama
- Department of Ophthalmology, Sao Paulo Federal University, São Paulo, SP, Brazil
- Massachusetts Institute of Technology, Laboratory for Computational Physiology, Cambridge, MA, USA
- Daniel Ferraz
- Department of Ophthalmology, Sao Paulo Federal University, São Paulo, SP, Brazil
- Instituto Paulista de Estudos e Pesquisas em Oftalmologia, IPEPO, Vision Institute, São Paulo, SP, Brazil
- NIHR Biomedical Research Centre for Ophthalmology, Moorfield Eye Hospital, NHS Foundation Trust, and UCL Institute of Ophthalmology, London, UK
- Hanna Faber
- Department of Ophthalmology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Department of Ophthalmology, University of Tuebingen, Tuebingen, Germany
- Edward Korot
- Retina Specialists of Michigan, Grand Rapids, MI, USA
- Stanford University Byers Eye Institute Palo Alto, Palo Alto, CA, USA
- Mauricio Maia
- Department of Ophthalmology, Sao Paulo Federal University, São Paulo, SP, Brazil
- Leo Anthony Celi
- Massachusetts Institute of Technology, Laboratory for Computational Physiology, Cambridge, MA, USA
- Harvard TH Chan School of Public Health, Department of Biostatistics, Boston, MA, USA
- Beth Israel Deaconess Medical Center, Department of Medicine, Boston, MA, USA
- Pearse A Keane
- NIHR Biomedical Research Centre for Ophthalmology, Moorfield Eye Hospital, NHS Foundation Trust, and UCL Institute of Ophthalmology, London, UK
- Rubens Belfort
- Department of Ophthalmology, Sao Paulo Federal University, São Paulo, SP, Brazil
- Instituto Paulista de Estudos e Pesquisas em Oftalmologia, IPEPO, Vision Institute, São Paulo, SP, Brazil
4
Manan MA, Jinchao F, Khan TM, Yaqub M, Ahmed S, Chuhan IS. Semantic segmentation of retinal exudates using a residual encoder-decoder architecture in diabetic retinopathy. Microsc Res Tech 2023; 86:1443-1460. [PMID: 37194727 DOI: 10.1002/jemt.24345]
Abstract
Exudates are a common sign of diabetic retinopathy, a disease that affects the blood vessels in the retina. Early detection of exudates is critical to avoiding vision problems through continuous screening and treatment. In traditional clinical practice, the involved lesions are manually detected using photographs of the fundus. However, this task is cumbersome and time-consuming and requires intense effort due to the small size of the lesions and the low contrast of the images. Thus, computer-assisted diagnosis of retinal disease based on lesion detection has been actively explored recently. In this paper, we present a comparison of deep convolutional neural network (CNN) architectures and propose a residual CNN with residual skip connections to reduce the parameter count for the semantic segmentation of exudates in retinal images. A suitable image augmentation technique is used to improve the performance of the network architecture. The proposed network can robustly segment exudates with high accuracy, which makes it suitable for diabetic retinopathy screening. A comparative performance analysis on three benchmark databases (E-ophtha, DIARETDB1, and the Hamilton Ophthalmology Institute's Macular Edema dataset) is presented. The proposed method achieves a precision of 0.95, 0.92, and 0.97; accuracy of 0.98, 0.98, and 0.98; sensitivity of 0.97, 0.95, and 0.95; specificity of 0.99, 0.99, and 0.99; and area under the curve of 0.97, 0.94, and 0.96, respectively. RESEARCH HIGHLIGHTS: The research focuses on the detection and segmentation of exudates in diabetic retinopathy, a disease affecting the retina. Early detection of exudates is important to avoid vision problems and requires continuous screening and treatment. Currently, manual detection is time-consuming and requires intense effort.
The authors compare qualitative results of state-of-the-art convolutional neural network (CNN) architectures and propose a computer-assisted diagnosis approach based on deep learning, using a residual CNN with residual skip connections to reduce parameters. The proposed method is evaluated on three benchmark databases and demonstrates high accuracy and suitability for diabetic retinopathy screening.
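The residual-skip idea at the core of the architecture above can be illustrated with a minimal sketch; `toy_transform`, standing in for a learned convolution, is an assumption made purely for brevity:

```python
def residual_block(x, transform):
    """y = x + F(x): the skip connection lets gradients flow directly
    through the identity path, easing training of deep encoder-decoders."""
    fx = transform(x)
    assert len(fx) == len(x), "transform must preserve shape"
    return [xi + fi for xi, fi in zip(x, fx)]

# A toy "learned" transform: a scale standing in for a convolution layer.
def toy_transform(x):
    return [0.5 * xi for xi in x]
```

Because the identity path adds no parameters, stacking such blocks deepens the network without the parameter growth of plain stacked convolutions, which is the paper's stated motivation.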
Affiliation(s)
- Malik Abdul Manan
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Feng Jinchao
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Tariq M Khan
- School of IT, Deakin University, Waurn Ponds, Australia
- Muhammad Yaqub
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Shahzad Ahmed
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Imran Shabir Chuhan
- Interdisciplinary Research Institute, Faculty of Science, Beijing University of Technology, Beijing, China
5
Kaur J, Mittal D, Malebary S, Nayak SR, Kumar D, Kumar M, Gagandeep, Singh S. Automated Detection and Segmentation of Exudates for the Screening of Background Retinopathy. J Healthc Eng 2023; 2023:4537253. [PMID: 37483301 PMCID: PMC10361834 DOI: 10.1155/2023/4537253]
Abstract
Exudate, an asymptomatic yellow deposit on the retina, is among the primary characteristics of background diabetic retinopathy, a retinopathy related to chronically high blood sugar levels, which slowly affect organs throughout the body. The early detection of exudates aids doctors in screening patients suffering from background diabetic retinopathy. The computer-aided method proposed in the present work detects and then segments exudates in retinal images acquired with a digital fundus camera by (i) a gradient method to trace the contours of exudates, (ii) marking connected candidate pixels to remove false exudate pixels, and (iii) linking edge pixels for the boundary extraction of exudates. The method is tested on 1307 retinal fundus images with varying characteristics. Six hundred and forty-nine images were acquired from a hospital and the remaining 658 from open-source benchmark databases, namely STARE, DRIVE, MESSIDOR, DiaretDB1, and e-Ophtha. The exudate segmentation method proposed in this research work achieves image-based (i) accuracy of 98.04%, (ii) sensitivity of 95.345%, and (iii) specificity of 98.63%. The segmentation results for a number of exudate-based evaluations show an average (i) accuracy of 95.68%, (ii) sensitivity of 93.44%, and (iii) specificity of 97.22%. The substantial combined performance in image- and exudate-based evaluations demonstrates the value of the proposed method in mass screening as well as the treatment process of background diabetic retinopathy.
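Step (i) above, and the pixel-level sensitivity/specificity reported for the segmentation, can be partially sketched as follows; the 1-D gradient operator and flat binary masks are simplified stand-ins for the full 2-D pipeline:

```python
def gradient_magnitude(row):
    """Central-difference gradient along one image row; high values mark
    the sharp intensity transitions at exudate contours."""
    return [abs(row[min(i + 1, len(row) - 1)] - row[max(i - 1, 0)]) / 2
            for i in range(len(row))]

def sens_spec(pred_mask, true_mask):
    """Pixel-wise sensitivity and specificity of a predicted binary mask
    against a reference mask (both flattened to 0/1 lists)."""
    tp = sum(1 for p, t in zip(pred_mask, true_mask) if p and t)
    tn = sum(1 for p, t in zip(pred_mask, true_mask) if not p and not t)
    fp = sum(1 for p, t in zip(pred_mask, true_mask) if p and not t)
    fn = sum(1 for p, t in zip(pred_mask, true_mask) if not p and t)
    return tp / (tp + fn), tn / (tn + fp)
```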
Affiliation(s)
- Jaskirat Kaur
- Department of Electronics and Communication Engineering, Punjab Engineering College (Deemed to be University), Sector 12, Chandigarh 160012, India
- Deepti Mittal
- Electrical and Instrumentation Engineering Department, Thapar Institute of Engineering and Technology, Patiala 147004, India
- Sharaf Malebary
- Department of Information Technology, Faculty of Computing and Information Technology in Rabigh, King Abdulaziz University, Jeddah 21911, Saudi Arabia
- Soumya Ranjan Nayak
- School of Computer Engineering, KIIT Deemed to be University, Bhubaneswar 751024, Odisha, India
- Devendra Kumar
- Department of Computer Science, Wachemo University, Hosaena, Ethiopia
- Manoj Kumar
- Faculty of Engineering and Information Sciences, University of Wollongong in Dubai, Dubai Knowledge Park, UAE
- MEU Research Unit, Middle East University, Amman 11831, Jordan
- Gagandeep
- Computer Science Engineering Department, Chandigarh Engineering College, Mohali, India
- Simrandeep Singh
- Electronics and Communication Engineering Department, UCRD, Chandigarh University, Mohali, India
6
Krzywicki T, Brona P, Zbrzezny AM, Grzybowski AE. A Global Review of Publicly Available Datasets Containing Fundus Images: Characteristics, Barriers to Access, Usability, and Generalizability. J Clin Med 2023; 12:jcm12103587. [PMID: 37240693 DOI: 10.3390/jcm12103587]
Abstract
This article provides a comprehensive and up-to-date overview of repositories that contain color fundus images. We analyzed them regarding availability and legality, presented the datasets' characteristics, and identified labeled and unlabeled image sets. This study aimed to compile all publicly available color fundus image datasets into a central catalog.
Affiliation(s)
- Tomasz Krzywicki
- Faculty of Mathematics and Computer Science, University of Warmia and Mazury, 10-710 Olsztyn, Poland
- Piotr Brona
- Department of Ophthalmology, Poznan City Hospital, 61-285 Poznań, Poland
- Agnieszka M Zbrzezny
- Faculty of Mathematics and Computer Science, University of Warmia and Mazury, 10-710 Olsztyn, Poland
- Faculty of Design, SWPS University of Social Sciences and Humanities, Chodakowska 19/31, 03-815 Warsaw, Poland
- Andrzej E Grzybowski
- Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, 60-836 Poznań, Poland
7
Tavana P, Akraminia M, Koochari A, Bagherifard A. Classification of spinal curvature types using radiography images: deep learning versus classical methods. Artif Intell Rev 2023; 56:1-33. [PMID: 37362895 PMCID: PMC10088798 DOI: 10.1007/s10462-023-10480-w]
Abstract
Scoliosis is a spinal abnormality with two types of curves (C-shaped or S-shaped). The vertebrae of the spine reach equilibrium at different times, which makes it challenging to detect the type of curve. In addition, it may be challenging to detect curvatures due to observer bias and image quality. This paper aims to evaluate spinal deformity by automatically classifying the type of spine curvature. Automatic spinal curvature classification is performed using SVM and KNN algorithms, and pre-trained Xception and MobileNetV2 networks with an SVM as the final classification layer to avoid vanishing gradients. Different feature extraction methods were used to investigate the SVM and KNN machine learning methods in detecting the curvature type. Features are extracted from representations of the radiographic images, which fall into two groups: (i) low-level image representation techniques such as texture features and (ii) local patch-based representations such as Bag of Words (BoW). Such features are then classified by SVM and KNN. The feature extraction process is automated in the pre-trained deep networks. In this study, 1000 anterior-posterior (AP) radiographic images of the spine were collected as a private dataset from Shafa Hospital, Tehran, Iran. Transfer learning was used due to the relatively small private dataset of anterior-posterior radiology images of the spine. Based on the results of these experiments, pre-trained deep networks were found to be approximately 10% more accurate than classical methods in classifying whether the spinal curvature is C-shaped or S-shaped. As a result of automatic feature extraction, the pre-trained Xception and MobileNetV2 networks with an SVM as the final classification layer performed better than the classical machine learning methods for classification of spinal curvature types.
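The Bag-of-Words representation mentioned above can be sketched as follows; the scalar patch features and two-centroid codebook are illustrative assumptions (real BoW uses clustered multi-dimensional descriptor vectors):

```python
def assign_word(patch_feature, codebook):
    """Index of the nearest codebook centroid (scalar features for brevity)."""
    return min(range(len(codebook)),
               key=lambda k: abs(patch_feature - codebook[k]))

def bow_histogram(patch_features, codebook):
    """Normalized Bag-of-Words histogram over one image's patches; this
    fixed-length vector is what an SVM or KNN classifier consumes."""
    hist = [0] * len(codebook)
    for f in patch_features:
        hist[assign_word(f, codebook)] += 1
    total = sum(hist)
    return [h / total for h in hist]
```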
Affiliation(s)
- Parisa Tavana
- Department of Computer Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Mahdi Akraminia
- Mechanical Rotary Equipment Research Department, Niroo Research Institute, Tehran, Iran
- Abbas Koochari
- Department of Computer Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Abolfazl Bagherifard
- Bone and Joint Reconstruction Research Center, Shafa Orthopedic Hospital, Iran University of Medical Sciences, Tehran, Iran
8
Bhati A, Gour N, Khanna P, Ojha A. Discriminative kernel convolution network for multi-label ophthalmic disease detection on imbalanced fundus image dataset. Comput Biol Med 2023; 153:106519. [PMID: 36608462 DOI: 10.1016/j.compbiomed.2022.106519]
Abstract
The presence and severity of eye disease can be recognized by investigating changes in retinal biological structures. Fundus examination is a diagnostic procedure to examine the biological structures and anomalies present in the eye. Ophthalmic diseases like glaucoma, diabetic retinopathy, and cataracts are the main causes of visual impairment worldwide. Ocular Disease Intelligent Recognition (ODIR-5K) is a benchmark structured fundus image dataset utilized by researchers for multi-label, multi-disease classification of fundus images. This work presents a Discriminative Kernel Convolution Network (DKCNet), which explores discriminative region-wise features without adding extra computational cost. DKCNet is composed of an attention block followed by a Squeeze-and-Excitation (SE) block. The attention block takes features from the backbone network and generates discriminative feature attention maps. The SE block takes the discriminative feature maps and improves channel interdependencies. Better performance of DKCNet is observed with an InceptionResNet backbone for multi-label classification of ODIR-5K fundus images, with 96.08 AUC, 94.28 F1-score, and a 0.81 kappa score. The proposed method splits the common target label for an eye pair based on the diagnostic keyword. Based on these labels, over-sampling and/or under-sampling is done to resolve the class imbalance. To check the bias of the proposed model towards its training data, the model trained on the ODIR dataset is tested on three publicly available benchmark datasets. It is observed that the proposed DKCNet also performs well on completely unseen fundus images.
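The SE block's channel reweighting and the over-sampling step can be sketched as follows. This collapses the published block's two fully connected layers into a single sigmoid gate, an assumption made purely to keep the sketch small, and the duplicate-once over-sampling is likewise a toy version:

```python
import math

def squeeze_excite(channels, w=1.0, b=0.0):
    """Simplified SE gating: 'squeeze' each channel to its global average,
    'excite' it through a sigmoid gate, then rescale the channel."""
    gated = []
    for ch in channels:
        s = sum(ch) / len(ch)                      # squeeze: global avg pool
        g = 1.0 / (1.0 + math.exp(-(w * s + b)))   # excite: sigmoid gate
        gated.append([g * x for x in ch])          # rescale by the gate
    return gated

def oversample(samples, labels, minority):
    """Naive over-sampling: duplicate minority-class samples once."""
    extra = [s for s, y in zip(samples, labels) if y == minority]
    return samples + extra, labels + [minority] * len(extra)
```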
Affiliation(s)
- Amit Bhati
- Department of Computer Science and Engineering, PDPM Indian Institute of Information Technology, Design and Manufacturing, Jabalpur 482005, India
- Neha Gour
- Department of Computer Science and Engineering, PDPM Indian Institute of Information Technology, Design and Manufacturing, Jabalpur 482005, India
- Pritee Khanna
- Department of Computer Science and Engineering, PDPM Indian Institute of Information Technology, Design and Manufacturing, Jabalpur 482005, India
- Aparajita Ojha
- Department of Computer Science and Engineering, PDPM Indian Institute of Information Technology, Design and Manufacturing, Jabalpur 482005, India
9
Shahriari MH, Sabbaghi H, Asadi F, Hosseini A, Khorrami Z. Artificial intelligence in screening, diagnosis, and classification of diabetic macular edema: A systematic review. Surv Ophthalmol 2023; 68:42-53. [PMID: 35970233 DOI: 10.1016/j.survophthal.2022.08.004]
Abstract
We review the application of artificial intelligence (AI) techniques in the screening, diagnosis, and classification of diabetic macular edema (DME) by searching six databases (PubMed, Scopus, Web of Science, ScienceDirect, IEEE, and ACM) from January 1, 2005 to July 4, 2021. A total of 879 articles were extracted, and by applying inclusion and exclusion criteria, 38 articles were selected for further evaluation. The methodological quality of the included studies was evaluated using the Quality Assessment of Diagnostic Accuracy Studies tool (QUADAS-2). We provide an overview of the current state of various AI techniques for DME screening, diagnosis, and classification using retinal imaging modalities such as optical coherence tomography (OCT) and color fundus photography (CFP). Based on our findings, deep learning models have an extraordinary capacity to provide an accurate and efficient system for DME screening and diagnosis, and applying them to these imaging modalities leads to a significant increase in sensitivity and specificity values. The use of AI-based decision support systems and applications in processing retinal images provided by OCT and CFP increases sensitivity and specificity in DME screening and detection.
Affiliation(s)
- Mohammad Hasan Shahriari
- Department of Health Information Technology and Management, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Hamideh Sabbaghi
- Ophthalmic Epidemiology Research Center, Research Institute for Ophthalmology and Vision Science, Shahid Beheshti University of Medical Sciences, Tehran, Iran; Department of Optometry, School of Rehabilitation, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Farkhondeh Asadi
- Department of Health Information Technology and Management, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Azamosadat Hosseini
- Department of Health Information Technology and Management, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Zahra Khorrami
- Ophthalmic Epidemiology Research Center, Research Institute for Ophthalmology and Vision Science, Shahid Beheshti University of Medical Sciences, Tehran, Iran
10
Pavithra K, Kumar P, Geetha M, Bhandary SV. Computer aided diagnosis of diabetic macular edema in retinal fundus and OCT images: A review. Biocybern Biomed Eng 2023. [DOI: 10.1016/j.bbe.2022.12.005]
11
Coan LJ, Williams BM, Krishna Adithya V, Upadhyaya S, Alkafri A, Czanner S, Venkatesh R, Willoughby CE, Kavitha S, Czanner G. Automatic detection of glaucoma via fundus imaging and artificial intelligence: A review. Surv Ophthalmol 2023; 68:17-41. [PMID: 35985360 DOI: 10.1016/j.survophthal.2022.08.005]
Abstract
Glaucoma is a leading cause of irreversible vision impairment globally, and cases are continuously rising worldwide. Early detection is crucial, allowing timely intervention that can prevent further visual field loss. To detect glaucoma, an examination of the optic nerve head via fundus imaging can be performed, at the center of which is the assessment of the optic cup and disc boundaries. Fundus imaging is noninvasive and low-cost; however, image examination relies on subjective, time-consuming, and costly expert assessments. A timely question to ask is: "Can artificial intelligence mimic glaucoma assessments made by experts?" Specifically, can artificial intelligence automatically find the boundaries of the optic cup and disc (providing a so-called segmented fundus image) and then use the segmented image to identify glaucoma with high accuracy? We conducted a comprehensive review of artificial intelligence-enabled glaucoma detection frameworks that produce and use segmented fundus images and summarized the advantages and disadvantages of such frameworks. We identified 36 relevant papers from 2011 to 2021 and 2 main approaches: 1) logical rule-based frameworks, based on a set of rules; and 2) machine learning/statistical modeling-based frameworks. We critically evaluated the state of the art of the 2 approaches, identified gaps in the literature, and pointed to areas for future research.
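A logical rule-based framework of the first kind reviewed above can be sketched as an area-based cup-to-disc ratio (CDR) computed from segmentation masks, plus a threshold rule; the flattened masks and the 0.6 cutoff are illustrative assumptions, not a clinical recommendation:

```python
def cup_to_disc_ratio(cup_mask, disc_mask):
    """Area-based CDR from binary segmentation masks (flattened 0/1 lists)."""
    cup_area = sum(cup_mask)
    disc_area = sum(disc_mask)
    return cup_area / disc_area

def rule_based_glaucoma_suspect(cdr, threshold=0.6):
    """Toy logical rule: flag eyes whose CDR exceeds a threshold.
    The 0.6 cutoff is illustrative only."""
    return cdr > threshold
```

Machine learning-based frameworks replace the hand-set threshold with a classifier trained on segmented images, which is the second approach the review contrasts.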
Affiliation(s)
- Lauren J Coan
- School of Computer Science and Mathematics, Liverpool John Moores University, UK.
- Bryan M Williams
- School of Computing and Communications, Lancaster University, UK
- Swati Upadhyaya
- Department of Glaucoma, Aravind Eye Hospital, Pondicherry, India
- Ala Alkafri
- School of Computing, Engineering & Digital Technologies, Teesside University, UK
- Silvester Czanner
- School of Computer Science and Mathematics, Liverpool John Moores University, UK; Faculty of Informatics and Information Technologies, Slovak University of Technology, Slovakia
- Rengaraj Venkatesh
- Department of Glaucoma and Chief Medical Officer, Aravind Eye Hospital, Pondicherry, India
- Gabriela Czanner
- School of Computer Science and Mathematics, Liverpool John Moores University, UK; Faculty of Informatics and Information Technologies, Slovak University of Technology, Slovakia
12
Dubey S, Dixit M. Recent developments on computer aided systems for diagnosis of diabetic retinopathy: a review. MULTIMEDIA TOOLS AND APPLICATIONS 2022; 82:14471-14525. [PMID: 36185322 PMCID: PMC9510498 DOI: 10.1007/s11042-022-13841-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/14/2021] [Revised: 04/27/2022] [Accepted: 09/06/2022] [Indexed: 06/16/2023]
Abstract
Diabetes is a long-term condition in which the pancreas stops producing insulin or the body's insulin is not utilised properly. One of its complications is diabetic retinopathy, the most prevalent eye disease caused by diabetes; if it remains unaddressed, diabetic retinopathy can affect any diabetic patient and become very serious, raising the chances of blindness. It is a chronic complication that affects up to 80% of patients who have had diabetes for more than ten years. Many researchers believe that if diabetic individuals are diagnosed early enough, they can be saved from the condition in 90% of cases. Diabetes damages the capillaries, the microscopic blood vessels in the retina, and blood vessel damage is usually noticeable on images. Therefore, in this study, several traditional as well as deep learning-based approaches to the classification and detection of diabetic retinopathy are reviewed, and the advantages of each approach over the others are described. Along with the approaches, the datasets and the evaluation metrics useful for DR detection and classification are also discussed. The main aim of this study is to make researchers aware of the different challenges that occur while detecting diabetic retinopathy using computer vision and deep learning techniques. The purpose of this review is therefore to sum up the major aspects of DR detection: lesion identification, classification and segmentation, security attacks on deep learning models, proper categorization of datasets, and evaluation metrics. As deep learning models are computationally expensive and prone to security attacks, future work should aim to develop refined, reliable, and robust models that address these commonly encountered issues.
Affiliation(s)
- Shradha Dubey
- Madhav Institute of Technology & Science (Department of Computer Science and Engineering), Gwalior, M.P. India
- Manish Dixit
- Madhav Institute of Technology & Science (Department of Computer Science and Engineering), Gwalior, M.P. India
13
14
Investigations of CNN for Medical Image Analysis for Illness Prediction. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:7968200. [PMID: 35676956 PMCID: PMC9168160 DOI: 10.1155/2022/7968200] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/19/2022] [Revised: 01/31/2022] [Accepted: 02/11/2022] [Indexed: 11/17/2022]
Abstract
Exudates are the most common sign of diabetic retinopathy, so alarms for early screening and diagnosis are suggested. The images taken by fundus cameras and high-definition ophthalmoscopes are riddled with flaws and noise, and overcoming noise difficulties in pursuit of automated/computer-aided diagnosis is always a challenge. The major objective of this approach is to obtain a better prediction rate in diabetic retinopathy analysis, with a focus on improving accuracy, sensitivity, specificity, and predictive value. The images are separated into relevant patches of various sizes and stacked for use as inputs to a CNN, which is then trained, tested, and validated. The article presents a mathematical approach to determining the prevalence, precise shape, color, and density of populations among image patches in order to discover whether an image shows signs of exudates, to support the diagnosis, and to suggest the risks that warrant early hospital treatment. The experimental analysis reports accuracy, sensitivity, specificity, and predictive value: 78% accuracy, 78.8% sensitivity, and 78.3% specificity are obtained, along with both positive and negative predictive values.
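The accuracy, sensitivity, specificity, and predictive values reported above all derive from the same four confusion-matrix counts. A small sketch of those standard formulas (the counts are made up for illustration, not taken from the paper):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard screening metrics from confusion-matrix counts."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Hypothetical counts for a 200-image test set:
m = diagnostic_metrics(tp=78, fp=22, tn=80, fn=20)
print(m["sensitivity"])  # 78 / 98 ≈ 0.796
```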
15
Wang TY, Chen YH, Chen JT, Liu JT, Wu PY, Chang SY, Lee YW, Su KC, Chen CL. Diabetic Macular Edema Detection Using End-to-End Deep Fusion Model and Anatomical Landmark Visualization on an Edge Computing Device. Front Med (Lausanne) 2022; 9:851644. [PMID: 35445051 PMCID: PMC9014123 DOI: 10.3389/fmed.2022.851644] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2022] [Accepted: 03/14/2022] [Indexed: 11/23/2022] Open
Abstract
Purpose Diabetic macular edema (DME) is a common cause of vision impairment and blindness in patients with diabetes. However, vision loss can be prevented by regular eye examinations during primary care. This study aimed to design an artificial intelligence (AI) system to facilitate ophthalmology referrals by physicians. Methods We developed an end-to-end deep fusion model for DME classification and hard exudate (HE) detection. Based on the architecture of the fusion model, we also applied a dual model, which included an independent classifier and object detector, to perform these two tasks separately. We used 35,001 annotated fundus images from three hospitals between 2007 and 2018 in Taiwan to create a private dataset. The private dataset, Messidor-1 and Messidor-2 were used to assess the performance of the fusion model for DME classification and HE detection. A second object detector was trained to identify anatomical landmarks (optic disc and macula). We integrated the fusion model and the anatomical landmark detector, and evaluated their performance on an edge device, a device with limited compute resources. Results For DME classification of our private testing dataset, Messidor-1 and Messidor-2, the area under the receiver operating characteristic curve (AUC) for the fusion model had values of 98.1, 95.2, and 95.8%, the sensitivities were 96.4, 88.7, and 87.4%, the specificities were 90.1, 90.2, and 90.2%, and the accuracies were 90.8, 90.0, and 89.9%, respectively. In addition, the AUC was not significantly different for the fusion and dual models for the three datasets (p = 0.743, 0.942, and 0.114, respectively). For HE detection, the fusion model achieved a sensitivity of 79.5%, a specificity of 87.7%, and an accuracy of 86.3% using our private testing dataset. The sensitivity of the fusion model was higher than that of the dual model (p = 0.048).
For optic disc and macula detection, the second object detector achieved accuracies of 98.4% (optic disc) and 99.3% (macula). The fusion model and the anatomical landmark detector can be deployed on a portable edge device. Conclusion This portable AI system exhibited excellent performance for the classification of DME, and the visualization of HE and anatomical locations. It facilitates interpretability and can serve as a clinical reference for physicians. Clinically, this system could be applied to diabetic eye screening to improve the interpretation of fundus imaging in patients with DME.
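The AUC values reported for DME classification can be read through the rank-based definition of AUC: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (the Mann-Whitney U formulation). A self-contained sketch with toy scores, not the paper's model outputs:

```python
import numpy as np

def roc_auc(scores: np.ndarray, labels: np.ndarray) -> float:
    """AUC as P(score of random positive > score of random negative);
    ties count as 0.5."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
        + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

labels = np.array([1, 1, 1, 0, 0, 0])
scores = np.array([0.9, 0.8, 0.4, 0.7, 0.3, 0.2])
print(roc_auc(scores, labels))  # 8 wins out of 9 pairs ≈ 0.889
```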
Affiliation(s)
- Ting-Yuan Wang
- Information and Communications Research Laboratories, Industrial Technology Research Institute, Hsinchu, Taiwan
- Yi-Hao Chen
- Department of Ophthalmology, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan
- Jiann-Torng Chen
- Department of Ophthalmology, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan
- Jung-Tzu Liu
- Information and Communications Research Laboratories, Industrial Technology Research Institute, Hsinchu, Taiwan
- Po-Yi Wu
- Information and Communications Research Laboratories, Industrial Technology Research Institute, Hsinchu, Taiwan
- Sung-Yen Chang
- Information and Communications Research Laboratories, Industrial Technology Research Institute, Hsinchu, Taiwan
- Ya-Wen Lee
- Information and Communications Research Laboratories, Industrial Technology Research Institute, Hsinchu, Taiwan
- Kuo-Chen Su
- Department of Optometry, Chung Shan Medical University, Taichung, Taiwan
- Ching-Long Chen
- Department of Ophthalmology, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan
16
GLCM and statistical features extraction technique with Extra-Tree Classifier in Macular Oedema risk diagnosis. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103471] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/24/2023]
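As a rough sketch of the kind of pipeline the title above describes, grey-level co-occurrence matrix (GLCM) texture features can be fed to an Extra-Trees classifier. The hand-rolled GLCM below (horizontal neighbour at distance 1, 8 grey levels, three Haralick-style properties) and the random toy patches are illustrative assumptions, not the paper's implementation; scikit-learn is assumed to be available:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

def glcm_features(img: np.ndarray, levels: int = 8) -> np.ndarray:
    """Normalised GLCM (horizontal neighbour, distance 1) reduced to
    contrast, energy, and homogeneity."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    p = glcm / glcm.sum()
    i, j = np.indices(p.shape)
    contrast = (p * (i - j) ** 2).sum()
    energy = (p ** 2).sum()
    homogeneity = (p / (1 + np.abs(i - j))).sum()
    return np.array([contrast, energy, homogeneity])

# Hypothetical training setup: texture features from labelled patches.
rng = np.random.default_rng(0)
smooth = [rng.integers(100, 110, (16, 16)) for _ in range(20)]  # low contrast
noisy = [rng.integers(0, 255, (16, 16)) for _ in range(20)]     # high contrast
X = np.array([glcm_features(p) for p in smooth + noisy])
y = np.array([0] * 20 + [1] * 20)
clf = ExtraTreesClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.score(X, y))  # training accuracy on this well-separated toy data
```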
17
Nakayama LF, Ribeiro LZ, Gonçalves MB, Ferraz DA, Dos Santos HNV, Malerbi FK, Morales PH, Maia M, Regatieri CVS, Mattos RB. Diabetic retinopathy classification for supervised machine learning algorithms. Int J Retina Vitreous 2022; 8:1. [PMID: 34980281 PMCID: PMC8722080 DOI: 10.1186/s40942-021-00352-2] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2021] [Accepted: 12/17/2021] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND Artificial intelligence and automated technology were first reported more than 70 years ago and nowadays provide unprecedented diagnostic accuracy, screening capacity, risk stratification, and workflow optimization. Diabetic retinopathy is an important cause of preventable blindness worldwide, and artificial intelligence technology provides early diagnosis, monitoring, and treatment guidance. High-quality exams are fundamental in supervised artificial intelligence algorithms, but the lack of ground-truth standards in retinal exam datasets is a problem. MAIN BODY In this article, the ETDRS, NHS, ICDR, and SDGS diabetic retinopathy grading systems and manual annotation are described and compared across publicly available datasets. The variety of DR labeling systems creates a fundamental problem for AI datasets. Possible solutions are standardization of DR classification and direct identification of retinal findings. CONCLUSION Reliable, trustworthy labeling methods need to be considered when building future datasets.
Affiliation(s)
- Luis Filipe Nakayama
- Physician, Department of Ophthalmology, Universidade Federal de São Paulo - EPM, Botucatu Street, 821, Vila Clementino, São Paulo, SP, 04023-062, Brazil.
- Lucas Zago Ribeiro
- Physician, Department of Ophthalmology, Universidade Federal de São Paulo - EPM, Botucatu Street, 821, Vila Clementino, São Paulo, SP, 04023-062, Brazil
- Mariana Batista Gonçalves
- Physician, Department of Ophthalmology, Universidade Federal de São Paulo - EPM, Botucatu Street, 821, Vila Clementino, São Paulo, SP, 04023-062, Brazil; Instituto Paulista de Estudos e Pesquisas em Oftalmologia, IPEPO, Vision Institute, São Paulo, SP, Brazil; NIHR Biomedical Research Centre for Ophthalmology, Moorfield Eye Hospital, NHS Foundation Trust, and UCL Institute of Ophthalmology, London, UK
- Daniel A Ferraz
- Physician, Department of Ophthalmology, Universidade Federal de São Paulo - EPM, Botucatu Street, 821, Vila Clementino, São Paulo, SP, 04023-062, Brazil; Instituto Paulista de Estudos e Pesquisas em Oftalmologia, IPEPO, Vision Institute, São Paulo, SP, Brazil; NIHR Biomedical Research Centre for Ophthalmology, Moorfield Eye Hospital, NHS Foundation Trust, and UCL Institute of Ophthalmology, London, UK
- Helen Nazareth Veloso Dos Santos
- Physician, Department of Ophthalmology, Universidade Federal de São Paulo - EPM, Botucatu Street, 821, Vila Clementino, São Paulo, SP, 04023-062, Brazil
- Fernando Korn Malerbi
- Physician, Department of Ophthalmology, Universidade Federal de São Paulo - EPM, Botucatu Street, 821, Vila Clementino, São Paulo, SP, 04023-062, Brazil
- Paulo Henrique Morales
- Physician, Department of Ophthalmology, Universidade Federal de São Paulo - EPM, Botucatu Street, 821, Vila Clementino, São Paulo, SP, 04023-062, Brazil; Instituto Paulista de Estudos e Pesquisas em Oftalmologia, IPEPO, Vision Institute, São Paulo, SP, Brazil
- Mauricio Maia
- Physician, Department of Ophthalmology, Universidade Federal de São Paulo - EPM, Botucatu Street, 821, Vila Clementino, São Paulo, SP, 04023-062, Brazil
- Caio Vinicius Saito Regatieri
- Physician, Department of Ophthalmology, Universidade Federal de São Paulo - EPM, Botucatu Street, 821, Vila Clementino, São Paulo, SP, 04023-062, Brazil
- Rubens Belfort Mattos
- Physician, Department of Ophthalmology, Universidade Federal de São Paulo - EPM, Botucatu Street, 821, Vila Clementino, São Paulo, SP, 04023-062, Brazil; Instituto Paulista de Estudos e Pesquisas em Oftalmologia, IPEPO, Vision Institute, São Paulo, SP, Brazil
18
Huang X, Wang H, She C, Feng J, Liu X, Hu X, Chen L, Tao Y. Artificial intelligence promotes the diagnosis and screening of diabetic retinopathy. Front Endocrinol (Lausanne) 2022; 13:946915. [PMID: 36246896 PMCID: PMC9559815 DOI: 10.3389/fendo.2022.946915] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/18/2022] [Accepted: 09/12/2022] [Indexed: 11/13/2022] Open
Abstract
Deep learning has evolved into a new form of machine learning technology classified under artificial intelligence (AI), which has substantial potential for large-scale healthcare screening and may allow the determination of the most appropriate specific treatment for individual patients. Recent developments in diagnostic technologies have facilitated studies on retinal conditions and ocular disease in metabolism and endocrinology. Globally, diabetic retinopathy (DR) is regarded as a major cause of vision loss. Deep learning systems are effective and accurate in the detection of DR from digital fundus photographs or optical coherence tomography. Thus, using AI techniques, systems with high accuracy and efficiency can be developed for diagnosing and screening DR at an early stage and without the resources that are only accessible in special clinics. Deep learning enables early diagnosis with high specificity and sensitivity, making decisions based on minimally handcrafted features and paving the way for real-time monitoring of personalized DR progression and timely ophthalmic or endocrine therapies. This review discusses cutting-edge AI algorithms, automated systems for DR stage grading and feature segmentation, the prediction of DR outcomes and therapeutics, and the ophthalmic indications of other systemic diseases revealed by AI.
Affiliation(s)
- Xuan Huang
- Department of Ophthalmology, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Medical Research Center, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Hui Wang
- Department of Ophthalmology, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Chongyang She
- Department of Ophthalmology, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Jing Feng
- Department of Ophthalmology, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Xuhui Liu
- Department of Ophthalmology, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Xiaofeng Hu
- Department of Ophthalmology, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Li Chen
- Department of Ophthalmology, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Yong Tao
- Department of Ophthalmology, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- *Correspondence: Yong Tao,
19
Wang R, Zuo G, Li K, Li W, Xuan Z, Han Y, Yang W. Systematic bibliometric and visualized analysis of research hotspots and trends on the application of artificial intelligence in diabetic retinopathy. Front Endocrinol (Lausanne) 2022; 13:1036426. [PMID: 36387891 PMCID: PMC9659570 DOI: 10.3389/fendo.2022.1036426] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/04/2022] [Accepted: 10/17/2022] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND Artificial intelligence (AI), which has been used to diagnose diabetic retinopathy (DR), may impact future medical and ophthalmic practices. Therefore, this study explored AI's general applications and research frontiers in the detection and gradation of DR. METHODS Citation data were obtained from the Web of Science Core Collection database (WoSCC) to assess the application of AI in diagnosing DR in the literature published from January 1, 2012, to June 30, 2022. These data were processed by CiteSpace 6.1.R3 software. RESULTS Overall, 858 publications from 77 countries and regions were examined, with the United States considered the leading country in this domain. The largest cluster labeled "automated detection" was employed in the generating stage from 2007 to 2014. The burst keywords from 2020 to 2022 were artificial intelligence and transfer learning. CONCLUSION Initial research focused on the study of intelligent algorithms used to localize or recognize lesions on fundus images to assist in diagnosing DR. Presently, the focus of research has changed from upgrading the accuracy and efficiency of DR lesion detection and classification to research on DR diagnostic systems. However, further studies on DR and computer engineering are required.
Affiliation(s)
- Ruoyu Wang
- The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
- Guangxi Zuo
- The First School of Clinical Medicine, Nanjing Medical University, Nanjing, China
- Kunke Li
- Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Wangting Li
- Shenzhen Eye Hospital, Jinan University, Shenzhen, China
- Zhiqiang Xuan
- Institute of Occupational Health and Radiation Protection, Zhejiang Provincial Center for Disease Control and Prevention, Hangzhou, China
- *Correspondence: Zhiqiang Xuan, ; Yongzhao Han, ; Weihua Yang,
- Yongzhao Han
- Affiliated Jiangning Hospital, Nanjing Medical University, Nanjing, China
- Weihua Yang
- Shenzhen Eye Hospital, Jinan University, Shenzhen, China
20
The Determination of Type 2 Diabetes Mellitus's Impact on the Density of Retinal Blood Vessels and the Choriocapillaris: Optical Coherence Tomography Angiography Study. J Ophthalmol 2021; 2021:7043251. [PMID: 34853704 PMCID: PMC8629665 DOI: 10.1155/2021/7043251] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2021] [Accepted: 11/10/2021] [Indexed: 11/17/2022] Open
Abstract
Optical coherence tomography angiography (OCTA) was used to analyze alterations in the vessel density (VD) of the retinal blood vessels and the choriocapillaris in patients suffering from type 2 diabetes mellitus (T2DM). In this observational prospective study, 166 eyes of 83 patients with T2DM and without diabetic retinopathy (43 men and 40 women, mean age 58.59 ± 14.04 years) were examined. The control group (CG) consisted of 66 eyes of 33 healthy subjects (15 male and 18 female, mean age 55.12 ± 12.70 years). The VD measurement regions included the deep capillary plexus (DCP), the superficial capillary plexus (SCP), and the choriocapillaris. The results indicate considerable differences in the VD of the DCP and SCP between the control group and the study group (p < 0.001). There was also a statistically significant reduction in the VD of the choriocapillaris in the study group compared with the control group (p < 0.001). Furthermore, patients with T2DM showed significantly decreased VD relative to controls in different macular regions, and macular thickness in several regions was significantly lower in the study group than in the control group. OCTA thus provided relevant information about the vascular changes occurring in T2DM patients, assessed through quantitative analysis of blood flow in the retina and choriocapillaris.
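Group comparisons of the kind reported above are commonly made with a two-sample t-test. A sketch with simulated vessel-density values (the means, spreads, sample sizes, and the use of Welch's test are assumptions for illustration, not the paper's data or analysis; SciPy is assumed available):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
# Hypothetical vessel-density readings (%) for the superficial capillary plexus:
controls = rng.normal(loc=48.0, scale=2.0, size=33)  # healthy eyes
t2dm = rng.normal(loc=45.0, scale=2.5, size=83)      # diabetic eyes, lower VD

# Welch's t-test (unequal variances) comparing the two groups.
t_stat, p_value = ttest_ind(controls, t2dm, equal_var=False)
print(p_value < 0.001)  # True for these clearly separated simulated groups
```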
21
Lakshminarayanan V, Kheradfallah H, Sarkar A, Jothi Balaji J. Automated Detection and Diagnosis of Diabetic Retinopathy: A Comprehensive Survey. J Imaging 2021; 7:165. [PMID: 34460801 PMCID: PMC8468161 DOI: 10.3390/jimaging7090165] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2021] [Revised: 08/23/2021] [Accepted: 08/24/2021] [Indexed: 12/16/2022] Open
Abstract
Diabetic Retinopathy (DR) is a leading cause of vision loss in the world. In the past few years, artificial intelligence (AI) based approaches have been used to detect and grade DR. Early detection enables appropriate treatment and thus prevents vision loss. For this purpose, both fundus and optical coherence tomography (OCT) images are used to image the retina. Next, Deep-learning (DL)-/machine-learning (ML)-based approaches make it possible to extract features from the images and to detect the presence of DR, grade its severity and segment associated lesions. This review covers the literature dealing with AI approaches to DR such as ML and DL in classification and segmentation that have been published in the open literature within six years (2016-2021). In addition, a comprehensive list of available DR datasets is reported. This list was constructed using both the PICO (P-Patient, I-Intervention, C-Control, O-Outcome) and Preferred Reporting Items for Systematic Review and Meta-analysis (PRISMA) 2009 search strategies. We summarize a total of 114 published articles which conformed to the scope of the review. In addition, a list of 43 major datasets is presented.
Affiliation(s)
- Vasudevan Lakshminarayanan
- Theoretical and Experimental Epistemology Lab, School of Optometry and Vision Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Hoda Kheradfallah
- Theoretical and Experimental Epistemology Lab, School of Optometry and Vision Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Arya Sarkar
- Department of Computer Engineering, University of Engineering and Management, Kolkata 700 156, India
22
Cen LP, Ji J, Lin JW, Ju ST, Lin HJ, Li TP, Wang Y, Yang JF, Liu YF, Tan S, Tan L, Li D, Wang Y, Zheng D, Xiong Y, Wu H, Jiang J, Wu Z, Huang D, Shi T, Chen B, Yang J, Zhang X, Luo L, Huang C, Zhang G, Huang Y, Ng TK, Chen H, Chen W, Pang CP, Zhang M. Automatic detection of 39 fundus diseases and conditions in retinal photographs using deep neural networks. Nat Commun 2021; 12:4828. [PMID: 34376678 PMCID: PMC8355164 DOI: 10.1038/s41467-021-25138-w] [Citation(s) in RCA: 69] [Impact Index Per Article: 23.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2020] [Accepted: 07/22/2021] [Indexed: 02/05/2023] Open
Abstract
Retinal fundus diseases can lead to irreversible visual impairment without timely diagnosis and appropriate treatment. Single-disease deep learning algorithms have been developed for the detection of diabetic retinopathy, age-related macular degeneration, and glaucoma. Here, we developed a deep learning platform (DLP) capable of detecting multiple common referable fundus diseases and conditions (39 classes) by using 249,620 fundus images marked with 275,543 labels from heterogeneous sources. Our DLP achieved a frequency-weighted average F1 score of 0.923, sensitivity of 0.978, specificity of 0.996 and area under the receiver operating characteristic curve (AUC) of 0.9984 for multi-label classification in the primary test dataset and reached the average level of retina specialists. External multihospital testing, public data testing, and a tele-reading application also showed high efficiency in the detection of multiple retinal diseases and conditions. These results indicate that our DLP can be applied for retinal fundus disease triage, especially in remote areas around the world.
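The frequency-weighted average F1 reported above weights each label's F1 score by its support (how often the label occurs). A small NumPy sketch of that metric for multi-label indicator matrices (the toy matrices are illustrative, not the platform's outputs):

```python
import numpy as np

def weighted_f1(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Frequency-weighted average F1 for multi-label predictions.
    y_true, y_pred: (n_samples, n_labels) binary indicator matrices."""
    f1s, support = [], y_true.sum(axis=0)
    for k in range(y_true.shape[1]):
        tp = np.sum((y_true[:, k] == 1) & (y_pred[:, k] == 1))
        fp = np.sum((y_true[:, k] == 0) & (y_pred[:, k] == 1))
        fn = np.sum((y_true[:, k] == 1) & (y_pred[:, k] == 0))
        f1s.append(2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 0.0)
    return float(np.average(f1s, weights=support))

y_true = np.array([[1, 0], [1, 1], [0, 1], [1, 0]])
y_pred = np.array([[1, 0], [1, 0], [0, 1], [0, 0]])
print(weighted_f1(y_true, y_pred))  # (3 * 0.8 + 2 * 2/3) / 5 ≈ 0.747
```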
Affiliation(s)
- Ling-Ping Cen
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Jie Ji
- Network & Information Centre, Shantou University, Shantou, Guangdong, China
- Shantou University Medical College, Shantou, Guangdong, China
- XuanShi Med Tech (Shanghai) Company Limited, Shanghai, China
- Jian-Wei Lin
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Si-Tong Ju
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Hong-Jie Lin
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Tai-Ping Li
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Yun Wang
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Jian-Feng Yang
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Yu-Fen Liu
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Shaoying Tan
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Li Tan
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Dongjie Li
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Yifan Wang
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Dezhi Zheng
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Yongqun Xiong
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Hanfu Wu
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Jingjing Jiang
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Zhenggen Wu
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Dingguo Huang
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Tingkun Shi
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Binyao Chen
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Jianling Yang
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Xiaoling Zhang
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Li Luo
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Chukai Huang
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Guihua Zhang
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Yuqiang Huang
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Tsz Kin Ng
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Shantou University Medical College, Shantou, Guangdong, China
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Shatin, Hong Kong
- Haoyu Chen
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Weiqi Chen
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Chi Pui Pang
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Shatin, Hong Kong
- Mingzhi Zhang
- Joint Shantou International Eye Centre of Shantou University and The Chinese University of Hong Kong, Shantou, Guangdong, China.
23
Kurilová V, Goga J, Oravec M, Pavlovičová J, Kajan S. Support vector machine and deep-learning object detection for localisation of hard exudates. Sci Rep 2021; 11:16045. [PMID: 34362989 PMCID: PMC8346563 DOI: 10.1038/s41598-021-95519-0] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2020] [Accepted: 07/26/2021] [Indexed: 02/08/2023] Open
Abstract
Hard exudates are one of the main clinical findings in the retinal images of patients with diabetic retinopathy. Detecting them early significantly impacts the treatment of underlying diseases; therefore, there is a need for automated systems with high reliability. We propose a novel method for identifying and localising hard exudates in retinal images. To achieve fast image pre-scanning, a support vector machine (SVM) classifier was combined with a faster region-based convolutional neural network (faster R-CNN) object detector for the localisation of exudates. Rapid pre-scanning filtered out exudate-free samples using a feature vector extracted from the pre-trained ResNet-50 network. Subsequently, the remaining samples were processed using a faster R-CNN detector for detailed analysis. When evaluating all the exudates as individual objects, the SVM classifier reduced the false positive rate by 29.7% and marginally increased the false negative rate by 16.2%. When evaluating all the images, we recorded a 50% reduction in the false positive rate, without any decrease in the number of false negatives. The interim results suggested that pre-scanning the samples using the SVM prior to implementing the deep-network object detector could simultaneously improve and speed up the current hard exudates detection method, especially when there is a paucity of training data.
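The two-stage design described above, a cheap SVM pre-scan that filters samples before the expensive detector runs, can be sketched as a cascade. Here random Gaussian blobs stand in for the ResNet-50 feature vectors and a stub replaces the faster R-CNN detector; scikit-learn is assumed available, and all data and parameters are illustrative:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Stand-ins for ResNet-50 feature vectors of image samples:
X_clean = rng.normal(0.0, 1.0, (40, 16))    # exudate-free samples
X_exudate = rng.normal(2.0, 1.0, (40, 16))  # samples containing exudates
X = np.vstack([X_clean, X_exudate])
y = np.array([0] * 40 + [1] * 40)

prescan = SVC(kernel="linear").fit(X, y)  # fast pre-scanning classifier

def detect_exudates(features):
    # Placeholder for the expensive object detector (faster R-CNN in the paper).
    return "run detector"

detector_calls = 0
for feats in X:  # cascade: the detector runs only on SVM-positive samples
    if prescan.predict(feats[None, :])[0] == 1:
        detect_exudates(feats)
        detector_calls += 1
print(detector_calls)  # roughly half the samples reach the detector
```

The speed-up comes from skipping the detector on every sample the pre-scan rejects, at the cost of whatever false negatives the SVM introduces.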
Affiliation(s)
- Veronika Kurilová, Faculty of Electrical Engineering and Information Technology, Slovak University of Technology, Ilkovičova 3, 812 19, Bratislava, Slovakia
- Jozef Goga, Faculty of Electrical Engineering and Information Technology, Slovak University of Technology, Ilkovičova 3, 812 19, Bratislava, Slovakia
- Miloš Oravec, Faculty of Electrical Engineering and Information Technology, Slovak University of Technology, Ilkovičova 3, 812 19, Bratislava, Slovakia
- Jarmila Pavlovičová, Faculty of Electrical Engineering and Information Technology, Slovak University of Technology, Ilkovičova 3, 812 19, Bratislava, Slovakia
- Slavomír Kajan, Faculty of Electrical Engineering and Information Technology, Slovak University of Technology, Ilkovičova 3, 812 19, Bratislava, Slovakia
24
Gupta D, Choudhury A, Gupta U, Singh P, Prasad M. Computational approach to clinical diagnosis of diabetes disease: a comparative study. Multimedia Tools and Applications 2021; 80:30091-30116. [DOI: 10.1007/s11042-020-10242-8]
25

26
Wang YL, Yang JY, Yang JY, Zhao XY, Chen YX, Yu WH. Progress of artificial intelligence in diabetic retinopathy screening. Diabetes Metab Res Rev 2021; 37:e3414. [PMID: 33010796; DOI: 10.1002/dmrr.3414]
Abstract
Diabetic retinopathy (DR) is one of the leading causes of blindness worldwide, and the limited availability of qualified ophthalmologists restricts its early diagnosis. Over the past few years, artificial intelligence technology has developed rapidly and has been applied to DR screening. This emerging technology supports DR screening and improves the identification of DR lesions with high sensitivity and specificity. This review aims to summarize progress on automatic detection and classification models for the diagnosis of DR.
Affiliation(s)
- Yue-Lin Wang, Department of Ophthalmology, Peking Union Medical College Hospital & Chinese Academy of Medical Sciences, Beijing, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
- Jing-Yun Yang, Division of Statistics, School of Economics & Research Center of Financial Information, Shanghai University, Shanghai, China; Rush Alzheimer's Disease Center & Department of Neurological Sciences, Rush University Medical Center, Chicago, Illinois, USA
- Jing-Yuan Yang, Department of Ophthalmology, Peking Union Medical College Hospital & Chinese Academy of Medical Sciences, Beijing, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
- Xin-Yu Zhao, Department of Ophthalmology, Peking Union Medical College Hospital & Chinese Academy of Medical Sciences, Beijing, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
- You-Xin Chen, Department of Ophthalmology, Peking Union Medical College Hospital & Chinese Academy of Medical Sciences, Beijing, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
- Wei-Hong Yu, Department of Ophthalmology, Peking Union Medical College Hospital & Chinese Academy of Medical Sciences, Beijing, China; Key Laboratory of Ocular Fundus Diseases, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
27
A review of diabetic retinopathy: Datasets, approaches, evaluation metrics and future trends. Journal of King Saud University - Computer and Information Sciences 2021. [DOI: 10.1016/j.jksuci.2021.06.006]
28
Chen M, Jin K, You K, Xu Y, Wang Y, Yip CC, Wu J, Ye J. Automatic detection of leakage point in central serous chorioretinopathy of fundus fluorescein angiography based on time sequence deep learning. Graefes Arch Clin Exp Ophthalmol 2021; 259:2401-2411. [PMID: 33846835; DOI: 10.1007/s00417-021-05151-x]
Abstract
PURPOSE To detect the leakage points of central serous chorioretinopathy (CSC) automatically from dynamic images of fundus fluorescein angiography (FFA) using a deep learning algorithm (DLA). METHODS The study included 2104 FFA images from 291 FFA sequences of 291 eyes (137 right eyes and 154 left eyes) from 262 patients. The leakage points were segmented with an attention gated network (AGN). The optic disk (OD) and macula region were segmented simultaneously using a U-net. To reduce the number of false positives based on time sequence, the leakage points were matched according to their positions in relation to the OD and macula. RESULTS With the AGN alone, the number of cases whose detection results perfectly matched the ground truth was only 37 out of 61 cases (60.7%) in the test set. The Dice coefficient at the lesion level was 0.811. Using an elimination procedure to remove false positives, the number of accurate detection cases increased to 57 (93.4%). The Dice coefficient at the lesion level also improved to 0.949. CONCLUSIONS Using DLA, the CSC leakage points in FFA can be identified reproducibly and accurately with a good match to the ground truth. This novel finding may pave the way for potential application of artificial intelligence to guide laser therapy.
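The position-matching step described above (keeping only candidates that persist across frames, in coordinates anchored to the OD and macula) can be sketched as follows. This is a hypothetical illustration: the function names, tolerance, and coordinates are invented, not taken from the paper.

```python
def normalise(point, od, macula):
    # Express a candidate point in a frame-invariant coordinate
    # system: origin at the optic disc (OD), unit length = the
    # OD-to-macula distance.
    ax, ay = macula[0] - od[0], macula[1] - od[1]
    scale = (ax * ax + ay * ay) ** 0.5
    return ((point[0] - od[0]) / scale, (point[1] - od[1]) / scale)

def persistent(frames, tol=0.1):
    # Keep first-frame candidates that re-appear (within tol) in
    # every later frame; transient detections are discarded as
    # likely false positives.
    first, rest = frames[0], frames[1:]
    return [c for c in first
            if all(any(abs(c[0] - d[0]) < tol and abs(c[1] - d[1]) < tol
                       for d in f)
                   for f in rest)]
```

A leakage point is a physical feature of the retina, so its normalised position should be stable across the FFA sequence even when the camera shifts, while spurious detections tend not to recur at the same place.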
Affiliation(s)
- Menglu Chen, Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, 310009, China
- Kai Jin, Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, 310009, China
- Kun You, Hangzhou Truth Medical Technology Ltd, Hangzhou, 311215, China
- Yufeng Xu, Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, 310009, China
- Yao Wang, Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, 310009, China
- Chee-Chew Yip, Department of Ophthalmology, Khoo Teck Puat Hospital, Yishun Central, Singapore
- Jian Wu, College of Computer Science and Technology, Zhejiang University, Hangzhou, 310027, China
- Juan Ye, Department of Ophthalmology, the Second Affiliated Hospital of Zhejiang University, College of Medicine, Hangzhou, 310009, China
29
Li T, Bo W, Hu C, Kang H, Liu H, Wang K, Fu H. Applications of deep learning in fundus images: A review. Med Image Anal 2021; 69:101971. [PMID: 33524824; DOI: 10.1016/j.media.2021.101971]
Abstract
The use of fundus images for the early screening of eye diseases is of great clinical importance. Due to its powerful performance, deep learning is becoming increasingly popular in related applications, such as lesion segmentation, biomarker segmentation, disease diagnosis and image synthesis. It is therefore necessary to summarize recent developments in deep learning for fundus images in a review. In this review, we introduce 143 application papers with a carefully designed hierarchy. Moreover, 33 publicly available datasets are presented. Summaries and analyses are provided for each task. Finally, limitations common to all tasks are revealed and possible solutions are given. We will also release and regularly update the state-of-the-art results and newly released datasets at https://github.com/nkicsl/Fundus_Review to adapt to the rapid development of this field.
Affiliation(s)
- Tao Li, College of Computer Science, Nankai University, Tianjin 300350, China
- Wang Bo, College of Computer Science, Nankai University, Tianjin 300350, China
- Chunyu Hu, College of Computer Science, Nankai University, Tianjin 300350, China
- Hong Kang, College of Computer Science, Nankai University, Tianjin 300350, China
- Hanruo Liu, Beijing Tongren Hospital, Capital Medical University, Beijing 100730, China
- Kai Wang, College of Computer Science, Nankai University, Tianjin 300350, China
- Huazhu Fu, Inception Institute of Artificial Intelligence (IIAI), Abu Dhabi, UAE
30
Xie R, Liu J, Cao R, Qiu CS, Duan J, Garibaldi J, Qiu G. End-to-End Fovea Localisation in Colour Fundus Images With a Hierarchical Deep Regression Network. IEEE Transactions on Medical Imaging 2021; 40:116-128. [PMID: 32915729; DOI: 10.1109/tmi.2020.3023254]
Abstract
Accurately locating the fovea is a prerequisite for developing computer aided diagnosis (CAD) of retinal diseases. In colour fundus images of the retina, the fovea is a fuzzy region lacking prominent visual features, and this makes it difficult to directly locate the fovea. While traditional methods rely on explicitly extracting image features from the surrounding structures such as the optic disc and various vessels to infer the position of the fovea, a deep-learning-based regression technique can implicitly model the relation between the fovea and other nearby anatomical structures to determine the location of the fovea in an end-to-end fashion. Although promising, using deep learning for fovea localisation also has many unsolved challenges. In this paper, we present a new end-to-end fovea localisation method based on a hierarchical coarse-to-fine deep regression neural network. The innovative features of the new method include a multi-scale feature fusion technique and a self-attention technique to exploit location, semantic, and contextual information in an integrated framework, a multi-field-of-view (multi-FOV) feature fusion technique for context-aware feature learning and a Gaussian-shift-cropping method for augmenting effective training data. We present extensive experimental results on two public databases and show that our new method achieved state-of-the-art performance. We also present a comprehensive ablation study and analysis to demonstrate the technical soundness and effectiveness of the overall framework and its various constituent components.
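The Gaussian-shift-cropping augmentation mentioned above can be sketched as sampling crop centres from a Gaussian around the annotated fovea. This is a rough illustration only: the function name, sigma, crop size, and boundary handling are assumptions, not the paper's exact recipe.

```python
import random

def gaussian_shift_crops(fovea, img_size, crop=128, sigma=20.0, n=10, seed=0):
    # Sample square crop windows whose centres are Gaussian-distributed
    # around the fovea, keeping only crops that lie fully inside the
    # image; each crop is a slightly shifted view of the fovea region.
    rng = random.Random(seed)
    half = crop // 2
    boxes = []
    while len(boxes) < n:
        cx = fovea[0] + rng.gauss(0.0, sigma)
        cy = fovea[1] + rng.gauss(0.0, sigma)
        x0, y0 = int(cx) - half, int(cy) - half
        if x0 >= 0 and y0 >= 0 and x0 + crop <= img_size[0] and y0 + crop <= img_size[1]:
            boxes.append((x0, y0, x0 + crop, y0 + crop))
    return boxes
```

Small Gaussian shifts keep the target inside most crops while varying its position, so a regression network sees the fovea at many offsets rather than always at the crop centre.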
31
Exudates as Landmarks Identified through FCM Clustering in Retinal Images. Applied Sciences 2020. [DOI: 10.3390/app11010142]
Abstract
The aim of this work was to develop a method for the automatic identification of exudates, using an unsupervised clustering approach. The ability to classify each pixel as belonging to an eventual exudate, as a warning of disease, allows for the tracking of a patient’s status through a noninvasive approach. In the field of diabetic retinopathy detection, we considered four public domain datasets (DIARETDB0/1, IDRID, and e-optha) as benchmarks. In order to refine the final results, a specialist ophthalmologist manually segmented a random selection of DIARETDB0/1 fundus images that presented exudates. An innovative pipeline of morphological procedures and fuzzy C-means clustering was integrated in order to extract exudates with a pixel-wise approach. Our methodology was optimized and verified, and the parameters were fine-tuned in order to define suitable values and produce a more accurate segmentation. The method was tested on 100 images, resulting in averages of sensitivity, specificity, and accuracy equal to 83.3%, 99.2%, and 99.1%, respectively.
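The fuzzy C-means step named in this abstract is a standard algorithm and can be shown on 1-D intensities. The sketch below is a generic textbook FCM, not the paper's pipeline; the data, fuzzifier, and iteration count are illustrative.

```python
def fuzzy_c_means(xs, c=2, m=2.0, iters=50):
    # Fuzzy C-means on 1-D intensities: every point gets a soft
    # membership in each cluster (pixel-wise soft assignment),
    # rather than a hard label.
    lo, hi = min(xs), max(xs)
    centers = [lo + (hi - lo) * k / (c - 1) for k in range(c)]
    u = []
    for _ in range(iters):
        u = []
        for x in xs:
            d = [abs(x - ck) + 1e-12 for ck in centers]
            # u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)); rows sum to 1
            u.append([1.0 / sum((d[k] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(c))
                      for k in range(c)])
        # centres are membership-weighted means (memberships raised to m)
        centers = [sum(u[i][k] ** m * xs[i] for i in range(len(xs))) /
                   sum(u[i][k] ** m for i in range(len(xs)))
                   for k in range(c)]
    return centers, u
```

For exudate segmentation, the brighter cluster's membership map can then be thresholded or refined morphologically, as the abstract describes.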
32
Lin L, Li M, Huang Y, Cheng P, Xia H, Wang K, Yuan J, Tang X. The SUSTech-SYSU dataset for automated exudate detection and diabetic retinopathy grading. Sci Data 2020; 7:409. [PMID: 33219237; PMCID: PMC7679367; DOI: 10.1038/s41597-020-00755-0]
Abstract
Automated detection of exudates from fundus images plays an important role in diabetic retinopathy (DR) screening and evaluation, for which supervised or semi-supervised learning methods are typically preferred. However, a potential limitation of supervised and semi-supervised learning based detection algorithms is that they depend substantially on the sample size of training data and the quality of annotations, which is the fundamental motivation of this work. In this study, we construct a dataset containing 1219 fundus images (from DR patients and healthy controls) with annotations of exudate lesions. In addition to exudate annotations, we also provide four additional labels for each image: left-versus-right eye label, DR grade (severity scale) from three different grading protocols, the bounding box of the optic disc (OD), and fovea location. This dataset provides a great opportunity to analyze the accuracy and reliability of different exudate detection, OD detection, fovea localization, and DR classification algorithms. Moreover, it will facilitate the development of such algorithms in the realm of supervised and semi-supervised learning.
Affiliation(s)
- Li Lin, Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, 518000, China; School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou, 510000, China
- Meng Li, State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, 510000, China
- Yijin Huang, Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, 518000, China
- Pujin Cheng, Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, 518000, China
- Honghui Xia, Department of Ophthalmology, Gaoyao People's Hospital, Zhaoqing, 526000, China
- Kai Wang, School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou, 510000, China
- Jin Yuan, State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Centre, Sun Yat-sen University, Guangzhou, 510000, China
- Xiaoying Tang, Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, 518000, China
33
Infrared retinal images for flashless detection of macular edema. Sci Rep 2020; 10:14384. [PMID: 32873818; PMCID: PMC7463268; DOI: 10.1038/s41598-020-71010-0]
Abstract
This study evaluates the use of infrared (IR) images of the retina, obtained without flashes of light, for machine-based detection of macular oedema (ME). A total of 41 images from 21 subjects (23 case images and 18 control images) were studied. Histogram and gray-level co-occurrence matrix (GLCM) parameters were extracted from the IR retinal images. The diagnostic performance of the histogram and GLCM parameters was calculated retrospectively based on the known labels of each image. The results from the one-way ANOVA indicated that there was a significant difference between ME eyes and the controls when using GLCM features, with the correlation feature having the highest area under the curve (AUC) (AZ) value. The performance of the proposed method was also evaluated using a support vector machine (SVM) classifier that gave sensitivity and specificity of 100%. This research shows that the texture of the IR images of the retina differs significantly between ME eyes and the controls and that it can be considered for machine-based detection of ME without requiring flashes of light.
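The GLCM correlation feature this abstract highlights is a standard Haralick texture measure and can be computed directly. The sketch below is a generic implementation on a tiny integer image, not the paper's code; the offset and gray-level count are illustrative.

```python
import math

def glcm(img, levels, dx=1, dy=0):
    # Normalised gray-level co-occurrence matrix for one pixel offset:
    # p[i][j] = probability that a pixel of level i has a neighbour
    # (at offset dx, dy) of level j.
    h, w = len(img), len(img[0])
    p = [[0.0] * levels for _ in range(levels)]
    n = 0
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                p[img[y][x]][img[y2][x2]] += 1
                n += 1
    return [[v / n for v in row] for row in p]

def glcm_correlation(p):
    # Haralick correlation: linear dependence between a pixel's
    # gray level and its neighbour's, in [-1, 1].
    L = len(p)
    pairs = [(i, j) for i in range(L) for j in range(L)]
    mu_i = sum(i * p[i][j] for i, j in pairs)
    mu_j = sum(j * p[i][j] for i, j in pairs)
    var_i = sum((i - mu_i) ** 2 * p[i][j] for i, j in pairs)
    var_j = sum((j - mu_j) ** 2 * p[i][j] for i, j in pairs)
    cov = sum((i - mu_i) * (j - mu_j) * p[i][j] for i, j in pairs)
    return cov / math.sqrt(var_i * var_j)
```

A smooth horizontal ramp gives correlation +1 (each pixel predicts its neighbour exactly), while a checkerboard gives -1; real retinal textures fall in between, which is what makes the feature discriminative.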
34

35
Rim TH, Soh ZD, Tham YC, Yang HHS, Lee G, Kim Y, Nusinovici S, Ting DSW, Wong TY, Cheng CY. Deep Learning for Automated Sorting of Retinal Photographs. Ophthalmol Retina 2020; 4:793-800. [PMID: 32362553; DOI: 10.1016/j.oret.2020.03.007]
Abstract
PURPOSE Though the domain of big data and artificial intelligence in health care continues to evolve, there is a lack of systematic methods to improve data quality and streamline the preparation process. To address this, we aimed to develop an automated sorting system (RetiSort) that accurately labels the type and laterality of retinal photographs. DESIGN Cross-sectional study. PARTICIPANTS RetiSort was developed with retinal photographs from the Singapore Epidemiology of Eye Diseases (SEED) study. METHODS The development of RetiSort was composed of 3 steps: 2 deep-learning (DL) algorithms and 1 rule-based classifier. For step 1, a DL algorithm was developed to locate the optic disc, the "landmark feature." For step 2, based on the location of the optic disc derived from step 1, a rule-based classifier was developed to sort retinal photographs into 3 types: macular-centered, optic disc-centered, or related to other fields. Step 2 concurrently distinguished laterality (i.e., the left or right eye) of macular-centered photographs. For step 3, an additional DL algorithm was developed to differentiate the laterality of disc-centered photographs. Via the 3 steps, RetiSort sorted and labeled retinal images into (1) right macular-centered, (2) left macular-centered, (3) right optic disc-centered, (4) left optic disc-centered, and (5) images relating to other fields. Subsequently, the accuracy of RetiSort was evaluated on 5000 randomly selected retinal images from SEED as well as on 3 publicly available image databases (DIARETDB0, HEI-MED, and Drishti-GS). The main outcome measure was the accuracy for sorting of retinal photographs. RESULTS RetiSort mislabeled 48 out of 5000 retinal images from SEED, representing an overall accuracy of 99.0% (95% confidence interval [CI], 98.7-99.3).
In external tests, RetiSort mislabeled 1, 0, and 2 images, respectively, from DIARETDB0, HEI-MED, and Drishti-GS, representing an accuracy of 99.2% (95% CI, 95.8-99.9), 100%, and 98.0% (95% CI, 93.1-99.8), respectively. Saliency maps consistently showed that the DL algorithm in step 3 required pixels in the central left lateral border and optic disc of optic disc-centered retinal photographs to differentiate the laterality. CONCLUSIONS RetiSort is a highly accurate automated sorting system. It can aid in data preparation and has practical applications in DL research that uses retinal photographs.
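The rule-based step 2 described above can be sketched in miniature. This is a hypothetical illustration, not RetiSort itself: the tolerance and function name are invented, and the laterality rule simply encodes the common fundus-photo convention that the optic disc lies nasal to the macula (right side of the image for a right eye, left side for a left eye).

```python
def sort_photo(disc_x, disc_y, width, height, centre_tol=0.15):
    # Offsets of the disc centre from the image centre, as a
    # fraction of image size.
    cx = disc_x / width - 0.5
    cy = disc_y / height - 0.5
    # Disc near the middle of the frame: a disc-centred photograph.
    if abs(cx) <= centre_tol and abs(cy) <= centre_tol:
        return "optic disc-centred"
    # Otherwise assume macular-centred; the side the disc sits on
    # gives the laterality under the usual nasal-disc convention.
    return "right macular-centred" if cx > 0 else "left macular-centred"
```

This is why a separate DL algorithm (step 3) is needed for disc-centred photographs: when the disc is in the middle, its position no longer reveals which eye was imaged.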
Affiliation(s)
- Tyler Hyungtaek Rim, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology & Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
- Zhi Da Soh, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Yih-Chung Tham, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology & Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
- Simon Nusinovici, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Daniel Shu Wei Ting, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology & Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
- Tien Yin Wong, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology & Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
- Ching-Yu Cheng, Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Ophthalmology & Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore; Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
36
Wang H, Yuan G, Zhao X, Peng L, Wang Z, He Y, Qu C, Peng Z. Hard exudate detection based on deep model learned information and multi-feature joint representation for diabetic retinopathy screening. Computer Methods and Programs in Biomedicine 2020; 191:105398. [PMID: 32092614; DOI: 10.1016/j.cmpb.2020.105398]
Abstract
BACKGROUND AND OBJECTIVE Diabetic retinopathy (DR), which is generally diagnosed by the presence of hemorrhages and hard exudates, is one of the most prevalent causes of visual impairment and blindness. Early detection of hard exudates (HEs) in color fundus photographs can help in preventing such destructive damage. However, this is a challenging task due to high intra-class diversity and high similarity with other structures in the fundus images. Most of the existing methods for detecting HEs are based on characterizing HEs using hand-crafted features (HCFs) alone, which cannot characterize HEs accurately. Deep learning methods are scarce in this domain because they require large-scale training sets, which are not generally available for most routine medical imaging research. METHODS To address these challenges, we propose a novel methodology for HE detection using a deep convolutional neural network (DCNN) and multi-feature joint representation. Specifically, we present a new optimized mathematical morphological approach that first segments HE candidates accurately. Then, each candidate is characterized using combined features based on deep features with HCFs incorporated, which is implemented by a ridge regression-based feature fusion. This method employs multi-space-based intensity features, geometric features, a gray-level co-occurrence matrix (GLCM)-based texture descriptor, and a gray-level size zone matrix (GLSZM)-based texture descriptor to construct HCFs, and a DCNN to automatically learn the deep information of HE. Finally, a random forest is employed to identify the true HEs among candidates. RESULTS The proposed method is evaluated on two benchmark databases. It obtains an F-score of 0.8929 with an area under curve (AUC) of 0.9644 on the e-optha database and an F-score of 0.9326 with an AUC of 0.9323 on the HEI-MED database. These results demonstrate that our approach outperforms state-of-the-art methods. Our model also proves to be suitable for clinical applications based on private clinical images from a local hospital. CONCLUSIONS This newly proposed method integrates traditional HCFs and deep features learned from a DCNN for detecting HEs. It achieves a new state of the art in both HE detection and DR screening. Furthermore, the proposed feature selection and fusion strategy reduces feature dimension and improves HE detection performance.
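Ridge regression-based fusion of two score streams, as named in this abstract, can be written in closed form for the two-weight case. This is a toy stand-in under assumed data, not the paper's fusion of full feature vectors: the function name, regularisation value, and inputs are invented.

```python
def ridge_fuse(deep, hcf, y, lam=0.1):
    # Closed-form ridge regression for two fused scores: solve
    # (X'X + lam*I) w = X'y, where the two columns of X are the
    # deep-feature score and the hand-crafted-feature score.
    a11 = sum(d * d for d in deep) + lam
    a22 = sum(h * h for h in hcf) + lam
    a12 = sum(d * h for d, h in zip(deep, hcf))
    b1 = sum(d * t for d, t in zip(deep, y))
    b2 = sum(h * t for h, t in zip(hcf, y))
    det = a11 * a22 - a12 * a12          # 2x2 determinant
    return ((a22 * b1 - a12 * b2) / det,  # weight on the deep score
            (a11 * b2 - a12 * b1) / det)  # weight on the HCF score
```

The regularisation term keeps the weights stable when the two score streams are highly correlated, which is the usual motivation for ridge rather than plain least squares in feature fusion.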
Affiliation(s)
- Hui Wang, School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China; Laboratory of Imaging Detection and Intelligent Perception, University of Electronic Science and Technology of China, Chengdu 611731, China
- Guohui Yuan, School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China; Laboratory of Imaging Detection and Intelligent Perception, University of Electronic Science and Technology of China, Chengdu 611731, China
- Xuegong Zhao, School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China; Laboratory of Imaging Detection and Intelligent Perception, University of Electronic Science and Technology of China, Chengdu 611731, China
- Lingbing Peng, School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
- Zhuoran Wang, School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China; Laboratory of Imaging Detection and Intelligent Perception, University of Electronic Science and Technology of China, Chengdu 611731, China
- Yanmin He, School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China; Laboratory of Imaging Detection and Intelligent Perception, University of Electronic Science and Technology of China, Chengdu 611731, China
- Chao Qu, Department of Ophthalmology, Sichuan Academy of Medical Sciences and Sichuan Provincial People's Hospital, Chengdu 610072, China
- Zhenming Peng, School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China; Laboratory of Imaging Detection and Intelligent Perception, University of Electronic Science and Technology of China, Chengdu 611731, China
37
Intelligent optic disc segmentation using improved particle swarm optimization and evolving ensemble models. Appl Soft Comput 2020. [DOI: 10.1016/j.asoc.2020.106328]
38
Stolte S, Fang R. A survey on medical image analysis in diabetic retinopathy. Med Image Anal 2020; 64:101742. [PMID: 32540699; DOI: 10.1016/j.media.2020.101742]
Abstract
Diabetic Retinopathy (DR) represents a highly-prevalent complication of diabetes in which individuals suffer from damage to the blood vessels in the retina. The disease manifests itself through lesion presence, starting with microaneurysms, at the nonproliferative stage before being characterized by neovascularization in the proliferative stage. Retinal specialists strive to detect DR early so that the disease can be treated before substantial, irreversible vision loss occurs. The level of DR severity indicates the extent of treatment necessary: vision loss may be preventable by effective diabetes management in mild (early) stages, rather than subjecting the patient to invasive laser surgery. Using artificial intelligence (AI), highly accurate and efficient systems can be developed to help assist medical professionals in screening and diagnosing DR earlier and without the full resources that are available in specialty clinics. In particular, deep learning facilitates diagnosis earlier and with higher sensitivity and specificity. Such systems make decisions based on minimally handcrafted features and pave the way for personalized therapies. Thus, this survey provides a comprehensive description of the current technology used in each step of DR diagnosis. First, it begins with an introduction to the disease and the current technologies and resources available in this space. It proceeds to discuss the frameworks that different teams have used to detect and classify DR. Ultimately, we conclude that deep learning systems offer revolutionary potential for DR identification and prevention of vision loss.
Affiliation(s)
- Skylar Stolte, J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, 1275 Center Drive, Biomedical Sciences Building JG56, P.O. Box 116131, Gainesville, FL 32611-6131, USA
- Ruogu Fang, J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, 1275 Center Drive, Biomedical Sciences Building JG56, P.O. Box 116131, Gainesville, FL 32611-6131, USA
39
Detection of Early Signs of Diabetic Retinopathy Based on Textural and Morphological Information in Fundus Images. Sensors 2020; 20:1005. [PMID: 32069912; PMCID: PMC7071097; DOI: 10.3390/s20041005]
Abstract
The number of blind people worldwide is estimated to exceed 40 million by 2025. It is therefore necessary to develop novel algorithms based on fundus image descriptors that allow the automatic classification of retinal tissue into healthy and pathological at early stages. In this paper, we focus on one of the most common pathologies in current society: diabetic retinopathy. The proposed method avoids the necessity of lesion segmentation or candidate map generation before the classification stage. Local binary patterns and granulometric profiles are locally computed to extract texture and morphological information from retinal images. Different combinations of this information feed classification algorithms to optimally discriminate bright and dark lesions from healthy tissues. Through several experiments, the ability of the proposed system to identify diabetic retinopathy signs is validated using different public databases with a large degree of variability and without image exclusion.
40
Cao L, Li H. Enhancement of blurry retinal image based on non-uniform contrast stretching and intensity transfer. Med Biol Eng Comput 2020; 58:483-496. [PMID: 31897799; DOI: 10.1007/s11517-019-02106-7]
Abstract
Proper contrast and sufficient illuminance are important for clearly identifying retinal structures, but the required quality cannot always be guaranteed, owing to factors such as the acquisition process and disease. To ensure the effectiveness of enhancement, two solutions are developed for blurry retinal images with sufficient and insufficient illuminance, respectively. The proposed contrast stretching and intensity transfer are the main steps in both solutions. The contrast stretching is based on base-intensity removal and non-uniform addition. We assume that a base intensity exists in an image, which mainly supports the basic illuminance but contributes little texture information. The base intensity is estimated by the constrained Gaussian function and then removed. A non-uniform addition using a compressed Gamma map is further developed to improve the contrast. Additionally, an effective intensity transfer strategy is introduced, which can provide the required illuminance for a single channel after contrast stretching. Color correction can be achieved if the intensity transfer is performed on all three channels. Results show that the proposed solutions effectively improve contrast and illuminance, yielding good visual perception for quality-degraded retinal images. (Graphical abstract: illustration of contrast stretching based on a single colour channel.)
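The base-intensity-removal idea in this abstract can be sketched on a single channel. This is an illustrative simplification, not the paper's method: the constrained-Gaussian estimate is replaced by a plain fraction of the channel minimum, and the non-uniform addition by a fixed gamma; all parameter values are invented.

```python
def enhance_channel(channel, base_fraction=0.6, gamma=0.8):
    # (1) Estimate a 'base intensity' that carries illumination but
    #     little texture (here, simply a fraction of the channel
    #     minimum) and subtract it from every pixel value in [0, 1].
    base = base_fraction * min(channel)
    residual = [v - base for v in channel]
    # (2) Non-uniform stretch: normalise the residual and apply a
    #     gamma < 1, which lifts dark mid-tones more than bright ones.
    top = max(residual)
    return [(r / top) ** gamma for r in residual]
```

Removing the base before stretching means the stretch acts on the texture-bearing part of the signal, which is why the combined operation widens the usable intensity range more than a plain linear rescale would.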
Affiliation(s)
- Lvchen Cao
- School of Information and Electronics, Beijing Institute of Technology, Beijing, 100081, China
- Huiqi Li
- School of Information and Electronics, Beijing Institute of Technology, Beijing, 100081, China.
|
41
|
Algorithms for Diagnosis of Diabetic Retinopathy and Diabetic Macula Edema- A Review. ADVANCES IN EXPERIMENTAL MEDICINE AND BIOLOGY 2020; 1307:357-373. [PMID: 32166636 DOI: 10.1007/5584_2020_499] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
The human eye is one of the most important organs in the human body, comprising the iris, pupil, sclera, cornea, lens, retina and optic nerve. Many important eye diseases, as well as systemic diseases, manifest themselves in the retina. The most widespread causes of blindness in the industrialized world are glaucoma, Age-Related Macular Degeneration (ARMD), Diabetic Retinopathy (DR) and Diabetic Macular Edema (DME). The development of a retinal image analysis system is a demanding research topic for early detection, progression analysis and diagnosis of eye diseases. Early diagnosis and treatment of retinal diseases are essential to prevent vision loss. The huge and growing number of patients affected by retinal disease, the cost of current hospital-based detection methods (by eye care specialists) and the scarcity of ophthalmologists are barriers to achieving the recommended screening compliance in patients at risk of retinal diseases. Developing an automated system that uses pattern recognition, computer vision and machine learning to diagnose retinal diseases is a potential solution to this problem. Damage to the tiny blood vessels of the retina in the posterior part of the eye due to diabetes is termed DR. Diabetes is a disease that occurs when the pancreas does not secrete enough insulin or the body does not utilize it properly. This disease slowly affects the circulatory system, including that of the retina. As diabetes intensifies, the vision of a patient may deteriorate, leading to DR. The retinal landmarks such as the OD and blood vessels, together with white and red lesions, are segmented to develop automated screening systems for DR. DME is an advanced symptom of DR that can lead to irreversible vision loss. DME is a general term defined as retinal thickening or exudates present within 2 disc diameters of the fovea centre; it can be either focal or diffuse in distribution. In this paper, we review the algorithms used in the diagnosis of DR and DME.
|
42
|
Diabetic retinopathy detection through deep learning techniques: A review. INFORMATICS IN MEDICINE UNLOCKED 2020. [DOI: 10.1016/j.imu.2020.100377] [Citation(s) in RCA: 79] [Impact Index Per Article: 19.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
|
43
|
Bhardwaj C, Jain S, Sood M. Diabetic Retinopathy Lesion Discriminative Diagnostic System for Retinal Fundus Images. ADVANCED BIOMEDICAL ENGINEERING 2020. [DOI: 10.14326/abe.9.71] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022] Open
Affiliation(s)
- Charu Bhardwaj
- Department of Electronics and Communication Engineering, JUIT Waknaghat
- Shruti Jain
- Department of Electronics and Communication Engineering, JUIT Waknaghat
|
44
|
Multiloss Function Based Deep Convolutional Neural Network for Segmentation of Retinal Vasculature into Arterioles and Venules. BIOMED RESEARCH INTERNATIONAL 2019; 2019:4747230. [PMID: 31111055 PMCID: PMC6487175 DOI: 10.1155/2019/4747230] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/31/2018] [Revised: 02/20/2019] [Accepted: 03/20/2019] [Indexed: 02/02/2023]
Abstract
The classification of retinal vasculature into arterioles and venules (AV) is considered the first step in developing an automated system for analysing the association of vasculature biomarkers with disease prognosis. Most existing AV classification methods depend on accurate segmentation of retinal blood vessels. Moreover, the unavailability of large-scale annotated data is a major hindrance to applying deep learning techniques to AV classification. This paper presents an encoder-decoder based fully convolutional neural network for classification of retinal vasculature into arterioles and venules, without requiring the preliminary step of vessel segmentation. An optimized multiloss function is used to learn the pixel-wise and segment-wise retinal vessel labels. The proposed method is trained and evaluated on DRIVE, AVRDB, and a newly created AV classification dataset, attaining 96%, 98%, and 97% accuracy, respectively. The new AV classification dataset comprises 700 annotated retinal images, offering researchers a benchmark against which to compare their AV classification results.
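The multiloss idea described above, combining a pixel-wise and a segment-wise objective, can be sketched as a weighted sum of two cross-entropy terms. This pure-Python toy uses illustrative weights and helper names (not the paper's optimized values), with probabilities rather than raw network logits for clarity:

```python
import math


def cross_entropy(p, label):
    """Negative log-likelihood of the true class, clipped for stability."""
    return -math.log(max(p[label], 1e-12))


def multiloss(pixel_probs, pixel_labels, segment_probs, segment_labels,
              w_pixel=1.0, w_segment=0.5):
    """Weighted sum of a pixel-wise and a segment-wise cross-entropy.

    pixel_probs / segment_probs: per-sample class-probability vectors.
    The weights w_pixel and w_segment are illustrative assumptions; the
    paper learns/optimizes its combination rather than fixing it.
    """
    pixel_term = sum(cross_entropy(p, y)
                     for p, y in zip(pixel_probs, pixel_labels)) / len(pixel_labels)
    segment_term = sum(cross_entropy(p, y)
                       for p, y in zip(segment_probs, segment_labels)) / len(segment_labels)
    return w_pixel * pixel_term + w_segment * segment_term
```

Perfect predictions drive both terms to zero; the segment term acts as a regularizer that penalizes label changes along a vessel segment even when individual pixels look ambiguous.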
|
45
|
Xu J, Xue K, Zhang K. Current status and future trends of clinical diagnoses via image-based deep learning. Am J Cancer Res 2019; 9:7556-7565. [PMID: 31695786 PMCID: PMC6831476 DOI: 10.7150/thno.38065] [Citation(s) in RCA: 44] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2019] [Accepted: 07/28/2019] [Indexed: 12/26/2022] Open
Abstract
With the recent developments in deep learning technologies, artificial intelligence (AI) has gradually been transformed from cutting-edge technology into practical applications. AI plays an important role in disease diagnosis and treatment, health management, drug research and development, and precision medicine. Interdisciplinary collaborations will be crucial to develop new AI algorithms for medical applications. In this paper, we review the basic workflow for building an AI model, identify publicly available databases of ocular fundus images, and summarize over 60 papers contributing to the field of AI development.
|
46
|
Sarhan A, Rokne J, Alhajj R. Glaucoma detection using image processing techniques: A literature review. Comput Med Imaging Graph 2019; 78:101657. [PMID: 31675645 DOI: 10.1016/j.compmedimag.2019.101657] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2019] [Revised: 09/02/2019] [Accepted: 09/09/2019] [Indexed: 11/26/2022]
Abstract
The term glaucoma refers to a group of heterogeneous diseases that cause the degeneration of retinal ganglion cells (RGCs). The degeneration of RGCs leads to two main issues: (i) structural changes to the optic nerve head as well as the nerve fiber layer, and (ii) simultaneous functional failure of the visual field. These two effects of glaucoma may lead to peripheral vision loss and, if the condition is left to progress, may eventually lead to blindness. No cure for glaucoma exists apart from early detection and treatment by optometrists and ophthalmologists. The degeneration of RGCs is normally detected from retinal images assessed by an expert. These retinal images also provide other vital information about the health of an eye, so it is essential to develop automated techniques for extracting this information. The rapid development of digital imaging and computer vision techniques has increased the potential for analysing eye health from images. This paper surveys current approaches to detecting glaucoma from 2D and 3D images; both the limitations and possible future directions are highlighted. This study also describes the datasets used for retinal analysis along with existing evaluation algorithms.
Affiliation(s)
- Abdullah Sarhan
- Department of Computer Science, University of Calgary, Calgary, AB, Canada.
- Jon Rokne
- Department of Computer Science, University of Calgary, Calgary, AB, Canada
- Reda Alhajj
- Department of Computer Science, University of Calgary, Calgary, AB, Canada; Department of Computer Engineering, Istanbul Medipol University, Istanbul, Turkey
|
47
|
Porwal P, Pachade S, Kokare M, Deshmukh G, Son J, Bae W, Liu L, Wang J, Liu X, Gao L, Wu T, Xiao J, Wang F, Yin B, Wang Y, Danala G, He L, Choi YH, Lee YC, Jung SH, Li Z, Sui X, Wu J, Li X, Zhou T, Toth J, Baran A, Kori A, Chennamsetty SS, Safwan M, Alex V, Lyu X, Cheng L, Chu Q, Li P, Ji X, Zhang S, Shen Y, Dai L, Saha O, Sathish R, Melo T, Araújo T, Harangi B, Sheng B, Fang R, Sheet D, Hajdu A, Zheng Y, Mendonça AM, Zhang S, Campilho A, Zheng B, Shen D, Giancardo L, Quellec G, Mériaudeau F. IDRiD: Diabetic Retinopathy - Segmentation and Grading Challenge. Med Image Anal 2019; 59:101561. [PMID: 31671320 DOI: 10.1016/j.media.2019.101561] [Citation(s) in RCA: 69] [Impact Index Per Article: 13.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2019] [Revised: 09/09/2019] [Accepted: 09/16/2019] [Indexed: 02/07/2023]
Abstract
Diabetic Retinopathy (DR) is the most common cause of avoidable vision loss, predominantly affecting the working-age population across the globe. Screening for DR, coupled with timely consultation and treatment, is a globally trusted policy to avoid vision loss. However, implementation of DR screening programs is challenging due to the scarcity of medical professionals able to screen a growing global diabetic population at risk for DR. Computer-aided disease diagnosis in retinal image analysis could provide a sustainable approach for such a large-scale screening effort. Recent scientific advances in computing capacity and machine learning approaches provide an avenue for biomedical scientists to reach this goal. Aiming to advance the state of the art in automatic DR diagnosis, a grand challenge on "Diabetic Retinopathy - Segmentation and Grading" was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI 2018). In this paper, we report the set-up and results of this challenge, which is primarily based on the Indian Diabetic Retinopathy Image Dataset (IDRiD). There were three principal sub-challenges: lesion segmentation, disease severity grading, and localization and segmentation of retinal landmarks. The multiple tasks in this challenge allow testing the generalizability of algorithms, which distinguishes it from existing challenges. It received a positive response from the scientific community, with 148 submissions from 495 registrations effectively entered. This paper outlines the challenge, its organization, the dataset used, the evaluation methods, and the results of the top-performing solutions. The top-performing approaches utilized a blend of clinical information, data augmentation, and an ensemble of models. These findings have the potential to enable new developments in retinal image analysis and image-based DR screening in particular.
Affiliation(s)
- Prasanna Porwal
- Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India; School of Biomedical Informatics, University of Texas Health Science Center at Houston, USA.
- Samiksha Pachade
- Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India; School of Biomedical Informatics, University of Texas Health Science Center at Houston, USA
- Manesh Kokare
- Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India
- Lihong Liu
- Ping An Technology (Shenzhen) Co.,Ltd, China
- Xinhui Liu
- Ping An Technology (Shenzhen) Co.,Ltd, China
- TianBo Wu
- Ping An Technology (Shenzhen) Co.,Ltd, China
- Jing Xiao
- Ping An Technology (Shenzhen) Co.,Ltd, China
- Yunzhi Wang
- School of Electrical and Computer Engineering, University of Oklahoma, USA
- Gopichandh Danala
- School of Electrical and Computer Engineering, University of Oklahoma, USA
- Linsheng He
- School of Electrical and Computer Engineering, University of Oklahoma, USA
- Yoon Ho Choi
- Samsung Advanced Institute for Health Sciences & Technology (SAIHST), Sungkyunkwan University, Seoul, Republic of Korea
- Yeong Chan Lee
- Samsung Advanced Institute for Health Sciences & Technology (SAIHST), Sungkyunkwan University, Seoul, Republic of Korea
- Sang-Hyuk Jung
- Samsung Advanced Institute for Health Sciences & Technology (SAIHST), Sungkyunkwan University, Seoul, Republic of Korea
- Zhongyu Li
- Department of Computer Science, University of North Carolina at Charlotte, USA
- Xiaodan Sui
- School of Information Science and Engineering, Shandong Normal University, China
- Junyan Wu
- Cleerly Inc., New York, United States
- Ting Zhou
- University at Buffalo, New York, United States
- Janos Toth
- University of Debrecen, Faculty of Informatics 4002 Debrecen, POB 400, Hungary
- Agnes Baran
- University of Debrecen, Faculty of Informatics 4002 Debrecen, POB 400, Hungary
- Xingzheng Lyu
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China; Machine Learning for Bioimage Analysis Group, Bioinformatics Institute, A*STAR, Singapore
- Li Cheng
- Machine Learning for Bioimage Analysis Group, Bioinformatics Institute, A*STAR, Singapore; Department of Electric and Computer Engineering, University of Alberta, Canada
- Qinhao Chu
- School of Computing, National University of Singapore, Singapore
- Pengcheng Li
- School of Computing, National University of Singapore, Singapore
- Xin Ji
- Beijing Shanggong Medical Technology Co., Ltd., China
- Sanyuan Zhang
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- Yaxin Shen
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, China; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, China
- Ling Dai
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, China; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, China
- Tânia Melo
- INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal
- Teresa Araújo
- INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal; FEUP - Faculty of Engineering of the University of Porto, Porto, Portugal
- Balazs Harangi
- University of Debrecen, Faculty of Informatics 4002 Debrecen, POB 400, Hungary
- Bin Sheng
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, China; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, China
- Ruogu Fang
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, USA
- Andras Hajdu
- University of Debrecen, Faculty of Informatics 4002 Debrecen, POB 400, Hungary
- Yuanjie Zheng
- School of Information Science and Engineering, Shandong Normal University, China
- Ana Maria Mendonça
- INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal; FEUP - Faculty of Engineering of the University of Porto, Porto, Portugal
- Shaoting Zhang
- Department of Computer Science, University of North Carolina at Charlotte, USA
- Aurélio Campilho
- INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal; FEUP - Faculty of Engineering of the University of Porto, Porto, Portugal
- Bin Zheng
- School of Electrical and Computer Engineering, University of Oklahoma, USA
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
- Luca Giancardo
- School of Biomedical Informatics, University of Texas Health Science Center at Houston, USA
- Fabrice Mériaudeau
- Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, Malaysia; ImViA/IFTIM, Université de Bourgogne, Dijon, France
|
48
|
Yan Q, Zhao Y, Zheng Y, Liu Y, Zhou K, Frangi A, Liu J. Automated retinal lesion detection via image saliency analysis. Med Phys 2019; 46:4531-4544. [PMID: 31381173 DOI: 10.1002/mp.13746] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2019] [Revised: 07/11/2019] [Accepted: 07/22/2019] [Indexed: 01/02/2023] Open
Abstract
BACKGROUND AND OBJECTIVE The detection of abnormalities such as lesions or leakage from retinal images is an important health informatics task for automated early diagnosis of diabetic and malarial retinopathy or other eye diseases, in order to prevent blindness and common systemic conditions. In this work, we propose a novel retinal lesion detection method by adapting the concepts of saliency. METHODS Retinal images are first segmented into superpixels; two new saliency feature representations, uniqueness and compactness, are then derived to represent the superpixels. The pixel-level saliency is then estimated from these superpixel saliency values via a bilateral filter. The extracted saliency features form a matrix for low-rank analysis to achieve saliency detection. The precise contour of a lesion is finally extracted from the generated saliency map after removing confounding structures such as blood vessels, the optic disk, and the fovea. The main novelty of this method is that it is an effective tool for detecting different abnormalities at the pixel level from different modalities of retinal images, without the need to tune parameters. RESULTS To evaluate its effectiveness, we applied our method to seven public datasets of diabetic and malarial retinopathy with four different types of lesions: exudate, hemorrhage, microaneurysms, and leakage. The evaluation was undertaken at the pixel, lesion, or image level according to ground truth availability in these datasets. CONCLUSIONS The experimental results show that the proposed method outperforms existing state-of-the-art ones in applicability, effectiveness, and accuracy.
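The "uniqueness" saliency feature can be illustrated with a small sketch: a superpixel scores high when its colour differs from spatially nearby superpixels. The Gaussian spatial weighting, the sigma value, and the use of a single mean intensity per superpixel are simplifying assumptions here, not the paper's exact formulation:

```python
import math


def uniqueness(colors, positions, sigma=0.25):
    """Toy 'uniqueness' saliency score per superpixel.

    colors: mean intensity of each superpixel; positions: (x, y) centres
    normalised to [0, 1]. Colour differences to other superpixels are
    accumulated with a Gaussian weight that favours nearby neighbours,
    so a locally rare bright spot (a candidate lesion) scores high.
    """
    scores = []
    for i, (ci, pi) in enumerate(zip(colors, positions)):
        num, den = 0.0, 0.0
        for j, (cj, pj) in enumerate(zip(colors, positions)):
            if i == j:
                continue
            d2 = (pi[0] - pj[0]) ** 2 + (pi[1] - pj[1]) ** 2
            w = math.exp(-d2 / (2 * sigma ** 2))  # nearby superpixels weigh more
            num += w * (ci - cj) ** 2
            den += w
        scores.append(num / den if den else 0.0)
    return scores
```

In the full pipeline these per-superpixel scores, together with a compactness feature, would be propagated to pixel level and stacked into the matrix used for low-rank analysis.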
Affiliation(s)
- Qifeng Yan
- University of Chinese Academy of Sciences, Beijing, 100049, China; Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Cixi, 315399, China
- Yitian Zhao
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Cixi, 315399, China
- Yalin Zheng
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Cixi, 315399, China; Department of Eye and Vision Science, University of Liverpool, Liverpool, L7 8TX, UK
- Yonghuai Liu
- Department of Computer Science, Edge Hill University, Ormskirk, L39 4QP, UK
- Kang Zhou
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Cixi, 315399, China; School of Information Science and Technology, ShanghaiTech University, Shanghai, 201210, China
- Alejandro Frangi
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Cixi, 315399, China; School of Computing, University of Leeds, Leeds, S2 9JT, UK
- Jiang Liu
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Cixi, 315399, China; Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, 518055, China
|
49
|
Deep Ensemble Learning Based Objective Grading of Macular Edema by Extracting Clinically Significant Findings from Fused Retinal Imaging Modalities. SENSORS 2019; 19:s19132970. [PMID: 31284442 PMCID: PMC6651513 DOI: 10.3390/s19132970] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/26/2019] [Revised: 06/21/2019] [Accepted: 06/26/2019] [Indexed: 12/22/2022]
Abstract
Macular edema (ME) is a retinal condition in which the central vision of a patient is affected. ME leads to accumulation of fluid in the surrounding macular region, resulting in a swollen macula. Optical coherence tomography (OCT) and fundus photography are the two widely used retinal examination techniques that can effectively detect ME. Many researchers have utilized retinal fundus and OCT imaging for detecting ME. However, to the best of our knowledge, no work in the literature fuses the findings from both retinal imaging modalities for a more effective and reliable diagnosis of ME. In this paper, we propose an automated framework for the classification of ME and healthy eyes using retinal fundus and OCT scans. The proposed framework is based on deep ensemble learning, where the input fundus and OCT scans are recognized through a deep convolutional neural network (CNN) and processed accordingly. The processed scans are further passed to the second layer of the deep CNN model, which extracts the required feature descriptors from both images. The extracted descriptors are then concatenated and passed to a supervised hybrid classifier built from an ensemble of artificial neural networks, support vector machines and naïve Bayes. The proposed framework was trained on 73,791 retinal scans and validated on 5,100 scans from the publicly available Zhang and Rabbani datasets. It achieved an accuracy of 94.33% for distinguishing ME from healthy subjects, and mean Dice coefficients of 0.9019 ± 0.04 for extracting retinal fluids, 0.7069 ± 0.11 for hard exudates and 0.8203 ± 0.03 for retinal blood vessels against the clinical markings.
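The final ensemble step can be sketched as soft voting over the base classifiers' class probabilities. The function below is a generic illustration (the uniform default weights and the idea of simple averaging are assumptions; the paper's hybrid classifier combines ANN, SVM and naïve Bayes members in its own way):

```python
def ensemble_predict(prob_vectors, weights=None):
    """Weighted soft-voting over base classifiers.

    prob_vectors: one per-class probability list per base model (here a
    stand-in for the ANN / SVM / naive-Bayes members of the ensemble).
    Returns (winning class index, averaged class probabilities).
    """
    n = len(prob_vectors)
    weights = weights or [1.0 / n] * n  # uniform weights by default
    n_classes = len(prob_vectors[0])
    avg = [sum(w * p[k] for w, p in zip(weights, prob_vectors))
           for k in range(n_classes)]
    return max(range(n_classes), key=lambda k: avg[k]), avg
```

With three base models voting [0.9, 0.1], [0.6, 0.4] and [0.2, 0.8] for the classes (healthy, ME), the averaged vector favours class 0 even though one member disagrees, which is the robustness the ensemble is meant to buy.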
|
50
|
Wang R, Chen B, Meng D, Wang L. Weakly Supervised Lesion Detection From Fundus Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:1501-1512. [PMID: 30530359 DOI: 10.1109/tmi.2018.2885376] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Early diagnosis and continuous monitoring of patients suffering from eye diseases have been major concerns for computer-aided detection techniques. Detecting one or several specific types of retinal lesions has made significant breakthroughs in computer-aided screening over the past few decades. However, due to the variety of retinal lesions and complex normal anatomical structures, automatic detection of lesions of unknown and diverse types from a retina remains a challenging task. In this paper, a weakly supervised method is proposed for this task, requiring only a series of normal and abnormal retinal images without the need to specifically annotate lesion locations and types. Specifically, a fundus image is understood as a superposition of background, blood vessels, and background noise (with lesions included for abnormal images). The background is formulated as a low-rank structure after a series of simple preprocessing steps, including spatial alignment, color normalization, and blood vessel removal. Background noise is regarded as a stochastic variable and modeled by a Gaussian for normal images and a mixture of Gaussians for abnormal images, respectively. The proposed method encodes both the background knowledge of fundus images and the background noise into one unified model, and jointly optimizes the model using normal and abnormal images, which fully depicts the low-rank subspace of the background and distinguishes lesions from the background noise in abnormal fundus images. Experimental results demonstrate that the proposed method achieves high accuracy and outperforms previous related methods.
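The intuition of modelling normal fundus appearance and flagging deviations can be illustrated with a deliberately simplified stand-in: instead of a low-rank background plus Gaussian-mixture noise, fit a per-pixel mean and standard deviation over aligned normal images and flag pixels that deviate by more than k standard deviations. Images are flattened to 1-D lists here for brevity; all thresholds and helper names are illustrative:

```python
def fit_background(normal_images):
    """Per-pixel mean and std over a stack of aligned normal images.

    A crude stand-in for the paper's low-rank background plus Gaussian
    noise model: the 'background' is the pixelwise mean, and the noise
    scale is the pixelwise standard deviation (clamped to avoid zeros).
    """
    n = len(normal_images)
    size = len(normal_images[0])
    mean = [sum(img[i] for img in normal_images) / n for i in range(size)]
    var = [sum((img[i] - mean[i]) ** 2 for img in normal_images) / n
           for i in range(size)]
    std = [max(v ** 0.5, 1e-6) for v in var]
    return mean, std


def lesion_mask(image, mean, std, k=3.0):
    """Flag pixels more than k standard deviations from the background."""
    return [abs(x - m) > k * s for x, m, s in zip(image, mean, std)]
```

A pixel that sits well outside the normal-population variation is marked as a lesion candidate; the paper's formulation achieves the same separation far more robustly by optimizing the low-rank and noise terms jointly.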
|