1. Tripathi A, Kumar P, Tulsani A, Chakrapani PK, Maiya G, Bhandary SV, Mayya V, Pathan S, Achar R, Acharya UR. Fuzzy Logic-Based System for Identifying the Severity of Diabetic Macular Edema from OCT B-Scan Images Using DRIL, HRF, and Cystoids. Diagnostics (Basel) 2023; 13:2550. [PMID: 37568913] [PMCID: PMC10416860] [DOI: 10.3390/diagnostics13152550]
Abstract
Diabetic Macular Edema (DME) is a severe ocular complication commonly found in patients with diabetes. The condition can precipitate a significant drop in visual acuity and, in extreme cases, may result in irreversible vision loss. Optical Coherence Tomography (OCT), a technique that yields high-resolution retinal images, is often employed by clinicians to assess the extent of DME in patients. However, manual interpretation of OCT B-scan images for DME identification and severity grading can be error-prone, with false negatives potentially carrying serious repercussions. In this paper, we investigate an Artificial Intelligence (AI)-driven system that offers an end-to-end automated model designed to accurately determine DME severity from OCT B-scan images. The model operates by extracting specific biomarkers, namely Disorganization of Retinal Inner Layers (DRIL), Hyper Reflective Foci (HRF), and cystoids, from the OCT image, which are then used to ascertain DME severity. The rules guiding the fuzzy logic engine are derived from contemporary research on DME and its association with the various biomarkers evident in the OCT image. The proposed model demonstrates high efficacy, identifying images with DRIL with 93.3% accuracy and segmenting HRF and cystoids from OCT images with dice similarity coefficients of 91.30% and 95.07%, respectively. This study presents a comprehensive system capable of accurately grading DME severity using OCT B-scan images, serving as a potentially invaluable tool in the clinical assessment and treatment of DME.
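[Editor's note] The dice similarity coefficient reported above for the HRF and cystoid segmentations measures the overlap between a predicted mask and a ground-truth mask. A minimal sketch in plain Python, with binary masks flattened to lists (the toy masks and function name are illustrative, not taken from the paper):

```python
def dice_coefficient(pred, truth):
    """Dice = 2*|A ∩ B| / (|A| + |B|) for equal-length binary masks."""
    assert len(pred) == len(truth)
    intersection = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Convention: two empty masks agree perfectly.
    return 1.0 if total == 0 else 2.0 * intersection / total

# Toy 1-D masks: 3 overlapping positives out of 4 predicted and 4 true.
pred  = [1, 1, 1, 1, 0, 0]
truth = [0, 1, 1, 1, 1, 0]
print(dice_coefficient(pred, truth))  # 0.75
```

A score of 1.0 means perfect overlap; the 91.30% and 95.07% figures above correspond to near-complete agreement with the expert-drawn masks.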
Affiliation(s)
- Aditya Tripathi: Department of Information & Communication Technology, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Preetham Kumar: Department of Information & Communication Technology, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Akshat Tulsani: Department of Information & Communication Technology, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Pavithra Kodiyalbail Chakrapani: Department of Information & Communication Technology, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Geetha Maiya: Department of Computer Science and Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Sulatha V. Bhandary: Department of Ophthalmology, Kasturba Medical College, Manipal Academy of Higher Education, Manipal 576104, India
- Veena Mayya: Department of Information & Communication Technology, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Sameena Pathan: Department of Information & Communication Technology, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Raghavendra Achar: Department of Information & Communication Technology, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- U. Rajendra Acharya: School of Mathematics, Physics and Computing, University of Southern Queensland, Springfield Central, QLD 4300, Australia
2. An empirical study of preprocessing techniques with convolutional neural networks for accurate detection of chronic ocular diseases using fundus images. Appl Intell 2023; 53:1548-1566. [PMID: 35528131] [PMCID: PMC9059700] [DOI: 10.1007/s10489-022-03490-8]
Abstract
Chronic Ocular Diseases (COD) such as myopia, diabetic retinopathy, age-related macular degeneration, glaucoma, and cataract can affect the eye and may even lead to severe vision impairment or blindness. According to a recent World Health Organization (WHO) report on vision, at least 2.2 billion individuals worldwide suffer from vision impairment. Often, overt signs indicative of COD do not manifest until the disease has progressed to an advanced stage. However, if COD is detected early, vision impairment can be avoided through early intervention and cost-effective treatment. Ophthalmologists are trained to detect COD by examining certain minute changes in the retina, such as microaneurysms, macular edema, hemorrhages, and alterations in the blood vessels. The range of eye conditions is diverse, and each condition requires a unique, patient-specific treatment. Convolutional neural networks (CNNs) have demonstrated significant potential in multi-disciplinary fields, including the detection of a variety of eye diseases. In this study, we combined several preprocessing approaches with CNNs to accurately detect COD in eye fundus images. To the best of our knowledge, this is the first work to provide a qualitative analysis of preprocessing approaches for COD classification using CNN models. Experimental results demonstrate that CNNs trained on region-of-interest-segmented images outperform models trained on the original input images by a substantial margin. Additionally, an ensemble of three preprocessing techniques outperformed other state-of-the-art approaches by 30% and 3% in terms of Kappa and F1 scores, respectively. The developed prototype has been extensively tested and can be evaluated on more comprehensive COD datasets for deployment in clinical settings.
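[Editor's note] The Kappa score used above is Cohen's kappa, which corrects raw classification agreement for the agreement expected by chance. A minimal sketch in plain Python (the toy label lists are hypothetical, not the study's data):

```python
def cohens_kappa(y_true, y_pred):
    """kappa = (p_o - p_e) / (1 - p_e): chance-corrected agreement."""
    n = len(y_true)
    labels = set(y_true) | set(y_pred)
    # Observed agreement rate.
    p_o = sum(t == p for t, p in zip(y_true, y_pred)) / n
    # Expected agreement from the marginal label frequencies.
    p_e = sum((y_true.count(c) / n) * (y_pred.count(c) / n) for c in labels)
    return (p_o - p_e) / (1 - p_e)

y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 0, 1, 0, 1, 1]
print(cohens_kappa(y_true, y_pred))  # 0.5
```

Kappa is preferred over plain accuracy for imbalanced disease datasets because a classifier that always predicts the majority class scores near zero rather than deceptively high.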
3. Coan LJ, Williams BM, Krishna Adithya V, Upadhyaya S, Alkafri A, Czanner S, Venkatesh R, Willoughby CE, Kavitha S, Czanner G. Automatic detection of glaucoma via fundus imaging and artificial intelligence: A review. Surv Ophthalmol 2023; 68:17-41. [PMID: 35985360] [DOI: 10.1016/j.survophthal.2022.08.005]
Abstract
Glaucoma is a leading cause of irreversible vision impairment globally, and cases are continuously rising worldwide. Early detection is crucial, allowing timely intervention that can prevent further visual field loss. To detect glaucoma, an examination of the optic nerve head via fundus imaging can be performed, at the center of which is the assessment of the optic cup and disc boundaries. Fundus imaging is noninvasive and low-cost; however, image examination relies on subjective, time-consuming, and costly expert assessments. A timely question to ask is: "Can artificial intelligence mimic glaucoma assessments made by experts?" Specifically, can artificial intelligence automatically find the boundaries of the optic cup and disc (providing a so-called segmented fundus image) and then use the segmented image to identify glaucoma with high accuracy? We conducted a comprehensive review of artificial intelligence-enabled glaucoma detection frameworks that produce and use segmented fundus images and summarized the advantages and disadvantages of such frameworks. We identified 36 relevant papers from 2011 to 2021 and 2 main approaches: 1) logical rule-based frameworks, based on a set of rules; and 2) machine learning/statistical modeling-based frameworks. We critically evaluated the state of the art of the 2 approaches, identified gaps in the literature, and pointed to areas for future research.
Affiliation(s)
- Lauren J Coan: School of Computer Science and Mathematics, Liverpool John Moores University, UK
- Bryan M Williams: School of Computing and Communications, Lancaster University, UK
- Swati Upadhyaya: Department of Glaucoma, Aravind Eye Hospital, Pondicherry, India
- Ala Alkafri: School of Computing, Engineering & Digital Technologies, Teesside University, UK
- Silvester Czanner: School of Computer Science and Mathematics, Liverpool John Moores University, UK; Faculty of Informatics and Information Technologies, Slovak University of Technology, Slovakia
- Rengaraj Venkatesh: Department of Glaucoma and Chief Medical Officer, Aravind Eye Hospital, Pondicherry, India
- Gabriela Czanner: School of Computer Science and Mathematics, Liverpool John Moores University, UK; Faculty of Informatics and Information Technologies, Slovak University of Technology, Slovakia
4. Haider A, Arsalan M, Park C, Sultan H, Park KR. Exploring deep feature-blending capabilities to assist glaucoma screening. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.109918]
5. Rojas-Hernández R, Díaz-de-León-Santiago JL, Barceló-Alonso G, Bautista-López J, Trujillo-Mora V, Salgado-Ramírez JC. Lossless Medical Image Compression by Using Difference Transform. Entropy 2022; 24:951. [PMID: 35885174] [PMCID: PMC9323066] [DOI: 10.3390/e24070951]
Abstract
This paper introduces a new method of compressing digital images by using the Difference Transform, applied here to medical imaging. The Difference Transform algorithm decorrelates the image data and in this way improves the encoding process, achieving a file smaller than the original. The proposed method proves competitive with, and in many cases better than, formats commonly used for medical images such as TIFF or PNG. In addition, the Difference Transform can replace other transforms such as the Cosine or Wavelet transforms.
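[Editor's note] The core idea, replacing each pixel by its difference from a neighbor so the entropy coder sees a stream of small, highly repetitive values, is invertible by a prefix sum, which is what makes the scheme lossless. A minimal 1-D sketch (an illustration of differencing in general, not the paper's exact algorithm):

```python
from itertools import accumulate

def difference_transform(pixels):
    """Keep the first pixel; replace each later pixel by its delta."""
    return [pixels[0]] + [b - a for a, b in zip(pixels, pixels[1:])]

def inverse_transform(diffs):
    """Prefix-sum the deltas to recover the original pixels exactly."""
    return list(accumulate(diffs))

row = [120, 121, 121, 124, 123, 123, 130]   # smooth scan line
diffs = difference_transform(row)
print(diffs)                                 # [120, 1, 0, 3, -1, 0, 7]
assert inverse_transform(diffs) == row       # lossless round trip
```

Smooth medical images produce long runs of near-zero deltas, which a downstream entropy coder compresses far better than the raw, correlated pixel values.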
Affiliation(s)
- Rafael Rojas-Hernández: Ingeniería en Computación, Universidad Autónoma del Estado de México, Zumpango 55600, Mexico
- Juan Luis Díaz-de-León-Santiago: Centro de Investigación en Computación, Instituto Politécnico Nacional (CIC-IPN), Mexico City 07700, Mexico (corresponding author)
- Jorge Bautista-López: Ingeniería en Computación, Universidad Autónoma del Estado de México, Zumpango 55600, Mexico
- Valentin Trujillo-Mora: Ingeniería en Computación, Universidad Autónoma del Estado de México, Zumpango 55600, Mexico
- Julio César Salgado-Ramírez: Ingeniería Biomédica, Universidad Politécnica de Pachuca (UPP), Zempoala 43830, Mexico (corresponding author)
6. Wu JH, Nishida T, Weinreb RN, Lin JW. Performances of Machine Learning in Detecting Glaucoma Using Fundus and Retinal Optical Coherence Tomography Images: A Meta-Analysis. Am J Ophthalmol 2022; 237:1-12. [PMID: 34942113] [DOI: 10.1016/j.ajo.2021.12.008]
Abstract
PURPOSE To evaluate the performance of machine learning (ML) in detecting glaucoma using fundus and retinal optical coherence tomography (OCT) images. DESIGN Meta-analysis. METHODS PubMed and EMBASE were searched on August 11, 2021. A bivariate random-effects model was used to pool ML's diagnostic sensitivity, specificity, and area under the curve (AUC). Subgroup analyses were performed based on ML classifier categories and dataset types. RESULTS One hundred and five studies (3.3%) were retrieved. Seventy-three (69.5%), 30 (28.6%), and 2 (1.9%) studies tested ML using fundus, OCT, and both image types, respectively. Total testing data numbers were 197,174 for fundus and 16,039 for OCT. Overall, ML showed excellent performance for both fundus (pooled sensitivity = 0.92 [95% CI, 0.91-0.93]; specificity = 0.93 [95% CI, 0.91-0.94]; and AUC = 0.97 [95% CI, 0.95-0.98]) and OCT (pooled sensitivity = 0.90 [95% CI, 0.86-0.92]; specificity = 0.91 [95% CI, 0.89-0.92]; and AUC = 0.96 [95% CI, 0.93-0.97]). For fundus, ML performed similarly on all data and on external data, whereas the external test result for OCT was less robust (AUC = 0.87). When comparing classifier categories, although support vector machines showed the highest performance (pooled sensitivity, specificity, and AUC ranges, 0.92-0.96, 0.95-0.97, and 0.96-0.99, respectively), results from neural networks and other classifiers were still good (pooled sensitivity, specificity, and AUC ranges, 0.88-0.93, 0.90-0.93, and 0.95-0.97, respectively). When analyzed by dataset type, ML demonstrated consistent performance on clinical datasets (fundus AUC = 0.98 [95% CI, 0.97-0.99] and OCT AUC = 0.95 [95% CI, 0.93-0.97]). CONCLUSIONS The performance of ML in detecting glaucoma compares favorably to that of experts and is promising for clinical application. Future prospective studies are needed to better evaluate its real-world utility.
7. Sonti K, Dhuli DR. Shape and texture based identification of glaucoma from retinal fundus images. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103473]
8. Singh LK, Khanna M, Pooja. A novel multimodality based dual fusion integrated approach for efficient and early prediction of glaucoma. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103468]
9. Sharifi M, Khatibi T, Emamian MH, Sadat S, Hashemi H, Fotouhi A. Development of glaucoma predictive model and risk factors assessment based on supervised models. BioData Min 2021; 14:48. [PMID: 34819128] [PMCID: PMC8611977] [DOI: 10.1186/s13040-021-00281-8]
Abstract
OBJECTIVES To develop and propose a machine learning model for predicting glaucoma and identifying its risk factors. METHODS The data analysis pipeline for this study was designed following the Cross-Industry Standard Process for Data Mining (CRISP-DM) methodology. Its main steps are data sampling, preprocessing, classification, and evaluation and validation. The training dataset was produced by balanced sampling based on over-sampling and under-sampling methods. Preprocessing comprised missing-value imputation and normalization. For the classification step, several machine learning models were designed for predicting glaucoma, including Decision Trees (DTs), K-Nearest Neighbors (K-NN), Support Vector Machines (SVM), Random Forests (RFs), Extra Trees (ETs), and Bagging Ensemble methods; in addition, a novel stacking ensemble model built from the superior classifiers is proposed. RESULTS The data came from the Shahroud Eye Cohort Study, comprising demographic and ophthalmology data for 5190 participants aged 40-64 living in Shahroud, northeast Iran. The dataset covered 67 demographic, ophthalmologic, optometric, perimetry, and biometry features for 4561 people: 4474 non-glaucoma participants and 87 glaucoma patients. Experimental results show that DTs and RFs trained on the under-sampled training dataset outperform the compared single classifiers and bagging ensemble methods, with average accuracies of 87.61 and 88.87, sensitivities of 73.80 and 72.35, specificities of 87.88 and 89.10, and areas under the curve (AUC) of 91.04 and 94.53, respectively. The proposed stacking ensemble has an average accuracy of 83.56, a sensitivity of 82.21, a specificity of 81.32, and an AUC of 88.54. CONCLUSIONS A machine learning model is proposed and developed to predict glaucoma among persons aged 40-64. The top predictors for discriminating glaucoma patients from non-glaucoma persons include the number of visual field defects on perimetry, vertical cup-to-disc ratio, white-to-white diameter, systolic blood pressure, pupil barycenter on the Y coordinate, age, and axial length.
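[Editor's note] The accuracy, sensitivity, and specificity figures quoted above derive from confusion-matrix counts in the standard way. A minimal sketch; the toy counts below merely mimic the 87-versus-4474 class imbalance mentioned in the abstract and are illustrative, not the study's results:

```python
def confusion_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion counts."""
    sensitivity = tp / (tp + fn)              # true-positive rate (recall)
    specificity = tn / (tn + fp)              # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical screen over 87 glaucoma cases and 4474 controls.
sens, spec, acc = confusion_metrics(tp=64, fp=488, tn=3986, fn=23)
print(round(sens, 3), round(spec, 3), round(acc, 3))  # 0.736 0.891 0.888
```

Note how the 98% majority class lets accuracy stay high even when sensitivity lags, which is why the study reports sensitivity and AUC alongside accuracy.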
Affiliation(s)
- Mahyar Sharifi: School of Industrial and Systems Engineering, Tarbiat Modares University, Tehran, Iran
- Toktam Khatibi: School of Industrial and Systems Engineering, Tarbiat Modares University, Tehran, Iran
- Mohammad Hassan Emamian: Ophthalmic Epidemiology Research Center, Shahroud University of Medical Sciences, Shahroud, Iran
- Somayeh Sadat: Centre for Analytics and Artificial Intelligence Engineering, University of Toronto, Toronto, Canada
- Hassan Hashemi: Noor Ophthalmology Research Center, Noor Eye Hospital, Tehran, Iran
- Akbar Fotouhi: Department of Epidemiology and Biostatistics, School of Public Health, Tehran University of Medical Sciences, Tehran, Iran
10. Kavya R, Christopher J, Panda S, Lazarus YB. Machine Learning and XAI approaches for Allergy Diagnosis. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102681]
11. Classification of Diseases Using Machine Learning Algorithms: A Comparative Study. Mathematics 2021. [DOI: 10.3390/math9151817]
Abstract
Machine learning in the medical area has become a very important requirement. Healthcare professionals need useful tools to diagnose medical illnesses, and classifiers are important to provide such tools. However, questions arise: which classifier should be used? What metrics are appropriate to measure the performance of the classifier? How can a good distribution of the data be determined so that the classifier does not bias the medical patterns toward a particular class? And, most importantly: does a classifier perform well for a particular disease? This paper presents some answers to these questions, making use of classification algorithms widely used in machine learning research on datasets relating to medical illnesses under the supervised learning scheme. In addition to state-of-the-art algorithms in pattern classification, we introduce a novelty: the use of meta-learning to determine, a priori, which classifier would be ideal for a specific dataset. The results obtained show, numerically and statistically, that there are reliable classifiers to suggest medical diagnoses. In addition, we provide some insights into the expected performance of classifiers for such a task.
12. Luján-García JE, Villuendas-Rey Y, López-Yáñez I, Camacho-Nieto O, Yáñez-Márquez C. NanoChest-Net: A Simple Convolutional Network for Radiological Studies Classification. Diagnostics (Basel) 2021; 11:775. [PMID: 33925844] [PMCID: PMC8145173] [DOI: 10.3390/diagnostics11050775]
Abstract
The new coronavirus disease (COVID-19), pneumonia, tuberculosis, and breast cancer have one thing in common: these diseases can be diagnosed using radiological studies such as X-ray images. With radiological studies and technology, computer-aided diagnosis (CAD) is a very useful technique to analyze and detect abnormalities in the images generated by X-ray machines. Some deep-learning techniques, such as convolutional neural networks (CNNs), can help physicians obtain an effective pre-diagnosis. However, popular CNNs are enormous models and need a huge amount of data to obtain good results. In this paper, we introduce NanoChest-Net, a small but effective CNN model that can be used to classify different diseases using images from radiological studies. NanoChest-Net proves effective in classifying among diseases such as tuberculosis, pneumonia, and COVID-19. On two of the five datasets used in the experiments, NanoChest-Net obtained the best results, while on the remaining datasets our model proved to be as good as state-of-the-art baseline models such as ResNet50, Xception, and DenseNet121. In addition, NanoChest-Net can classify radiological studies on the same level as state-of-the-art algorithms with the advantage that it does not require a large number of operations.
Affiliation(s)
| | - Yenny Villuendas-Rey
- Centro de Innovación y Desarrollo Tecnológico en Cómputo, Instituto Politécnico Nacional, Mexico City 07738, Mexico
| | - Itzamá López-Yáñez
- Centro de Innovación y Desarrollo Tecnológico en Cómputo, Instituto Politécnico Nacional, Mexico City 07738, Mexico
| | - Oscar Camacho-Nieto
- Centro de Innovación y Desarrollo Tecnológico en Cómputo, Instituto Politécnico Nacional, Mexico City 07738, Mexico
| | - Cornelio Yáñez-Márquez
- Centro de Investigación en Computación, Instituto Politécnico Nacional, Mexico City 07700, Mexico
| |
Collapse
13. Automated segmentation of optic disc and optic cup for glaucoma assessment using improved UNET++ architecture. Biocybern Biomed Eng 2021. [DOI: 10.1016/j.bbe.2021.05.011]