1
Bashir I, Sajid MZ, Kalsoom R, Ali Khan N, Qureshi I, Abbas F, Abbas Q. RDS-DR: An Improved Deep Learning Model for Classifying Severity Levels of Diabetic Retinopathy. Diagnostics (Basel) 2023; 13:3116. [PMID: 37835859] [PMCID: PMC10572213] [DOI: 10.3390/diagnostics13193116]
Abstract
Diabetic retinopathy (DR) is a well-known eye disorder linked to elevated blood glucose levels. Its signs, which often appear late, include cotton wool spots, confined veins in the cranial nerve, AV nicking, and hemorrhages at the optic disc. Serious consequences of DR can include vision loss, damage to the visual nerves, and obstruction of the retinal arteries. Researchers have devised an automated method utilizing AI and deep learning models to enable the early diagnosis of this illness. This research gathered digital fundus images from renowned Pakistani eye hospitals and from known online sources to generate a new "DR-Insight" dataset. A novel methodology named the residual-dense system (RDS-DR) was then devised to assess diabetic retinopathy. To develop this model, residual and dense blocks, along with a transition layer, were integrated into a deep neural network. The RDS-DR system was trained on the collected dataset of 9860 fundus images and demonstrated an impressive accuracy of 97.5%. These findings show that the model produces beneficial outcomes and may be used by healthcare practitioners as a diagnostic tool; it is important to emphasize that the system's goal is to augment optometrists' expertise rather than replace it. In terms of accuracy, the RDS-DR technique fared better than the state-of-the-art models VGG19, VGG16, Inception-V3, and Xception, which underscores how effective the proposed method is for classifying diabetic retinopathy (DR).
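The abstract above describes combining residual blocks, dense blocks, and a transition layer in one classifier but publishes no code here. The sketch below is only an illustration of that general design in PyTorch; the stem width, growth rate, and block counts are hypothetical and are not the authors' RDS-DR configuration.

```python
# Illustrative sketch: residual block -> dense block -> transition layer -> 5-class head.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1, self.bn2 = nn.BatchNorm2d(channels), nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)                      # identity shortcut

class DenseBlock(nn.Module):
    def __init__(self, in_channels, growth, layers=4):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Sequential(nn.BatchNorm2d(in_channels + i * growth),
                          nn.ReLU(inplace=True),
                          nn.Conv2d(in_channels + i * growth, growth, 3, padding=1))
            for i in range(layers)])

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))  # dense connectivity
        return torch.cat(feats, dim=1)

class RDSNet(nn.Module):
    """Hypothetical residual-dense classifier for 5 DR severity levels."""
    def __init__(self, num_classes=5):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(3, 32, 7, stride=2, padding=3),
                                  nn.BatchNorm2d(32), nn.ReLU(inplace=True),
                                  nn.MaxPool2d(2))
        self.res = ResidualBlock(32)
        self.dense = DenseBlock(32, growth=16, layers=4)        # 32 + 4*16 = 96 channels
        self.transition = nn.Sequential(nn.Conv2d(96, 64, 1),   # compress, then downsample
                                        nn.AvgPool2d(2))
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(64, num_classes))

    def forward(self, x):
        return self.head(self.transition(self.dense(self.res(self.stem(x)))))

logits = RDSNet()(torch.randn(2, 3, 224, 224))   # -> shape (2, 5)
```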
Affiliation(s)
- Ijaz Bashir: Department of Computer Software Engineering, Military College of Signals, National University of Sciences and Technology, Islamabad 44000, Pakistan
- Muhammad Zaheer Sajid: Department of Computer Software Engineering, Military College of Signals, National University of Sciences and Technology, Islamabad 44000, Pakistan
- Rizwana Kalsoom: Faculty of Computer Science and Engineering, Ghulam Ishaq Khan Institute of Engineering Sciences and Technology, Topi 23460, Pakistan
- Nauman Ali Khan: Department of Computer Software Engineering, Military College of Signals, National University of Sciences and Technology, Islamabad 44000, Pakistan
- Imran Qureshi: College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Fakhar Abbas: Centre for Trusted Internet and Community, National University of Singapore (NUS), Singapore 119228, Singapore
- Qaisar Abbas: College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
2
Ullah Z, Usman M, Latif S, Khan A, Gwak J. SSMD-UNet: semi-supervised multi-task decoders network for diabetic retinopathy segmentation. Sci Rep 2023; 13:9087. [PMID: 37277554] [PMCID: PMC10240139] [DOI: 10.1038/s41598-023-36311-0]
Abstract
Diabetic retinopathy (DR) is a diabetes complication that can cause vision loss among patients due to damage to blood vessels in the retina. Early retinal screening can avoid the severe consequences of DR and enable timely treatment. Nowadays, researchers are trying to develop automated deep learning-based DR segmentation tools using retinal fundus images to help ophthalmologists with DR screening and early diagnosis. However, recent studies have been unable to design accurate models due to the unavailability of large training datasets with consistent and fine-grained annotations. To address this problem, we propose a semi-supervised multitask learning approach that exploits widely available unlabelled data (i.e., Kaggle-EyePACS) to improve DR segmentation performance. The proposed model consists of a novel multi-decoder architecture and involves both unsupervised and supervised learning phases. The model is trained for the unsupervised auxiliary task to effectively learn from additional unlabelled data and improve the performance of the primary task of DR segmentation. The proposed technique is rigorously evaluated on two publicly available datasets (i.e., FGADR and IDRiD), and the results show that the proposed technique not only outperforms existing state-of-the-art techniques but also exhibits improved generalisation and robustness for cross-data evaluation.
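A minimal sketch of the general semi-supervised multi-decoder idea summarized above, assuming PyTorch: one shared encoder, a reconstruction decoder trained on unlabelled images as the auxiliary task, and a segmentation decoder trained on labelled masks. The layer sizes, losses, and data are placeholders and do not reproduce the published SSMD-UNet.

```python
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True))

class SharedEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(block(3, 32), nn.MaxPool2d(2),
                                 block(32, 64), nn.MaxPool2d(2))
    def forward(self, x):
        return self.enc(x)

class Decoder(nn.Module):
    def __init__(self, out_channels):
        super().__init__()
        self.dec = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False), block(64, 32),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False), block(32, 16),
            nn.Conv2d(16, out_channels, 1))
    def forward(self, z):
        return self.dec(z)

encoder = SharedEncoder()
recon_decoder = Decoder(out_channels=3)   # unsupervised auxiliary task (image reconstruction)
seg_decoder = Decoder(out_channels=1)     # supervised primary task (lesion segmentation)

opt = torch.optim.Adam(list(encoder.parameters()) +
                       list(recon_decoder.parameters()) +
                       list(seg_decoder.parameters()), lr=1e-4)
mse, bce = nn.MSELoss(), nn.BCEWithLogitsLoss()

unlabelled = torch.rand(4, 3, 128, 128)                    # placeholder for Kaggle-EyePACS images
labelled = torch.rand(2, 3, 128, 128)                      # placeholder for FGADR / IDRiD images
masks = torch.randint(0, 2, (2, 1, 128, 128)).float()

# one combined training step: auxiliary reconstruction loss + primary segmentation loss
loss = mse(recon_decoder(encoder(unlabelled)), unlabelled) \
     + bce(seg_decoder(encoder(labelled)), masks)
opt.zero_grad(); loss.backward(); opt.step()
```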
Affiliation(s)
- Zahid Ullah: Department of Software, Korea National University of Transportation, Chungju, 27469, South Korea
- Muhammad Usman: Department of Computer Science and Engineering, Seoul National University, Seoul, 08826, South Korea
- Siddique Latif: Faculty of Health and Computing, University of Southern Queensland, Toowoomba, QLD 4300, Australia
- Asifullah Khan: Pattern Recognition Lab, DCIS, PIEAS, Nilore, Islamabad, 45650, Pakistan
- Jeonghwan Gwak: Department of Software; Department of Biomedical Engineering; Department of AI Robotics Engineering; Department of IT Energy Convergence (BK21 FOUR), Korea National University of Transportation, Chungju, 27469, South Korea
3
Huang Y, Lin L, Cheng P, Lyu J, Tam R, Tang X. Identifying the Key Components in ResNet-50 for Diabetic Retinopathy Grading from Fundus Images: A Systematic Investigation. Diagnostics (Basel) 2023; 13:1664. [PMID: 37238149] [DOI: 10.3390/diagnostics13101664]
Abstract
Although deep learning-based diabetic retinopathy (DR) classification methods typically benefit from well-designed architectures of convolutional neural networks, the training setting also has a non-negligible impact on prediction performance. The training setting includes various interdependent components, such as an objective function, a data sampling strategy, and a data augmentation approach. To identify the key components in a standard deep learning framework (ResNet-50) for DR grading, we systematically analyze the impact of several major components. Extensive experiments are conducted on the publicly available EyePACS dataset. We demonstrate that (1) the DR grading framework is sensitive to input resolution, objective function, and composition of data augmentation; (2) using mean square error as the loss function can effectively improve the performance with respect to a task-specific evaluation metric, namely the quadratically weighted Kappa; (3) utilizing eye pairs boosts the performance of DR grading; and (4) using data resampling to address the problem of imbalanced data distribution in EyePACS hurts the performance. Based on these observations and an optimal combination of the investigated components, our framework, without any specialized network design, achieves a state-of-the-art result (0.8631 for Kappa) on the EyePACS test set (a total of 42,670 fundus images) with only image-level labels. We also examine the proposed training practices on other fundus datasets and other network architectures to evaluate their generalizability. Our codes and pre-trained model are available online.
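Two of the ingredients named above, regressing the DR grade with a mean square error loss and scoring with the quadratically weighted kappa, can be illustrated compactly. The sketch below uses a tiny stand-in model and random tensors rather than the paper's ResNet-50 pipeline; only the loss/metric pattern is the point.

```python
import torch
import torch.nn as nn
from sklearn.metrics import cohen_kappa_score

model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 1))   # stand-in for ResNet-50
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

images = torch.rand(16, 3, 32, 32)
grades = torch.randint(0, 5, (16,)).float()        # DR grades 0-4 treated as continuous targets

pred = model(images).squeeze(1)
loss = criterion(pred, grades)                     # MSE instead of cross-entropy
optimizer.zero_grad(); loss.backward(); optimizer.step()

# At evaluation time, round and clip the regression output back to grades 0-4
# and compute the task-specific metric (quadratically weighted kappa).
with torch.no_grad():
    hard = model(images).squeeze(1).round().clamp(0, 4).long().numpy()
kappa = cohen_kappa_score(grades.long().numpy(), hard, weights="quadratic")
print(f"quadratic weighted kappa: {kappa:.3f}")
```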
Affiliation(s)
- Yijin Huang: Department of Electronic and Electrical Engineering, Southern University of Science and Technology, Shenzhen 518055, China; School of Biomedical Engineering, The University of British Columbia, Vancouver, BC V6T 1Z4, Canada
- Li Lin: Department of Electronic and Electrical Engineering, Southern University of Science and Technology, Shenzhen 518055, China; Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
- Pujin Cheng: Department of Electronic and Electrical Engineering, Southern University of Science and Technology, Shenzhen 518055, China
- Junyan Lyu: Department of Electronic and Electrical Engineering, Southern University of Science and Technology, Shenzhen 518055, China; Queensland Brain Institute, The University of Queensland, Brisbane, QLD 4072, Australia
- Roger Tam: School of Biomedical Engineering, The University of British Columbia, Vancouver, BC V6T 1Z4, Canada
- Xiaoying Tang: Department of Electronic and Electrical Engineering, Southern University of Science and Technology, Shenzhen 518055, China
4
Bhakar S, Sinwar D, Pradhan N, Dhaka VS, Cherrez-Ojeda I, Parveen A, Hassan MU. Computational Intelligence-Based Disease Severity Identification: A Review of Multidisciplinary Domains. Diagnostics (Basel) 2023; 13:1212. [PMID: 37046431] [PMCID: PMC10093052] [DOI: 10.3390/diagnostics13071212]
Abstract
Disease severity identification using computational intelligence-based approaches is gaining popularity nowadays. Artificial intelligence and deep-learning-assisted approaches are proving to be significant in the rapid and accurate diagnosis of several diseases. In addition to disease identification, these approaches have the potential to identify the severity of a disease. The problem of disease severity identification can be considered multi-class classification, where the class labels are the severity levels of the disease. Many computational intelligence-based solutions have been presented by researchers for severity identification. This paper presents a comprehensive review of recent approaches for identifying disease severity levels using computational intelligence-based approaches. We followed the PRISMA guidelines and compiled several works from the last decade related to the severity identification of multidisciplinary diseases from well-known publishers, such as MDPI, Springer, IEEE, and Elsevier. This article is devoted to the severity identification of two main diseases, viz. Parkinson's disease and diabetic retinopathy. However, severity identification of a few other diseases, such as COVID-19, autonomic nervous system dysfunction, tuberculosis, sepsis, sleep apnea, psychosis, traumatic brain injury, breast cancer, knee osteoarthritis, and Alzheimer's disease, is also briefly covered. Each work has been carefully examined with respect to its methodology, the dataset used, the type of disease, and several performance metrics such as accuracy and specificity. In addition, we present a few public repositories that can be utilized to conduct research on disease severity identification. We hope that this review not only acts as a compendium but also provides insights to researchers working on disease severity identification using computational intelligence-based approaches.
Affiliation(s)
- Suman Bhakar: Department of Computer and Communication Engineering, Manipal University Jaipur, Dehmi Kalan, Jaipur 303007, Rajasthan, India
- Deepak Sinwar: Department of Computer and Communication Engineering, Manipal University Jaipur, Dehmi Kalan, Jaipur 303007, Rajasthan, India
- Nitesh Pradhan: Department of Computer Science and Engineering, Manipal University Jaipur, Dehmi Kalan, Jaipur 303007, Rajasthan, India
- Vijaypal Singh Dhaka: Department of Computer and Communication Engineering, Manipal University Jaipur, Dehmi Kalan, Jaipur 303007, Rajasthan, India
- Ivan Cherrez-Ojeda: Allergy and Pulmonology, Espíritu Santo University, Samborondón 0901-952, Ecuador
- Amna Parveen: College of Pharmacy, Gachon University, Medical Campus, No. 191, Hambakmoero, Yeonsu-gu, Incheon 21936, Republic of Korea
- Muhammad Umair Hassan: Department of ICT and Natural Sciences, Norwegian University of Science and Technology (NTNU), 6009 Ålesund, Norway
5
Mishra A, Singh L, Pandey M, Lakra S. Image based early detection of diabetic retinopathy: A systematic review on Artificial Intelligence (AI) based recent trends and approaches. Journal of Intelligent & Fuzzy Systems 2022. [DOI: 10.3233/jifs-220772]
Abstract
Diabetic Retinopathy (DR) is a disease that damages the retina of the human eye due to diabetic complications, resulting in a loss of vision. Blindness may be avoided if DR is detected at an early stage. Although DR is an irreversible process, early detection and treatment can significantly reduce the risk of vision loss. Manual diagnosis of DR from retinal fundus images by ophthalmologists is a time-consuming and error-prone process. Nowadays, machine learning and deep learning have become among the most effective approaches, in some cases even surpassing human performance as well as the performance of traditional image-processing-based algorithms and other computer-aided diagnosis systems in the analysis and classification of medical images. This paper addresses and evaluates the various recent state-of-the-art methodologies that have been used for the detection and classification of diabetic retinopathy using machine learning and deep learning approaches in the past decade. Furthermore, this study also provides the authors' observations and a performance evaluation of the available research using several parameters, such as accuracy, disease status, and sensitivity. Finally, we conclude with limitations, remedies, and future directions in DR detection. In addition, various challenging issues that need further study are also discussed.
Affiliation(s)
- Anju Mishra: Manav Rachna University, Faridabad, Haryana, India
- Laxman Singh: Noida Institute of Engineering and Technology, Greater Noida, U.P., India
- Sachin Lakra: Manav Rachna University, Faridabad, Haryana, India
6
Hervella ÁS, Rouco J, Novo J, Ortega M. Multimodal image encoding pre-training for diabetic retinopathy grading. Comput Biol Med 2022; 143:105302. [PMID: 35219187] [DOI: 10.1016/j.compbiomed.2022.105302]
Abstract
Diabetic retinopathy is an increasingly prevalent eye disorder that can lead to severe vision impairment. The severity grading of the disease using retinal images is key to provide an adequate treatment. However, in order to learn the diverse patterns and complex relations that are required for the grading, deep neural networks require very large annotated datasets that are not always available. This has been typically addressed by reusing networks that were pre-trained for natural image classification, hence relying on additional annotated data from a different domain. In contrast, we propose a novel pre-training approach that takes advantage of unlabeled multimodal visual data commonly available in ophthalmology. The use of multimodal visual data for pre-training purposes has been previously explored by training a network in the prediction of one image modality from another. However, that approach does not ensure a broad understanding of the retinal images, given that the network may exclusively focus on the similarities between modalities while ignoring the differences. Thus, we propose a novel self-supervised pre-training that explicitly teaches the networks to learn the common characteristics between modalities as well as the characteristics that are exclusive to the input modality. This provides a complete comprehension of the input domain and facilitates the training of downstream tasks that require a broad understanding of the retinal images, such as the grading of diabetic retinopathy. To validate and analyze the proposed approach, we performed an exhaustive experimentation on different public datasets. The transfer learning performance for the grading of diabetic retinopathy is evaluated under different settings while also comparing against previous state-of-the-art pre-training approaches. Additionally, a comparison against relevant state-of-the-art works for the detection and grading of diabetic retinopathy is also provided. The results show a satisfactory performance of the proposed approach, which outperforms previous pre-training alternatives in the grading of diabetic retinopathy.
Affiliation(s)
- Álvaro S Hervella: Centro de Investigación CITIC, Universidade da Coruña, A Coruña, Spain; VARPA Research Group, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, A Coruña, Spain
- José Rouco: Centro de Investigación CITIC, Universidade da Coruña, A Coruña, Spain; VARPA Research Group, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, A Coruña, Spain
- Jorge Novo: Centro de Investigación CITIC, Universidade da Coruña, A Coruña, Spain; VARPA Research Group, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, A Coruña, Spain
- Marcos Ortega: Centro de Investigación CITIC, Universidade da Coruña, A Coruña, Spain; VARPA Research Group, Instituto de Investigación Biomédica de A Coruña (INIBIC), Universidade da Coruña, A Coruña, Spain
7
Das D, Biswas SK, Bandyopadhyay S. A critical review on diagnosis of diabetic retinopathy using machine learning and deep learning. Multimedia Tools and Applications 2022; 81:25613-25655. [PMID: 35342328] [PMCID: PMC8940593] [DOI: 10.1007/s11042-022-12642-4]
Abstract
Diabetic Retinopathy (DR) is a health condition caused by Diabetes Mellitus (DM). It causes vision problems and blindness due to disfigurement of the human retina. According to statistics, 80% of patients who have battled diabetes over a long period of 15 to 20 years suffer from DR. Hence, it has become a dangerous threat to the health and life of people. Manual diagnosis of the disease is feasible but overwhelming and cumbersome, and therefore a revolutionary method is required. Such a health condition necessitates early recognition and diagnosis to prevent DR from developing into severe stages and to prevent blindness. Innumerable Machine Learning (ML) models have been proposed by researchers across the globe to achieve this purpose. Various feature extraction techniques have been proposed for the extraction of DR features for early detection. However, traditional ML models have shown either meagre generalization in feature extraction and classification when deployed on smaller datasets, or excessive training time that makes prediction inefficient when using larger datasets. Hence Deep Learning (DL), a new domain of ML, was introduced. DL models can handle smaller datasets with the help of efficient data processing techniques; however, they generally require larger datasets for their deep architectures to enhance performance in feature extraction and image classification. This paper gives a detailed review of DR, its features, causes, ML models, state-of-the-art DL models, challenges, comparisons, and future directions for the early detection of DR.
Affiliation(s)
- Dolly Das: National Institute of Technology Silchar, Cachar, Assam, India
8
Balasuganya B, Chinnasamy A, Sheela D. An Effective Framework for the Classification of Retinopathy Grade and Risk of Macular Edema for Diabetic Retinopathy Images. Journal of Medical Imaging and Health Informatics 2022. [DOI: 10.1166/jmihi.2022.3933]
Abstract
It is well known that, for a diabetic patient, Diabetic Retinopathy (DR) is a rapidly progressing condition that can result in total loss of vision. Early DR identification is therefore an important issue for protecting the eyes of diabetic patients and supports timely treatment. DR identification can be done manually or automatically. In the manual setting, assessing fundus images of the retina for morphological variation in microaneurysms (MA), exudates, hemorrhages, the macula, and blood vessels is a tedious and expensive task, whereas in an automated system, image processing techniques can be used for earlier DR identification. Here, a framework for DR detection is proposed. First, the input image is pre-processed using a hybrid of CLAHE and a circular average filter, and blood vessels are extracted with the Coye filter. Afterwards, the image undergoes abnormality segmentation, in which MA, hemorrhages, exudates, and neovascularization are segmented. Nearly 36 distinct features are extracted from the segmented images, and a hybrid salp swarm-cat swarm optimization (CSO) algorithm is used to select the most appropriate ones. Finally, classification is carried out by a modified RNN-LSTM. Three classifications are performed: (i) type of retinopathy, (ii) grade of retinopathy, and (iii) risk of Macular Edema (ME). The classification accuracies obtained are 99.73% for type of DR, 95.6% for NPDR grade, 99.4% for NPDR Macular Edema risk, and 92.3% for PDR Macular Edema risk. Simulation results show that, compared with the Decision Tree (DT) and Random Forest (RF) algorithms, this framework provides better results in terms of accuracy, sensitivity, specificity, and precision.
Affiliation(s)
- B. Balasuganya: Research Scholar, Department of Information and Communication Engineering, Anna University, Guindy, Chennai 600025, Tamil Nadu, India
- A. Chinnasamy: Department of Computer Science and Engineering, Sri Sairam Engineering College, Chennai 600044, Tamil Nadu, India
- D. Sheela: Department of Electronics and Communication Engineering, Saveetha School of Engineering, Saveetha University, Chennai 602105, Tamil Nadu, India
9
Kishore Kumar A, Udhayakumar A, Kalaiselvi K. Convolutional Neural Networks Based Classifier for Diabetic Retinopathy. Journal of Medical Imaging and Health Informatics 2022. [DOI: 10.1166/jmihi.2022.3932]
Abstract
Diabetic Retinopathy (DR) is a consequence of diabetes which causes damage to the retinal blood vessel networks. In most diabetics, this is a major vision-threatening problem. Color fundus images are used to diagnose DR, which requires competent doctors to determine the presence of lesions, and detecting DR in an automated manner is a difficult task. For automated disease identification, feature extraction is particularly useful, and in the current setting Convolutional Neural Networks (CNN) outperform earlier handcrafted-feature-based image classification approaches in terms of classification efficiency. This paper introduces a CNN structure for extracting characteristics from retinal fundus images in order to improve classification accuracy. In the proposed method, the output features of the CNN are employed as input to several machine learning classifiers. Using images from the MESSIDOR dataset, the method is tested with Random Tree, Hoeffding Tree, and Random Forest classifiers. Accuracy, False Positive Rate (FPR), Precision, Recall, F1-score, specificity, and Kappa-score are compared to determine the most efficient classifier. For the MESSIDOR dataset, the suggested feature extraction approach combined with the Random Forest classifier surpasses all other classifiers, achieving an average accuracy of 88% and a Kappa-score (k-score) of 0.7288.
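The general pattern described above, a CNN used as a fixed feature extractor whose output vector feeds a classical classifier, is sketched below. The toy network, random images, and labels are placeholders; this is not the paper's architecture or the MESSIDOR data.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score

cnn = nn.Sequential(                          # toy convolutional feature extractor
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten())                             # -> 32-dimensional feature vector per image

images = torch.rand(200, 3, 64, 64)           # placeholder fundus images
labels = np.random.randint(0, 2, 200)         # placeholder DR / no-DR labels

with torch.no_grad():
    features = cnn(images).numpy()            # CNN outputs become classifier inputs

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(features[:150], labels[:150])
pred = clf.predict(features[150:])
print(accuracy_score(labels[150:], pred), cohen_kappa_score(labels[150:], pred))
```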
Affiliation(s)
- A. Kishore Kumar: Department of Robotics and Automation, Sri Ramakrishna Engineering College, Coimbatore 641022, Tamil Nadu, India
- A. Udhayakumar: Department of ECE, Hindustan College of Engineering and Technology, Coimbatore 641022, Tamil Nadu, India
- K. Kalaiselvi: Department of ECE, Hindustan College of Engineering and Technology, Coimbatore 641022, Tamil Nadu, India
10
Suri JS, Agarwal S, Elavarthi P, Pathak R, Ketireddy V, Columbu M, Saba L, Gupta SK, Faa G, Singh IM, Turk M, Chadha PS, Johri AM, Khanna NN, Viskovic K, Mavrogeni S, Laird JR, Pareek G, Miner M, Sobel DW, Balestrieri A, Sfikakis PP, Tsoulfas G, Protogerou A, Misra DP, Agarwal V, Kitas GD, Teji JS, Al-Maini M, Dhanjil SK, Nicolaides A, Sharma A, Rathore V, Fatemi M, Alizad A, Krishnan PR, Ferenc N, Ruzsa Z, Gupta A, Naidu S, Kalra MK. Inter-Variability Study of COVLIAS 1.0: Hybrid Deep Learning Models for COVID-19 Lung Segmentation in Computed Tomography. Diagnostics (Basel) 2021; 11:2025. [PMID: 34829372] [PMCID: PMC8625039] [DOI: 10.3390/diagnostics11112025]
Abstract
Background: For COVID-19 lung severity, segmentation of lungs on computed tomography (CT) is the first crucial step. Current deep learning (DL)-based Artificial Intelligence (AI) models have a bias in the training stage of segmentation because only one set of ground truth (GT) annotations is evaluated. We propose a robust and stable inter-variability analysis of CT lung segmentation in COVID-19 to avoid the effect of bias. Methodology: The proposed inter-variability study consists of two GT tracers for lung segmentation on chest CT. Three AI models, PSP Net, VGG-SegNet, and ResNet-SegNet, were trained using GT annotations. We hypothesized that if AI models are trained on the GT tracings from multiple experience levels, and if the AI performance on the test data between these AI models is within the 5% range, one can consider such an AI model robust and unbiased. The K5 protocol (training to testing: 80%:20%) was adopted. Ten kinds of metrics were used for performance evaluation. Results: The database consisted of 5000 CT chest images from 72 COVID-19-infected patients. By computing the coefficient of correlation (CC) between the output of the two AI models trained corresponding to the two GT tracers, computing the differences in their CC, and repeating the process for all three AI models, we show the differences as 0%, 0.51%, and 2.04% (all < 5%), thereby validating the hypothesis. The performance was comparable; however, it had the following order: ResNet-SegNet > PSP Net > VGG-SegNet. Conclusions: The AI models were clinically robust and stable during the inter-variability analysis of CT lung segmentation in COVID-19 patients.
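The inter-variability test described above can be illustrated with a few lines: compare the correlation coefficients of two models (each trained on a different observer's tracings) against a reference and check that the difference stays under the 5% tolerance. The arrays below are synthetic placeholders, not COVLIAS outputs.

```python
import numpy as np

rng = np.random.default_rng(0)
gt_area = rng.uniform(2000, 6000, size=100)              # reference lung areas (pixels)
model_tracer1 = gt_area + rng.normal(0, 100, size=100)   # model trained on tracer-1 GT
model_tracer2 = gt_area + rng.normal(0, 120, size=100)   # model trained on tracer-2 GT

cc1 = np.corrcoef(gt_area, model_tracer1)[0, 1]
cc2 = np.corrcoef(gt_area, model_tracer2)[0, 1]
diff_percent = abs(cc1 - cc2) * 100.0

print(f"CC (tracer-1 model): {cc1:.4f}, CC (tracer-2 model): {cc2:.4f}")
print("within 5% tolerance:", diff_percent < 5.0)
```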
Affiliation(s)
- Jasjit S. Suri: Stroke Diagnostic and Monitoring Division, AtheroPoint™, Roseville, CA 95661, USA; Advanced Knowledge Engineering Centre, GBTI, Roseville, CA 95661, USA
- Sushant Agarwal: Advanced Knowledge Engineering Centre, GBTI, Roseville, CA 95661, USA; Department of Computer Science Engineering, PSIT, Kanpur 209305, India
- Pranav Elavarthi: Advanced Knowledge Engineering Centre, GBTI, Roseville, CA 95661, USA; Thomas Jefferson High School for Science and Technology, Alexandria, VA 22312, USA
- Rajesh Pathak: Department of Computer Science Engineering, Rawatpura Sarkar University, Raipur 492001, India
- Marta Columbu: Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), 10015 Cagliari, Italy
- Luca Saba: Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), 10015 Cagliari, Italy
- Suneet K. Gupta: Department of Computer Science, Bennett University, Noida 201310, India
- Gavino Faa: Department of Pathology, Azienda Ospedaliero Universitaria (A.O.U.), 10015 Cagliari, Italy
- Inder M. Singh: Stroke Diagnostic and Monitoring Division, AtheroPoint™, Roseville, CA 95661, USA
- Monika Turk: The Hanse-Wissenschaftskolleg Institute for Advanced Study, 27753 Delmenhorst, Germany
- Paramjit S. Chadha: Stroke Diagnostic and Monitoring Division, AtheroPoint™, Roseville, CA 95661, USA
- Amer M. Johri: Department of Medicine, Division of Cardiology, Queen’s University, Kingston, ON K7L 3N6, Canada
- Narendra N. Khanna: Department of Cardiology, Indraprastha APOLLO Hospitals, New Delhi 110076, India
- Sophie Mavrogeni: Cardiology Clinic, Onassis Cardiac Surgery Center, 10558 Athens, Greece
- John R. Laird: Heart and Vascular Institute, Adventist Health St. Helena, St. Helena, CA 94574, USA
- Gyan Pareek: Minimally Invasive Urology Institute, Brown University, Providence, RI 02912, USA
- Martin Miner: Men’s Health Center, Miriam Hospital, Providence, RI 02906, USA
- David W. Sobel: Minimally Invasive Urology Institute, Brown University, Providence, RI 02912, USA
- Antonella Balestrieri: Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), 10015 Cagliari, Italy
- Petros P. Sfikakis: Rheumatology Unit, National & Kapodistrian University of Athens, 10679 Athens, Greece
- George Tsoulfas: Aristoteleion University of Thessaloniki, 54636 Thessaloniki, Greece
- Durga Prasanna Misra: Department of Immunology, Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow 226014, India
- Vikas Agarwal: Department of Immunology, Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow 226014, India
- George D. Kitas: Academic Affairs, Dudley Group NHS Foundation Trust, Dudley DY1 2HQ, UK; Arthritis Research UK Epidemiology Unit, Manchester University, Manchester M13 9PT, UK
- Jagjit S. Teji: Ann and Robert H. Lurie Children’s Hospital of Chicago, Chicago, IL 60611, USA
- Mustafa Al-Maini: Allergy, Clinical Immunology and Rheumatology Institute, Toronto, ON L4Z 4C4, Canada
- Andrew Nicolaides: Vascular Screening and Diagnostic Centre, University of Nicosia Medical School, Nicosia 2368, Cyprus
- Aditya Sharma: Division of Cardiovascular Medicine, University of Virginia, Charlottesville, VA 22904, USA
- Vijay Rathore: AtheroPoint LLC, Roseville, CA 95611, USA
- Mostafa Fatemi: Department of Physiology & Biomedical Engineering, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
- Azra Alizad: Department of Radiology, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
- Nagy Ferenc: Internal Medicine Department, University of Szeged, 6725 Szeged, Hungary
- Zoltan Ruzsa: Invasive Cardiology Division, University of Szeged, 6725 Szeged, Hungary
- Archna Gupta: Radiology Department, Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow 226014, India
- Subbaram Naidu: Electrical Engineering Department, University of Minnesota, Duluth, MN 55812, USA
- Mannudeep K. Kalra: Department of Radiology, Massachusetts General Hospital, 55 Fruit Street, Boston, MA 02114, USA
11
Suri JS, Agarwal S, Pathak R, Ketireddy V, Columbu M, Saba L, Gupta SK, Faa G, Singh IM, Turk M, Chadha PS, Johri AM, Khanna NN, Viskovic K, Mavrogeni S, Laird JR, Pareek G, Miner M, Sobel DW, Balestrieri A, Sfikakis PP, Tsoulfas G, Protogerou A, Misra DP, Agarwal V, Kitas GD, Teji JS, Al-Maini M, Dhanjil SK, Nicolaides A, Sharma A, Rathore V, Fatemi M, Alizad A, Krishnan PR, Frence N, Ruzsa Z, Gupta A, Naidu S, Kalra M. COVLIAS 1.0: Lung Segmentation in COVID-19 Computed Tomography Scans Using Hybrid Deep Learning Artificial Intelligence Models. Diagnostics (Basel) 2021; 11:1405. [PMID: 34441340] [PMCID: PMC8392426] [DOI: 10.3390/diagnostics11081405]
Abstract
BACKGROUND COVID-19 lung segmentation using Computed Tomography (CT) scans is important for the diagnosis of lung severity. The process of automated lung segmentation is challenging due to (a) CT radiation dosage and (b) ground-glass opacities caused by COVID-19. The lung segmentation methodologies proposed in 2020 were semi- or fully automated but not reliable, accurate, and user-friendly. The proposed study presents a COVID Lung Image Analysis System (COVLIAS 1.0, AtheroPoint™, Roseville, CA, USA) consisting of hybrid deep learning (HDL) models for lung segmentation. METHODOLOGY COVLIAS 1.0 consists of three methods based on solo deep learning (SDL) or hybrid deep learning (HDL). SegNet is proposed in the SDL category while VGG-SegNet and ResNet-SegNet are designed under the HDL paradigm. The three proposed AI approaches were benchmarked against the National Institute of Health (NIH)-based conventional segmentation model using fuzzy connectedness. A cross-validation protocol with a 40:60 ratio between training and testing was designed, with 10% validation data. The ground truth (GT) was manually traced by radiologist-trained personnel. For performance evaluation, nine different criteria were selected to evaluate the SDL or HDL lung segmentation regions and lung long axis against the GT. RESULTS Using the database of 5000 chest CT images (from 72 patients), COVLIAS 1.0 yielded AUC of ~0.96, ~0.97, ~0.98, and ~0.96 (p-value < 0.001), respectively, within the 5% range of GT area, for SegNet, VGG-SegNet, ResNet-SegNet, and NIH. The mean Figure of Merit using the four models (left and right lung) was above 94%. On benchmarking against the National Institute of Health (NIH) segmentation method, the proposed model demonstrated a 58% and 44% improvement for ResNet-SegNet and a 52% and 36% improvement for VGG-SegNet in lung area and lung long axis, respectively. The PE statistics performance was in the following order: ResNet-SegNet > VGG-SegNet > NIH > SegNet. The HDL runs in <1 s on test data per image. CONCLUSIONS The COVLIAS 1.0 system can be applied in real-time for radiology-based clinical settings.
Affiliation(s)
- Jasjit S. Suri: Stroke Diagnostic and Monitoring Division, AtheroPoint™, Roseville, CA 95661, USA; Advanced Knowledge Engineering Centre, GBTI, Roseville, CA 95661, USA
- Sushant Agarwal: Advanced Knowledge Engineering Centre, GBTI, Roseville, CA 95661, USA; Department of Computer Science Engineering, PSIT, Kanpur 209305, India
- Rajesh Pathak: Department of Computer Science Engineering, Rawatpura Sarkar University, Raipur 492015, India
- Marta Columbu: Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), 09124 Cagliari, Italy
- Luca Saba: Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), 09124 Cagliari, Italy
- Suneet K. Gupta: Department of Computer Science, Bennett University, Noida 201310, India
- Gavino Faa: Department of Pathology, AOU of Cagliari, 09124 Cagliari, Italy
- Inder M. Singh: Stroke Diagnostic and Monitoring Division, AtheroPoint™, Roseville, CA 95661, USA
- Monika Turk: The Hanse-Wissenschaftskolleg Institute for Advanced Study, 27753 Delmenhorst, Germany
- Paramjit S. Chadha: Stroke Diagnostic and Monitoring Division, AtheroPoint™, Roseville, CA 95661, USA
- Amer M. Johri: Department of Medicine, Division of Cardiology, Queen’s University, Kingston, ON K7L 3N6, Canada
- Narendra N. Khanna: Department of Cardiology, Indraprastha APOLLO Hospitals, New Delhi 208011, India
- Klaudija Viskovic: Department of Radiology, University Hospital for Infectious Diseases, 10000 Zagreb, Croatia
- Sophie Mavrogeni: Cardiology Clinic, Onassis Cardiac Surgery Center, 176 74 Athens, Greece
- John R. Laird: Heart and Vascular Institute, Adventist Health St. Helena, St. Helena, CA 94574, USA
- Gyan Pareek: Minimally Invasive Urology Institute, Brown University, Providence, RI 02912, USA
- Martin Miner: Men’s Health Center, Miriam Hospital, Providence, RI 02906, USA
- David W. Sobel: Minimally Invasive Urology Institute, Brown University, Providence, RI 02912, USA
- Antonella Balestrieri: Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), 09124 Cagliari, Italy
- Petros P. Sfikakis: Rheumatology Unit, National Kapodistrian University of Athens, 157 72 Athens, Greece
- George Tsoulfas: Department of Transplantation Surgery, Aristoteleion University of Thessaloniki, 541 24 Thessaloniki, Greece
- Durga Prasanna Misra: Department of Immunology, Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow 226014, India
- Vikas Agarwal: Department of Immunology, Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow 226014, India
- George D. Kitas: Academic Affairs, Dudley Group NHS Foundation Trust, Dudley DY1 2HQ, UK; Arthritis Research UK Epidemiology Unit, Manchester University, Manchester M13 9PL, UK
- Jagjit S. Teji: Ann and Robert H. Lurie Children’s Hospital of Chicago, Chicago, IL 60611, USA
- Mustafa Al-Maini: Allergy, Clinical Immunology and Rheumatology Institute, Toronto, ON M5G 1N8, Canada
- Andrew Nicolaides: Vascular Screening and Diagnostic Centre, University of Nicosia Medical School, Nicosia 2408, Cyprus
- Aditya Sharma: Division of Cardiovascular Medicine, University of Virginia, Charlottesville, VA 22904, USA
- Vijay Rathore: Athero Point LLC, Roseville, CA 95611, USA
- Mostafa Fatemi: Department of Physiology & Biomedical Engineering, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
- Azra Alizad: Department of Radiology, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
- Nagy Frence: Department of Internal Medicines, Invasive Cardiology Division, University of Szeged, 6720 Szeged, Hungary
- Zoltan Ruzsa: Department of Internal Medicines, Invasive Cardiology Division, University of Szeged, 6720 Szeged, Hungary
- Archna Gupta: Radiology Department, Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow 226014, India
- Subbaram Naidu: Electrical Engineering Department, University of Minnesota, Duluth, MN 55455, USA
- Mannudeep Kalra: Department of Radiology, Massachusetts General Hospital, 55 Fruit Street, Boston, MA 02114, USA
12
Maheshwari S, Sharma RR, Kumar M. LBP-based information assisted intelligent system for COVID-19 identification. Comput Biol Med 2021; 134:104453. [PMID: 33957343] [PMCID: PMC8087862] [DOI: 10.1016/j.compbiomed.2021.104453]
Abstract
A real-time COVID-19 detection system is an utmost requirement of the present situation. This article presents a chest X-ray image-based automated COVID-19 detection system which can be employed with the RT-PCR test to improve the diagnosis rate. In the proposed approach, the textural features are extracted from the chest X-ray images and local binary pattern (LBP) based images. Further, the image-based and LBP image-based features are jointly investigated. Thereafter, highly discriminatory features are provided to the classifier for developing an automated model for COVID-19 identification. The performance of the proposed approach is investigated over 2905 chest X-ray images of normal, pneumonia, and COVID-19 infected persons on various class combinations to analyze the robustness. The developed method achieves 97.97% accuracy (acc) and 99.88% sensitivity (sen) for classifying COVID-19 X-ray images against pneumonia infected and normal person's X-ray images. It attains 98.91% acc and 99.33% sen for COVID-19 X-ray against the normal X-ray classification. This method can be employed to assist the radiologists during mass screening for fast, accurate, and contact-free COVID-19 diagnosis.
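A minimal sketch of the texture-descriptor idea summarized above: compute a uniform local binary pattern (LBP) image, build its histogram as a feature vector, and train a classifier on those vectors. The random "X-rays", labels, SVM choice, and LBP parameters are placeholders rather than the paper's configuration.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

P, R = 8, 1                                        # 8 neighbours on a radius-1 circle

def lbp_histogram(image):
    lbp = local_binary_pattern(image, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

rng = np.random.default_rng(0)
xrays = (rng.random((100, 128, 128)) * 255).astype(np.uint8)   # placeholder chest X-rays
labels = rng.integers(0, 2, 100)                               # placeholder COVID / normal labels

features = np.array([lbp_histogram(img) for img in xrays])     # LBP-based texture descriptors
clf = SVC(kernel="rbf").fit(features[:80], labels[:80])
print(accuracy_score(labels[80:], clf.predict(features[80:])))
```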
Affiliation(s)
- Shishir Maheshwari: Discipline of Electrical and Electronics Engineering, Birla Institute of Technology and Science, Pilani 333031, India
- Rishi Raj Sharma: Department of Electronics Engineering, Defence Institute of Advanced Technology, Pune 411025, India
- Mohit Kumar: NAF Department, Indian Institute of Technology Kanpur, Kanpur, India
13
Ashraf MN, Hussain M, Habib Z. Review of Various Tasks Performed in the Preprocessing Phase of a Diabetic Retinopathy Diagnosis System. Curr Med Imaging 2021; 16:397-426. [PMID: 32410541] [DOI: 10.2174/1573405615666190219102427]
Abstract
Diabetic Retinopathy (DR) is a major cause of blindness in diabetic patients. The increasing population of diabetic patients and difficulty to diagnose it at an early stage are limiting the screening capabilities of manual diagnosis by ophthalmologists. Color fundus images are widely used to detect DR lesions due to their comfortable, cost-effective and non-invasive acquisition procedure. Computer Aided Diagnosis (CAD) of DR based on these images can assist ophthalmologists and help in saving many sight years of diabetic patients. In a CAD system, preprocessing is a crucial phase, which significantly affects its performance. Commonly used preprocessing operations are the enhancement of poor contrast, balancing the illumination imbalance due to the spherical shape of a retina, noise reduction, image resizing to support multi-resolution, color normalization, extraction of a field of view (FOV), etc. Also, the presence of blood vessels and optic discs makes the lesion detection more challenging because these two artifacts exhibit specific attributes, which are similar to those of DR lesions. Preprocessing operations can be broadly divided into three categories: 1) fixing the native defects, 2) segmentation of blood vessels, and 3) localization and segmentation of optic discs. This paper presents a review of the state-of-the-art preprocessing techniques related to three categories of operations, highlighting their significant aspects and limitations. The survey is concluded with the most effective preprocessing methods, which have been shown to improve the accuracy and efficiency of the CAD systems.
Affiliation(s)
- Muhammad Hussain: Department of Computer Science, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
- Zulfiqar Habib: Department of Computer Science, COMSATS University Islamabad, Lahore, Pakistan
14
Gayathri S, Gopi VP, Palanisamy P. Diabetic retinopathy classification based on multipath CNN and machine learning classifiers. Phys Eng Sci Med 2021; 44:639-653. [PMID: 34033015] [DOI: 10.1007/s13246-021-01012-3]
Abstract
Eye care professionals generally use fundoscopy to confirm the occurrence of Diabetic Retinopathy (DR) in patients. Early DR detection and accurate DR grading are critical for the care and management of this disease. This work proposes an automated DR grading method in which features can be extracted from the fundus images and categorized based on severity using deep learning and Machine Learning (ML) algorithms. A Multipath Convolutional Neural Network (M-CNN) is used for global and local feature extraction from images. Then, a machine learning classifier is used to categorize the input according to the severity. The proposed model is evaluated across different publicly available databases (IDRiD, Kaggle (for DR detection), and MESSIDOR) and different ML classifiers (Support Vector Machine (SVM), Random Forest, and J48). The metrics selected for model evaluation are the False Positive Rate (FPR), Specificity, Precision, Recall, F1-score, K-score, and Accuracy. The experiments show that the best response is produced by the M-CNN network with the J48 classifier. The classifiers are evaluated across the pre-trained network features and existing DR grading methods. The average accuracy obtained for the proposed work is 99.62% for DR grading. The experiments and evaluation results show that the proposed method works well for accurate DR grading and early disease detection.
Affiliation(s)
- S Gayathri: National Institute of Technology, Tiruchirappalli, Tamil Nadu, India
- Varun P Gopi: National Institute of Technology, Tiruchirappalli, Tamil Nadu, India
- P Palanisamy: National Institute of Technology, Tiruchirappalli, Tamil Nadu, India
15
Assessing Changes in Diabetic Retinopathy Caused by Diabetes Mellitus and Glaucoma Using Support Vector Machines in Combination with Differential Evolution Algorithm. Applied Sciences (Basel) 2021. [DOI: 10.3390/app11093944]
Abstract
The aim of this study is to evaluate the changes related to diabetic retinopathy (DR) (no changes, small or moderate changes) in patients with glaucoma and diabetes using artificial intelligence instruments: Support Vector Machines (SVM) in combination with a powerful optimization algorithm, Differential Evolution (DE). In order to classify the DR changes and to make predictions in various situations, an approach including SVM optimized with DE was applied. The role of the optimizer was to automatically determine the SVM parameters that lead to the lowest classification error. The study was conducted on a sample of 52 patients, in particular 101 eyes with glaucoma and diabetes mellitus, in the Ophthalmology Clinic I of the "St. Spiridon" Clinical Hospital of Iaşi. The criteria considered in the modelling action were normal or hypertensive open-angle glaucoma, intraocular hypertension, and associated diabetes. Patients with other types of glaucoma (pseudoexfoliation, pigment, cortisone, neovascular and primitive angle-closure), and those without associated diabetes, were excluded. The assessment of diabetic retinopathy changes was carried out with a Volk lens and Zeiss fundus camera retinal photography on the dilated pupil, inspecting all quadrants. The criteria for classifying the DR changes (early treatment diabetic retinopathy study, ETDRS) were: without changes (absence of DR), mild form of nonproliferative diabetic retinopathy (the presence of a single microaneurysm), moderate form (microaneurysms, hemorrhages in 2–3 quadrants, venous dilatations and soft exudates in a quadrant), severe form (microaneurysms, hemorrhages in all quadrants, venous dilatation in 2–3 quadrants) and proliferative diabetic retinopathy (disk and retinal neovascularization in different quadrants). Any new clinical element that occurred in subsequent checks, which led to inclusion in the severe nonproliferative or proliferative forms of diabetic retinopathy, was considered to be the result of the progression of diabetic retinopathy. The results obtained were very good; in the testing phase, an accuracy of 95.23% was obtained, with only one sample wrongly classified. The effectiveness of the classification algorithm (SVM), developed in optimal form with DE and used in predictions of retinal changes related to diabetes, was demonstrated.
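The SVM-plus-DE idea above can be sketched generically: differential evolution searches the SVM hyperparameters (here C and gamma) that minimise the cross-validated classification error. Synthetic data stands in for the study's clinical features; the bounds and iteration budget are arbitrary.

```python
from scipy.optimize import differential_evolution
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=101, n_features=10, n_classes=3,
                           n_informative=6, random_state=0)

def classification_error(params):
    log_c, log_gamma = params
    svm = SVC(C=10.0 ** log_c, gamma=10.0 ** log_gamma, kernel="rbf")
    return 1.0 - cross_val_score(svm, X, y, cv=5).mean()    # error to minimise

bounds = [(-2, 3), (-4, 1)]                                  # search ranges for log10(C), log10(gamma)
result = differential_evolution(classification_error, bounds, maxiter=20, seed=0)
print("best log10(C), log10(gamma):", result.x, "CV error:", result.fun)
```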
16
Rajinikanth V, Sivakumar R, Hemanth DJ, Kadry S, Mohanty JR, Arunmozhi S, Raja NSM, Nhu NG. Automated classification of retinal images into AMD/non-AMD Class—a study using multi-threshold and Gassian-filter enhanced images. Evolutionary Intelligence 2021. [DOI: 10.1007/s12065-021-00581-2]
17
AI applications in robotics, diagnostic image analysis and precision medicine: Current limitations, future trends, guidelines on CAD systems for medicine. Informatics in Medicine Unlocked 2021. [DOI: 10.1016/j.imu.2021.100596]
18
Bibi I, Mir J, Raja G. Automated detection of diabetic retinopathy in fundus images using fused features. Phys Eng Sci Med 2020; 43:1253-1264. [PMID: 32955686] [DOI: 10.1007/s13246-020-00929-5]
Abstract
Diabetic retinopathy (DR) is one of the severe eye conditions due to diabetes complication which can lead to vision loss if left untreated. In this paper, a computationally simple, yet very effective, DR detection method is proposed. First, a segmentation independent two-stage preprocessing based technique is proposed which can effectively extract DR pathognomonic signs; both bright and red lesions, and blood vessels from the eye fundus image. Then, the performance of Local Binary Patterns (LBP), Local Ternary Patterns (LTP), Dense Scale-Invariant Feature Transform (DSIFT) and Histogram of Oriented Gradients (HOG) as a feature descriptor for fundus images, is thoroughly analyzed. SVM kernel-based classifiers are trained and tested, using a 5-fold cross-validation scheme, on both newly acquired fundus image database from the local hospital and combined database created from the open-sourced available databases. The classification accuracy of 96.6% with 0.964 sensitivity and 0.969 specificity is achieved using a Cubic SVM classifier with LBP and LTP fused features for the local database. More importantly, in out-of-sample testing on the combined database, the model gives an accuracy of 95.21% with a sensitivity of 0.970 and specificity of 0.932. This indicates the proposed model is very well-fitted and generalized which is further corroborated by the presented train-test curves.
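The fusion-and-classification step described above is sketched below: descriptors computed from the same image (e.g. an LBP histogram and an LTP histogram) are concatenated into one vector and classified with a cubic SVM, i.e. a degree-3 polynomial kernel, under 5-fold cross-validation. The descriptor arrays are random placeholders, not real LBP/LTP features or the authors' databases.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_images = 300
lbp_features = rng.random((n_images, 59))          # e.g. uniform-LBP histogram bins
ltp_features = rng.random((n_images, 118))         # e.g. upper + lower LTP histograms
labels = rng.integers(0, 2, n_images)              # DR vs. healthy placeholder labels

fused = np.hstack([lbp_features, ltp_features])    # simple feature-level fusion
cubic_svm = SVC(kernel="poly", degree=3)           # "Cubic SVM"
scores = cross_val_score(cubic_svm, fused, labels, cv=5)   # 5-fold CV as in the paper
print("mean CV accuracy:", scores.mean())
```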
Affiliation(s)
- Iqra Bibi: Electrical Engineering Department, University of Engineering and Technology, Taxila, Pakistan
- Junaid Mir: Electrical Engineering Department, University of Engineering and Technology, Taxila, Pakistan
- Gulistan Raja: Electrical Engineering Department, University of Engineering and Technology, Taxila, Pakistan
19
A lightweight CNN for Diabetic Retinopathy classification from fundus images. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2020.102115]
20
Jadhav AS, Patil PB, Biradar S. Analysis on diagnosing diabetic retinopathy by segmenting blood vessels, optic disc and retinal abnormalities. J Med Eng Technol 2020; 44:299-316. [PMID: 32729345] [DOI: 10.1080/03091902.2020.1791986]
Abstract
The main intention of mass screening programmes for Diabetic Retinopathy (DR) is to detect and diagnose the disorder before it leads to vision loss. Automated analysis of retinal images has the potential to improve the efficacy of screening programmes compared with manual image analysis. This article develops a framework for the detection of DR from retinal fundus images using three evaluations based on the optic disc, blood vessels, and retinal abnormalities. Initially, pre-processing steps such as green channel conversion and Contrast Limited Adaptive Histogram Equalisation (CLAHE) are performed. The segmentation procedure then starts with optic disc segmentation by an open-close watershed transform, blood vessel segmentation by grey-level thresholding, and abnormality segmentation (hard exudates, haemorrhages, microaneurysms and soft exudates) by top-hat transform and Gabor filtering mechanisms. From the three segmented images, features such as the local binary pattern, texture energy measurement, and Shannon's and Kapur's entropy are extracted and subjected to an optimal feature selection process using a new hybrid optimisation algorithm termed the Trial-based Bypass Improved Dragonfly Algorithm (TB-DA). These features are given to a hybrid machine learning algorithm combining NN and DBN. As a modification, the same hybrid TB-DA is used to enhance the training of the hybrid classifier, which categorises images as normal, mild, moderate or severe based on the three components.
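The early preprocessing and exudate-enhancement steps named above (green-channel extraction, CLAHE, and a top-hat transform) are sketched below with OpenCV. The file path, kernel size, clip limit, and threshold are illustrative assumptions, not the paper's settings.

```python
import cv2

fundus = cv2.imread("fundus.jpg")                  # hypothetical input image path
green = fundus[:, :, 1]                            # green channel (index 1 in BGR): best vessel contrast

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(green)                      # contrast-limited adaptive histogram equalisation

# Top-hat transform highlights small bright structures such as hard exudates
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
tophat = cv2.morphologyEx(enhanced, cv2.MORPH_TOPHAT, kernel)

# Simple intensity threshold on the top-hat response as a crude lesion-candidate map
_, lesion_candidates = cv2.threshold(tophat, 30, 255, cv2.THRESH_BINARY)
cv2.imwrite("lesion_candidates.png", lesion_candidates)
```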
Affiliation(s)
- Ambaji S Jadhav: Department of Electrical and Electronics, B.L.D.E.A's V.P. Dr. P.G. Halakatti College of Engineering & Technology (Affiliated to Visvesvaraya Technological University, Belagavi), Vijayapur, India
- Pushpa B Patil: Department of Computer Science & Engineering, B.L.D.E.A's V.P. Dr. P.G. Halakatti College of Engineering & Technology (Affiliated to Visvesvaraya Technological University, Belagavi), Vijayapur, India
- Sunil Biradar: Department of Ophthalmology, Shri B.M. Patil Medical College Hospital and Research Center, Vijayapur, India
21
Feng S, Zhuo Z, Pan D, Tian Q. CcNet: A cross-connected convolutional network for segmenting retinal vessels using multi-scale features. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2018.10.098]
22
Blended Multi-Modal Deep ConvNet Features for Diabetic Retinopathy Severity Prediction. Electronics 2020. [DOI: 10.3390/electronics9060914]
Abstract
Diabetic Retinopathy (DR) is one of the major causes of visual impairment and blindness across the world. It is usually found in patients who have suffered from diabetes for a long period. The major focus of this work is to derive an optimal representation of retinal images that helps to improve the performance of DR recognition models. To extract this representation, features extracted from multiple pre-trained ConvNet models are blended using the proposed multi-modal fusion module. These final representations are used to train a Deep Neural Network (DNN) for DR identification and severity level prediction. As each ConvNet extracts different features, fusing them using 1D pooling and cross pooling leads to a better representation than using features extracted from a single ConvNet. Experimental studies on the benchmark Kaggle APTOS 2019 contest dataset reveal that the model trained on the proposed blended feature representations is superior to existing methods. In addition, we notice that cross average pooling based fusion of features from Xception and VGG16 is the most appropriate for DR recognition. With the proposed model, we achieve an accuracy of 97.41% and a kappa statistic of 94.82 for DR identification, and an accuracy of 81.7% and a kappa statistic of 71.1% for severity level prediction. Another interesting observation is that a DNN with dropout at the input layer converges more quickly when trained using blended features, compared to the same model trained using uni-modal deep features.
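The fusion idea can be sketched in Keras as below; the 512-unit projections and the element-wise averaging are my simplification of the paper's cross-average-pooling module, and the input sizes, head and training details are assumptions rather than the authors' exact architecture (assumes TensorFlow 2.6+).

```python
# Hedged sketch: blend global-average-pooled features from two pre-trained
# backbones (Xception and VGG16) before a small DNN head for 5-grade DR.
import tensorflow as tf

inp = tf.keras.Input(shape=(299, 299, 3))
xcep = tf.keras.applications.Xception(include_top=False, weights="imagenet")(inp)
vgg_in = tf.keras.layers.Resizing(224, 224)(inp)          # VGG16 expects smaller inputs
vgg = tf.keras.applications.VGG16(include_top=False, weights="imagenet")(vgg_in)

f1 = tf.keras.layers.Dense(512, activation="relu")(tf.keras.layers.GlobalAveragePooling2D()(xcep))
f2 = tf.keras.layers.Dense(512, activation="relu")(tf.keras.layers.GlobalAveragePooling2D()(vgg))
blended = tf.keras.layers.Average()([f1, f2])              # element-wise fusion of the two descriptors

x = tf.keras.layers.Dropout(0.5)(blended)
out = tf.keras.layers.Dense(5, activation="softmax")(x)    # 5 DR severity grades
model = tf.keras.Model(inp, out)
```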
Collapse
|
23
|
Jadhav AS, Patil PB, Biradar S. Optimal feature selection-based diabetic retinopathy detection using improved rider optimization algorithm enabled with deep learning. EVOLUTIONARY INTELLIGENCE 2020. [DOI: 10.1007/s12065-020-00400-0] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
|
24
|
Detection of Early Signs of Diabetic Retinopathy Based on Textural and Morphological Information in Fundus Images. SENSORS 2020; 20:s20041005. [PMID: 32069912 PMCID: PMC7071097 DOI: 10.3390/s20041005] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/16/2019] [Revised: 01/30/2020] [Accepted: 02/10/2020] [Indexed: 02/01/2023]
Abstract
The number of blind people in the world is estimated to exceed 40 million by 2025. It is therefore necessary to develop novel algorithms, based on fundus image descriptors, that allow the automatic classification of retinal tissue into healthy and pathological at early stages. In this paper, we focus on one of the most common pathologies in current society: diabetic retinopathy. The proposed method avoids the need for lesion segmentation or candidate map generation before the classification stage. Local binary patterns and granulometric profiles are locally computed to extract texture and morphological information from retinal images. Different combinations of this information feed classification algorithms to optimally discriminate bright and dark lesions from healthy tissue. Through several experiments, the ability of the proposed system to identify diabetic retinopathy signs is validated using different public databases with a large degree of variability and without image exclusion.
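A hedged sketch of the two descriptor families mentioned here, local binary patterns for texture and a simple granulometric profile for morphology, might look as follows; the radii, bin counts and input file are illustrative, not the paper's settings.

```python
# LBP histogram + a basic granulometric profile (image volume removed by
# openings of increasing size) as a combined texture/morphology descriptor.
import numpy as np
from skimage import io, color, morphology
from skimage.feature import local_binary_pattern

gray = color.rgb2gray(io.imread("fundus.jpg"))     # hypothetical input

# Uniform LBP histogram as the texture descriptor (P=8 gives 10 uniform codes)
lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
lbp_hist, _ = np.histogram(lbp, bins=np.arange(0, 11), density=True)

# Granulometric profile: "volume" left after openings with growing disks
profile = []
for r in (1, 2, 4, 8):
    opened = morphology.opening(gray, morphology.disk(r))
    profile.append(opened.sum())
granulometry = -np.diff(profile)                   # volume removed at each scale

features = np.concatenate([lbp_hist, granulometry])
```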
Collapse
|
25
|
He Y, Jiao W, Shi Y, Lian J, Zhao B, Zou W, Zhu Y, Zheng Y. Segmenting Diabetic Retinopathy Lesions in Multispectral Images Using Low-Dimensional Spatial-Spectral Matrix Representation. IEEE J Biomed Health Inform 2020; 24:493-502. [DOI: 10.1109/jbhi.2019.2912668] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
|
26
|
Porwal P, Pachade S, Kokare M, Deshmukh G, Son J, Bae W, Liu L, Wang J, Liu X, Gao L, Wu T, Xiao J, Wang F, Yin B, Wang Y, Danala G, He L, Choi YH, Lee YC, Jung SH, Li Z, Sui X, Wu J, Li X, Zhou T, Toth J, Baran A, Kori A, Chennamsetty SS, Safwan M, Alex V, Lyu X, Cheng L, Chu Q, Li P, Ji X, Zhang S, Shen Y, Dai L, Saha O, Sathish R, Melo T, Araújo T, Harangi B, Sheng B, Fang R, Sheet D, Hajdu A, Zheng Y, Mendonça AM, Zhang S, Campilho A, Zheng B, Shen D, Giancardo L, Quellec G, Mériaudeau F. IDRiD: Diabetic Retinopathy - Segmentation and Grading Challenge. Med Image Anal 2019; 59:101561. [PMID: 31671320 DOI: 10.1016/j.media.2019.101561] [Citation(s) in RCA: 79] [Impact Index Per Article: 15.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2019] [Revised: 09/09/2019] [Accepted: 09/16/2019] [Indexed: 02/07/2023]
Abstract
Diabetic Retinopathy (DR) is the most common cause of avoidable vision loss, predominantly affecting the working-age population across the globe. Screening for DR, coupled with timely consultation and treatment, is a globally trusted policy to avoid vision loss. However, implementation of DR screening programs is challenging due to the scarcity of medical professionals able to screen a growing global diabetic population at risk for DR. Computer-aided disease diagnosis in retinal image analysis could provide a sustainable approach for such large-scale screening efforts. Recent scientific advances in computing capacity and machine learning approaches provide an avenue for biomedical scientists to reach this goal. Aiming to advance the state-of-the-art in automatic DR diagnosis, a grand challenge on "Diabetic Retinopathy - Segmentation and Grading" was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI 2018). In this paper, we report the set-up and results of this challenge, which is primarily based on the Indian Diabetic Retinopathy Image Dataset (IDRiD). There were three principal sub-challenges: lesion segmentation, disease severity grading, and localization and segmentation of retinal landmarks. The multiple tasks in this challenge allow the generalizability of algorithms to be tested, which distinguishes it from existing challenges. It received a positive response from the scientific community, with 148 submissions effectively entered from 495 registrations. This paper outlines the challenge, its organization, the dataset used, the evaluation methods and the results of the top-performing participating solutions. The top-performing approaches utilized a blend of clinical information, data augmentation, and an ensemble of models. These findings have the potential to enable new developments in retinal image analysis and image-based DR screening in particular.
Collapse
Affiliation(s)
- Prasanna Porwal
- Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India; School of Biomedical Informatics, University of Texas Health Science Center at Houston, USA.
| | - Samiksha Pachade
- Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India; School of Biomedical Informatics, University of Texas Health Science Center at Houston, USA
| | - Manesh Kokare
- Shri Guru Gobind Singhji Institute of Engineering and Technology, Nanded, India
| | | | | | | | - Lihong Liu
- Ping An Technology (Shenzhen) Co.,Ltd, China
| | | | - Xinhui Liu
- Ping An Technology (Shenzhen) Co.,Ltd, China
| | | | - TianBo Wu
- Ping An Technology (Shenzhen) Co.,Ltd, China
| | - Jing Xiao
- Ping An Technology (Shenzhen) Co.,Ltd, China
| | | | | | - Yunzhi Wang
- School of Electrical and Computer Engineering, University of Oklahoma, USA
| | - Gopichandh Danala
- School of Electrical and Computer Engineering, University of Oklahoma, USA
| | - Linsheng He
- School of Electrical and Computer Engineering, University of Oklahoma, USA
| | - Yoon Ho Choi
- Samsung Advanced Institute for Health Sciences & Technology (SAIHST), Sungkyunkwan University, Seoul, Republic of Korea
| | - Yeong Chan Lee
- Samsung Advanced Institute for Health Sciences & Technology (SAIHST), Sungkyunkwan University, Seoul, Republic of Korea
| | - Sang-Hyuk Jung
- Samsung Advanced Institute for Health Sciences & Technology (SAIHST), Sungkyunkwan University, Seoul, Republic of Korea
| | - Zhongyu Li
- Department of Computer Science, University of North Carolina at Charlotte, USA
| | - Xiaodan Sui
- School of Information Science and Engineering, Shandong Normal University, China
| | - Junyan Wu
- Cleerly Inc., New York, United States
| | | | - Ting Zhou
- University at Buffalo, New York, United States
| | - Janos Toth
- University of Debrecen, Faculty of Informatics 4002 Debrecen, POB 400, Hungary
| | - Agnes Baran
- University of Debrecen, Faculty of Informatics 4002 Debrecen, POB 400, Hungary
| | | | | | | | | | - Xingzheng Lyu
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China; Machine Learning for Bioimage Analysis Group, Bioinformatics Institute, A*STAR, Singapore
| | - Li Cheng
- Machine Learning for Bioimage Analysis Group, Bioinformatics Institute, A*STAR, Singapore; Department of Electric and Computer Engineering, University of Alberta, Canada
| | - Qinhao Chu
- School of Computing, National University of Singapore, Singapore
| | - Pengcheng Li
- School of Computing, National University of Singapore, Singapore
| | - Xin Ji
- Beijing Shanggong Medical Technology Co., Ltd., China
| | - Sanyuan Zhang
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
| | - Yaxin Shen
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, China; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, China
| | - Ling Dai
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, China; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, China
| | | | | | - Tânia Melo
- INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal
| | - Teresa Araújo
- INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal; FEUP - Faculty of Engineering of the University of Porto, Porto, Portugal
| | - Balazs Harangi
- University of Debrecen, Faculty of Informatics 4002 Debrecen, POB 400, Hungary
| | - Bin Sheng
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, China; MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, China
| | - Ruogu Fang
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, USA
| | | | - Andras Hajdu
- University of Debrecen, Faculty of Informatics 4002 Debrecen, POB 400, Hungary
| | - Yuanjie Zheng
- School of Information Science and Engineering, Shandong Normal University, China
| | - Ana Maria Mendonça
- INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal; FEUP - Faculty of Engineering of the University of Porto, Porto, Portugal
| | - Shaoting Zhang
- Department of Computer Science, University of North Carolina at Charlotte, USA
| | - Aurélio Campilho
- INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal; FEUP - Faculty of Engineering of the University of Porto, Porto, Portugal
| | - Bin Zheng
- School of Electrical and Computer Engineering, University of Oklahoma, USA
| | - Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
| | - Luca Giancardo
- School of Biomedical Informatics, University of Texas Health Science Center at Houston, USA
| | | | - Fabrice Mériaudeau
- Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, Malaysia; ImViA/IFTIM, Université de Bourgogne, Dijon, France
| |
Collapse
|
27
|
Wang X, Zhang S, Liang X, Zheng C, Zheng J, Sun M. A CNN-based retinal image quality assessment system for teleophthalmology. J Mech Med Biol 2019. [DOI: 10.1142/s0219519419500301] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/02/2023]
Abstract
Oculopathy is a widespread disease among people of all ages around the world. Teleophthalmology can facilitate ophthalmological diagnosis in less developed countries that lack medical resources. In teleophthalmology, the assessment of retinal image quality is of great importance. In this paper, we propose a no-reference retinal image assessment system based on DenseNet, a convolutional neural network architecture. The system classifies fundus images into good and bad quality, or into five categories: adequate, just noticeable blur, inappropriate illumination, incomplete optic disc, and opacity. The proposed system was evaluated on different datasets and compared with applications based on two other networks: VGG-16 and GoogLeNet. For binary classification, the good-versus-bad classifier achieves an AUC of 1.000, and the degradation-specific classifiers that distinguish one specified degradation from the rest achieve AUC values of 0.972, 0.990, 0.982 and 0.982 for the four categories, respectively. The multi-class classification based on DenseNet achieves an overall accuracy of 0.927, which is significantly higher than the 0.549 and 0.757 obtained using VGG-16 and GoogLeNet, respectively. The experimental results indicate that the proposed approach produces outstanding performance in retinal image quality assessment and is worth applying in ophthalmological telemedicine applications. In addition, the proposed approach is robust to image noise. This study fills the gap of multi-class classification in retinal image quality assessment.
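A minimal transfer-learning sketch in the spirit of the DenseNet-based quality classifier could look like this; the five categories come from the abstract, while the input size, frozen backbone, head width and optimizer are assumptions.

```python
# DenseNet121 backbone with a small classification head for the five
# quality categories; starts from frozen ImageNet features.
import tensorflow as tf

base = tf.keras.applications.DenseNet121(include_top=False, weights="imagenet",
                                          input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                             # freeze the ImageNet features initially

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(5, activation="softmax"),  # adequate / blur / illumination / incomplete disc / opacity
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```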
Collapse
Affiliation(s)
- Xuewei Wang
- Department of Precision Machinery and Instrumentation, University of Science and Technology of China, Hefei 230022, P. R. China
| | - Shulin Zhang
- Department of Precision Machinery and Instrumentation, University of Science and Technology of China, Hefei 230022, P. R. China
| | - Xiao Liang
- School of Mechanical Engineering, Shijiazhuang Tiedao University, Shijiazhuang 050043, P. R. China
| | - Chun Zheng
- The 105 Hospital of PLA, Hefei 230031, P. R. China
| | - Jinjin Zheng
- Department of Precision Machinery and Instrumentation, University of Science and Technology of China, Hefei 230022, P. R. China
| | - Mingzhai Sun
- Department of Precision Machinery and Instrumentation, University of Science and Technology of China, Hefei 230022, P. R. China
| |
Collapse
|
28
|
Li Q, Fan S, Chen C. An Intelligent Segmentation and Diagnosis Method for Diabetic Retinopathy Based on Improved U-NET Network. J Med Syst 2019; 43:304. [PMID: 31407110 DOI: 10.1007/s10916-019-1432-0] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2019] [Accepted: 07/29/2019] [Indexed: 11/26/2022]
Abstract
Due to insufficient samples, the generalization performance of deep networks is limited. To address this problem, an improved U-net-based automatic image segmentation and diagnosis algorithm was proposed, in which the max-pooling operation in the original U-net model was replaced by a convolution operation to retain more feature information. Firstly, regions of 128×128 were extracted from all slices of the patients as data samples. Secondly, the patient samples were divided into a training sample set and a testing sample set, and data augmentation was performed on the training samples. Finally, all the training samples were used to train the model. Compared with the Fully Convolutional Network (FCN) model and the max-pooling based U-net model, the DSC and CR coefficients of the proposed method achieve the best results, while the PM coefficient is 2.55 percentage points lower than the maximum value of the two comparison models, and the Average Symmetric Surface Distance is slightly higher than the minimum value of the two comparison models, by 0.004. The experimental results show that the proposed model can achieve good segmentation and diagnosis results.
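The key modification, replacing max-pooling with a strided convolution so that downsampling is learned rather than fixed, can be sketched in Keras as below; block widths and depth are illustrative, not the authors' exact configuration.

```python
# Tiny U-net-style model on 128x128 patches where the encoder downsamples
# with a strided 3x3 convolution instead of MaxPooling2D.
import tensorflow as tf
from tensorflow.keras import layers

def down_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    skip = x
    # strided convolution replaces the usual MaxPooling2D(2)
    x = layers.Conv2D(filters, 3, strides=2, padding="same", activation="relu")(x)
    return x, skip

inp = tf.keras.Input(shape=(128, 128, 1))          # 128x128 patches, as in the abstract
x, s1 = down_block(inp, 32)
x, s2 = down_block(x, 64)
x = layers.Conv2D(128, 3, padding="same", activation="relu")(x)
x = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(x)
x = layers.Concatenate()([x, s2])
x = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(x)
x = layers.Concatenate()([x, s1])
out = layers.Conv2D(1, 1, activation="sigmoid")(x)
model = tf.keras.Model(inp, out)
```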
Collapse
Affiliation(s)
- Qianjin Li
- The Affiliated Hospital of Weifang Medical University, Shandong, 261031, Weifang, China
| | - Shanshan Fan
- The Affiliated Hospital of Weifang Medical University, Shandong, 261031, Weifang, China
| | - Changsheng Chen
- The Affiliated Hospital of Weifang Medical University, Shandong, 261031, Weifang, China.
| |
Collapse
|
29
|
Automated detection of diabetic subject using pre-trained 2D-CNN models with frequency spectrum images extracted from heart rate signals. Comput Biol Med 2019; 113:103387. [PMID: 31421276 DOI: 10.1016/j.compbiomed.2019.103387] [Citation(s) in RCA: 56] [Impact Index Per Article: 11.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2019] [Revised: 08/08/2019] [Accepted: 08/08/2019] [Indexed: 11/24/2022]
Abstract
In this study, a deep-transfer learning approach is proposed for the automated diagnosis of diabetes mellitus (DM), using heart rate (HR) signals obtained from electrocardiogram (ECG) data. Recent progress in deep learning has contributed significantly to improvement in the quality of healthcare. In order for deep learning models to perform well, large datasets are required for training. However, a difficulty in the biomedical field is the lack of clinical data with expert annotation. A recent, commonly implemented technique to train deep learning models using small datasets is to transfer the weights developed on a large dataset to the current model. This deep transfer-learning strategy is generally employed for two-dimensional signals. Herein, the weights of models pre-trained on large two-dimensional image datasets were applied to one-dimensional HR signals. The one-dimensional HR signals were converted into frequency spectrum images, which were then applied to well-known pre-trained models, specifically AlexNet, VggNet, ResNet, and DenseNet. The DenseNet pre-trained model yielded the highest classification average accuracy of 97.62% and sensitivity of 100% in detecting DM subjects from HR signal recordings. In the future, we intend to further test this model using additional data along with cloud-based storage to diagnose DM via heart signal analysis.
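The signal-to-image step described here, turning a one-dimensional HR series into a frequency-spectrum image that a pre-trained two-dimensional CNN can consume, might be sketched as follows; the sampling rate, STFT parameters and file name are assumptions.

```python
# 1-D heart-rate series -> log spectrogram image -> deep features from a
# pre-trained DenseNet121 (ImageNet weights).
import numpy as np
from scipy.signal import spectrogram
import tensorflow as tf

hr = np.loadtxt("hr_signal.txt")                   # hypothetical 1-D HR recording
f, t, Sxx = spectrogram(hr, fs=4.0, nperseg=64, noverlap=32)

img = np.log1p(Sxx)                                # compress dynamic range
img = (img - img.min()) / (img.max() - img.min())  # scale to [0, 1]
img = tf.image.resize(img[..., None], (224, 224))  # H x W x 1
img = tf.repeat(img, 3, axis=-1)                   # replicate to 3 channels for ImageNet weights

base = tf.keras.applications.DenseNet121(weights="imagenet", include_top=False, pooling="avg")
features = base(tf.expand_dims(img, 0))            # 1 x 1024 deep feature vector
```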
Collapse
|
30
|
Randive SN, Senapati RK, Rahulkar AD. A review on computer-aided recent developments for automatic detection of diabetic retinopathy. J Med Eng Technol 2019; 43:87-99. [PMID: 31198073 DOI: 10.1080/03091902.2019.1576790] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/19/2023]
Abstract
Diabetic retinopathy is a serious microvascular disorder that can result in vision loss and blindness. It damages the retinal blood vessels and degrades the light-sensitive inner layer of the eye. Manual inspection of retinal fundus images for diabetic retinopathy, to detect morphological abnormalities such as microaneurysms (MAs), exudates (EXs), haemorrhages (HMs), and intraretinal microvascular abnormalities (IRMA), is a very difficult and time-consuming process. To avoid this, regular follow-up screening and early automatic diabetic retinopathy detection are necessary. This paper discusses various methods for automatic retinopathy detection and for classification into different grades based on severity levels. In addition, retinal blood vessel detection techniques are discussed for the detection and diagnosis of proliferative diabetic retinopathy. Furthermore, the paper elaborates on a systematic review, carried out by the authors, of various publicly available databases collected from different medical sources. In the survey, a meta-analysis of several methods for diabetic feature extraction, segmentation and various types of classifiers is used to evaluate system performance metrics for the diagnosis of DR. This survey will be helpful for technical professionals and researchers who want to focus on making DR diagnosis systems more powerful in real-life use.
Collapse
Affiliation(s)
- Santosh Nagnath Randive
- Department of Electronics &amp; Communication Engineering, Koneru Lakshmaiah Education Foundation, Green Fields, Vaddeswaram, Guntur, Andhra Pradesh, India
| | - Ranjan K Senapati
- Department of Electronics &amp; Communication Engineering, Koneru Lakshmaiah Education Foundation, Green Fields, Vaddeswaram, Guntur, Andhra Pradesh, India
| | - Amol D Rahulkar
- Department of Electrical and Electronics Engineering, National Institute of Technology, Goa, India
| |
Collapse
|
31
|
Karkuzhali S, Manimegalai D. Distinguising Proof of Diabetic Retinopathy Detection by Hybrid Approaches in Two Dimensional Retinal Fundus Images. J Med Syst 2019; 43:173. [PMID: 31069550 DOI: 10.1007/s10916-019-1313-6] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2019] [Accepted: 04/25/2019] [Indexed: 12/29/2022]
Abstract
Diabetes is characterized by constantly high levels of blood glucose; the human body needs to maintain insulin within a very narrow range. Patients who have been affected by diabetes for a long time can develop an eye disease called Diabetic Retinopathy (DR). The optic disc, a retinal landmark, is predicted and masked to decrease false positives in exudate detection. Abnormalities such as exudates, microaneurysms and haemorrhages are segmented to classify the various stages of DR. The proposed approach is employed to separate the retinal landmarks and retinal lesions for the classification of the stages of DR. Segmentation algorithms such as Gabor double-sided hysteresis thresholding, maximum intensity variation, inverse surface adaptive thresholding, a multi-agent approach and toboggan segmentation are used to detect and segment BVs, ODs, EXs, MAs and HAs. The feature vector formation and machine learning algorithm used to classify the various stages of DR are evaluated using images available in various retinal databases, and their performance measures are presented in this paper.
Collapse
Affiliation(s)
- Karkuzhali S
- Department of Computer Science and Engineering, Kalasalingam Academy of Research and Education ( Deemed to be University), Srivilliputtur, Tamilnadu, India.
| | - Manimegalai D
- Department of Information Technology, National Engineering College, Kovilpatti, Tamilnadu, India
| |
Collapse
|
32
|
Kriti, Virmani J, Agarwal R. Effect of despeckle filtering on classification of breast tumors using ultrasound images. Biocybern Biomed Eng 2019. [DOI: 10.1016/j.bbe.2019.02.004] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
|
33
|
Balasubramanian K, Ananthamoorthy NP. Analysis of hybrid statistical textural and intensity features to discriminate retinal abnormalities through classifiers. Proc Inst Mech Eng H 2019; 233:506-514. [PMID: 30894077 DOI: 10.1177/0954411919835856] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Retinal image analysis relies on the effectiveness of computational techniques to discriminate various abnormalities in the eye, such as diabetic retinopathy, macular degeneration and glaucoma. In the case of glaucoma, the onset of the disease often goes unnoticed and its effect is felt only at a later stage. Such degenerative diseases warrant early diagnosis and treatment. In this work, the performance of statistical and textural features in retinal vessel segmentation is evaluated through classifiers such as the extreme learning machine, support vector machine and random forest. The fundus images are initially preprocessed for noise reduction, image enhancement and contrast adjustment. Two-dimensional Gabor wavelets and partition clustering are employed on the preprocessed image to extract the blood vessels. Finally, the combined hybrid features, comprising statistical textural, intensity and vessel morphological features extracted from the image, are used to detect glaucomatous abnormality through the classifiers. A crisp decision can be taken depending on the classification rates of the classifiers. The public databases RIM-ONE and high-resolution fundus, along with local datasets, are used for evaluation with threefold cross-validation. The evaluation is based on the performance metrics of accuracy, sensitivity and specificity. The hybrid features obtained an overall accuracy of 97% when tested using the classifiers. The support vector machine classifier achieves an accuracy of 93.33% on high-resolution fundus, 93.8% on the RIM-ONE dataset and 95.3% on the local dataset. For the extreme learning machine classifier, the accuracy is 95.1% on high-resolution fundus, 97.8% on RIM-ONE and 96.8% on the local dataset. An accuracy of 94.5% on high-resolution fundus, 92.5% on RIM-ONE and 94.2% on the local dataset is obtained for the random forest classifier. Validation of the experimental results indicates that the hybrid features can be deployed in supervised classifiers to discriminate retinal abnormalities effectively.
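A hedged sketch of the vessel-extraction idea (a bank of two-dimensional Gabor filters followed by partition clustering of the responses) is shown below; the kernel parameters and two-cluster assignment are illustrative assumptions, not the authors' settings.

```python
# Gabor filter bank at several orientations, then k-means ("partition
# clustering") on the per-pixel maximum response to separate vessel pixels.
import cv2
import numpy as np
from sklearn.cluster import KMeans

gray = cv2.imread("fundus.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32)  # hypothetical input

responses = []
for theta in np.arange(0, np.pi, np.pi / 8):       # 8 orientations
    kern = cv2.getGaborKernel(ksize=(15, 15), sigma=3.0, theta=theta,
                              lambd=8.0, gamma=0.5, psi=0)
    responses.append(cv2.filter2D(gray, cv2.CV_32F, kern))
max_response = np.max(np.stack(responses), axis=0)

# Two-cluster partition of pixel responses: vessel vs background
pixels = max_response.reshape(-1, 1)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pixels)
vessel_label = kmeans.cluster_centers_.argmax()    # vessels give the stronger response
vessel_mask = (kmeans.labels_ == vessel_label).reshape(gray.shape)
```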
Collapse
|
34
|
Yasar A, Saritas I, Korkmaz H. Computer-Aided Diagnosis System for Detection of Stomach Cancer with Image Processing Techniques. J Med Syst 2019; 43:99. [DOI: 10.1007/s10916-019-1203-y] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2018] [Accepted: 02/11/2019] [Indexed: 11/30/2022]
|
35
|
Vicnesh J, Hagiwara Y. Accurate detection of seizure using nonlinear parameters extracted from EEG signals. J Mech Med Biol 2019. [DOI: 10.1142/s0219519419400049] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
Electroencephalography (EEG) is the graphical recording of electrical activity along the scalp. The EEG signal monitors brain activity noninvasively with millisecond temporal resolution and provides valuable insight into the brain's state. It is also sensitive in detecting spikes in epilepsy. Computer-aided diagnosis (CAD) tools allow epilepsy to be diagnosed while avoiding invasive methods. This paper presents a novel CAD system for epilepsy using linear features together with nonlinear features such as Hjorth's mobility, complexity and activity, and the Kolmogorov complexity. The proposed method uses MATLAB software to extract the nonlinear features from the EEG data. The optimal features are selected for classification using statistical analysis, the ANOVA (analysis of variance) test. Once selected, they are fed into a decision tree (DT) for the classification of the different epileptic classes. The proposed method affirms that four nonlinear features, Kolmogorov complexity, singular value decomposition, mobility and permutation entropy, are sufficient to provide the highest accuracy of 93%, sensitivity of 97%, specificity of 88% and positive predictive value (PPV) of 94% with the DT classifier. The mean value is highest in the ictal stage for the Kolmogorov complexity, proving it to have the best variation. It also yields the highest test statistic, 300.439, portraying it as the parameter most favourable for the clinical diagnosis of epilepsy when used together with the DT classifier, for a duration of 23.6 s of EEG data.
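The Hjorth descriptors named above have closed-form definitions, so a short NumPy sketch suffices: activity is the signal variance, mobility the ratio of the derivative's standard deviation to the signal's, and complexity the mobility of the derivative divided by the mobility of the signal. The toy segment below is only an illustration.

```python
# Hjorth activity, mobility and complexity of a 1-D signal.
import numpy as np

def hjorth_parameters(x):
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)
    ddx = np.diff(dx)
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / np.var(x))
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

# Example with a toy EEG-like segment
rng = np.random.default_rng(0)
segment = np.sin(np.linspace(0, 20 * np.pi, 4096)) + 0.1 * rng.standard_normal(4096)
print(hjorth_parameters(segment))
```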
Collapse
Affiliation(s)
- Jahmunah Vicnesh
- Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore
| | - Yuki Hagiwara
- Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore
| |
Collapse
|
36
|
Muralidhar Bairy G, Hagiwara Y. Empirical mode decomposition-based processing for automated detection of epilepsy. J Mech Med Biol 2019. [DOI: 10.1142/s0219519419400037] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
Epilepsy is a chronic illness of the brain characterized by recurring seizure attacks. Electroencephalogram (EEG) can record the electrical activity of the brain and is extensively used to analyze and diagnose epileptic seizures. However, EEG signals are highly non-linear and chaotic and are difficult to analyze due to their small magnitude. Hence, empirical mode decomposition (EMD), a non-linear technique, has been widely adopted to capture the subtle changes present in EEG signals. It is therefore advantageous to develop an automated computer-aided diagnostic (CAD) system to detect different brain activities from EEG signals using machine learning approaches. In this paper, we focus on previous works that have used the EMD technique in the automated detection of normal or epileptic EEG signals.
Collapse
Affiliation(s)
- G. Muralidhar Bairy
- Faculty Department of Biomedical Engineering, Manipal Institute of Technology, Manipal 576104, India
| | - Yuki Hagiwara
- Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, 599489 Singapore
| |
Collapse
|
37
|
Ahammed Muneer KV, Paul Joseph K. Automation of MR brain image classification for malignancy detection. J Mech Med Biol 2019. [DOI: 10.1142/s0219519419400025] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/16/2023]
Abstract
Magnetic resonance imaging (MRI) plays an integral role among the advanced techniques for detecting a brain tumor. Early detection of brain tumors with a suitable automation algorithm assists oncologists in making diagnostic decisions. This paper presents an automatic classification of MR brain images into normal and malignant conditions. Feature extraction is done with the gray-level co-occurrence matrix, and we propose a feature reduction technique based on a statistical test, preceded by principal component analysis (PCA). The main focus of the work is to establish the statistical significance of the features obtained after PCA, thereby selecting significant feature values for subsequent classification. For that, a t-test is performed, which yielded a p-value of 0.05. Finally, a comparative study using k-nearest neighbor (kNN), support vector machine and artificial neural network (ANN)-based supervised classifiers is performed. In this work, we achieve reasonably good sensitivity, specificity and accuracy for all the classifiers. The ANN classifier gives the best performance, with sensitivity of 97.33%, specificity of 97.42% and accuracy of 98.66% on the Whole Brain Atlas database. The experimental results obtained are comparable to other recent state-of-the-art methods.
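A sketch of the described feature pipeline (GLCM texture features, PCA, then a t-test to retain statistically significant components) follows; it assumes a recent scikit-image (graycomatrix/graycoprops spelling) and uses toy data in place of MR slices.

```python
# GLCM texture features -> PCA -> keep components whose class means differ
# significantly (p < 0.05) according to an independent-samples t-test.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.decomposition import PCA
from scipy.stats import ttest_ind

def glcm_features(img_u8):
    glcm = graycomatrix(img_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "correlation", "energy", "homogeneity"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(1)
images = rng.integers(0, 256, size=(40, 64, 64), dtype=np.uint8)  # toy "MR slices"
labels = np.array([0] * 20 + [1] * 20)                            # normal vs malignant (toy)

X = np.vstack([glcm_features(im) for im in images])
Z = PCA(n_components=5).fit_transform(X)

keep = [i for i in range(Z.shape[1])
        if ttest_ind(Z[labels == 0, i], Z[labels == 1, i]).pvalue < 0.05]
Z_selected = Z[:, keep]
```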
Collapse
|
38
|
Maheshwari S, Kanhangad V, Pachori RB, Bhandary SV, Acharya UR. Automated glaucoma diagnosis using bit-plane slicing and local binary pattern techniques. Comput Biol Med 2019; 105:72-80. [DOI: 10.1016/j.compbiomed.2018.11.028] [Citation(s) in RCA: 29] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2018] [Revised: 11/30/2018] [Accepted: 11/30/2018] [Indexed: 12/18/2022]
|
39
|
Porwal P, Pachade S, Kokare M, Giancardo L, Mériaudeau F. Retinal image analysis for disease screening through local tetra patterns. Comput Biol Med 2018; 102:200-210. [DOI: 10.1016/j.compbiomed.2018.09.028] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2018] [Revised: 09/11/2018] [Accepted: 09/26/2018] [Indexed: 11/27/2022]
|
40
|
Efficient multi-kernel multi-instance learning using weakly supervised and imbalanced data for diabetic retinopathy diagnosis. Comput Med Imaging Graph 2018; 69:112-124. [DOI: 10.1016/j.compmedimag.2018.08.008] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/01/2018] [Revised: 07/09/2018] [Accepted: 08/22/2018] [Indexed: 11/19/2022]
|
41
|
Atlas LLG, Parasuraman K. Effective Approach to Classify and Segment Retinal Hemorrhage Using ANFIS and Particle Swarm Optimization. JOURNAL OF INTELLIGENT SYSTEMS 2018. [DOI: 10.1515/jisys-2016-0354] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022] Open
Abstract
The main objective of this study is to detect and segment haemorrhages in retinal fundus images. Retinal haemorrhage is abnormal bleeding of the blood vessels in the retina, the membrane at the back of the eye. In the proposed approach, the image collections are considered and a filtering technique, specifically the adaptive median filter, is used to denoise the images. After filtering, gray-level co-occurrence matrix (GLCM), grey-level run-length matrix (GLRLM) and scale-invariant feature transform (SIFT) features are extracted. Classification is then performed with an adaptive neuro-fuzzy inference system (ANFIS), which separates affected from non-affected images. The affected haemorrhage images are passed to the segmentation procedure, in which threshold optimisation is evaluated with several optimisation methods; of these, particle swarm optimisation performs best. The segmented images are then produced, and the sensitivity is high relative to the accuracy and specificity, as evaluated on the MATLAB platform.
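A small, self-contained particle swarm optimisation sketch for the threshold-selection step is given below; particles are candidate grey-level thresholds and the fitness is an Otsu-style between-class variance, with swarm size, iterations and coefficients chosen as illustrative assumptions rather than the paper's values.

```python
# PSO over a single grey-level threshold, maximising between-class variance.
import numpy as np

def between_class_variance(values, t):
    fg, bg = values[values > t], values[values <= t]
    if fg.size == 0 or bg.size == 0:
        return 0.0
    w1, w2 = fg.size / values.size, bg.size / values.size
    return w1 * w2 * (fg.mean() - bg.mean()) ** 2

def pso_threshold(values, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    pos = rng.uniform(0, 255, n_particles)          # candidate thresholds
    vel = np.zeros(n_particles)
    pbest = pos.copy()
    pbest_fit = np.array([between_class_variance(values, t) for t in pos])
    gbest = pbest[pbest_fit.argmax()]
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0, 255)
        fit = np.array([between_class_variance(values, t) for t in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[pbest_fit.argmax()]
    return gbest

# Toy bimodal "pixel intensities" standing in for a lesion/background image
toy = np.concatenate([np.random.normal(60, 10, 5000), np.random.normal(180, 15, 5000)])
print(pso_threshold(toy))
```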
Collapse
|
42
|
Thin Cap Fibroatheroma Detection in Virtual Histology Images Using Geometric and Texture Features. APPLIED SCIENCES-BASEL 2018. [DOI: 10.3390/app8091632] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/18/2023]
Abstract
Atherosclerotic plaque rupture is the most common mechanism responsible for a majority of sudden coronary deaths. The precursor lesion of plaque rupture is thought to be a thin cap fibroatheroma (TCFA), or “vulnerable plaque”. Virtual Histology-Intravascular Ultrasound (VH-IVUS) images are clinically available for visualising colour-coded coronary artery tissue. However, it has limitations in terms of providing clinically relevant information for identifying vulnerable plaque. The aim of this research is to improve the identification of TCFA using VH-IVUS images. To more accurately segment VH-IVUS images, a semi-supervised model is developed by means of hybrid K-means with Particle Swarm Optimisation (PSO) and a minimum Euclidean distance algorithm (KMPSO-mED). Another novelty of the proposed method is fusion of different geometric and informative texture features to capture the varying heterogeneity of plaque components and compute a discriminative index for TCFA plaque, while the existing research on TCFA detection has only focused on the geometric features. Three commonly used statistical texture features are extracted from VH-IVUS images: Local Binary Patterns (LBP), Grey Level Co-occurrence Matrix (GLCM), and Modified Run Length (MRL). Geometric and texture features are concatenated in order to generate complex descriptors. Finally, Back Propagation Neural Network (BPNN), kNN (K-Nearest Neighbour), and Support Vector Machine (SVM) classifiers are applied to select the best classifier for classifying plaque into TCFA and Non-TCFA. The present study proposes a fast and accurate computer-aided method for plaque type classification. The proposed method is applied to 588 VH-IVUS images obtained from 10 patients. The results prove the superiority of the proposed method, with accuracy rates of 98.61% for TCFA plaque.
Collapse
|
43
|
Bhargavi VR, Rajesh V. Computer Aided Bright Lesion Classification in Fundus Image Based on Feature Extraction. INT J PATTERN RECOGN 2018. [DOI: 10.1142/s0218001418500349] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
In this paper, a hybrid approach to fundus image classification for diabetic retinopathy (DR) lesions is proposed. Laplacian eigenmaps (LE), a nonlinear dimensionality reduction (NDR) technique, is applied to a high-dimensional scale-invariant feature transform (SIFT) representation of the fundus image for lesion classification. The applied NDR technique gives a low-dimensional intrinsic feature vector for lesion classification in fundus images. Publicly available databases are used for demonstrating the implemented strategy. The performance of the applied technique is evaluated based on sensitivity, specificity and accuracy using a support vector classifier. Compared with other feature vectors, the implemented LE-based feature vector yielded better classification performance. The accuracy obtained is 96.6% for SIFT-LE-SVM.
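The dimensionality-reduction idea can be sketched with scikit-learn's SpectralEmbedding, which implements Laplacian eigenmaps; the toy descriptors below stand in for the SIFT representation, and the neighbourhood size, embedding dimension and SVM settings are assumptions.

```python
# Laplacian-eigenmaps embedding of high-dimensional descriptors, followed by
# an SVM on the low-dimensional features.
import numpy as np
from sklearn.manifold import SpectralEmbedding
from sklearn.svm import SVC

rng = np.random.default_rng(2)
sift_like = rng.random((200, 128))                 # 200 samples x 128-D descriptors (toy)
labels = rng.integers(0, 2, 200)                   # lesion vs non-lesion (toy)

embedding = SpectralEmbedding(n_components=10, n_neighbors=15)
low_dim = embedding.fit_transform(sift_like)       # nonlinear LE features

clf = SVC(kernel="rbf").fit(low_dim, labels)
print(clf.score(low_dim, labels))
```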
Collapse
Affiliation(s)
- V. Ratna Bhargavi
- Department of Electronics and Communication Engineering, Koneru Lakshmaiah Education Foundation, K L E F (K L Deemed to be University), Vaddeswaram, Guntur-522502, Andhra Pradesh, India
| | - V. Rajesh
- Department of Electronics and Communication Engineering, Koneru Lakshmaiah Education Foundation, K L E F (K L Deemed to be University), Vaddeswaram, Guntur-522502, Andhra Pradesh, India
| |
Collapse
|
44
|
Randive SN, Rahulkar AD, Senapati RK. LVP extraction and triplet-based segmentation for diabetic retinopathy recognition. EVOLUTIONARY INTELLIGENCE 2018. [DOI: 10.1007/s12065-018-0158-0] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/28/2023]
|
45
|
|
46
|
Mookiah MRK, Baum T, Mei K, Kopp FK, Kaissis G, Foehr P, Noel PB, Kirschke JS, Subburaj K. Effect of radiation dose reduction on texture measures of trabecular bone microstructure: an in vitro study. J Bone Miner Metab 2018; 36:323-335. [PMID: 28389933 DOI: 10.1007/s00774-017-0836-5] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/04/2016] [Accepted: 03/19/2017] [Indexed: 12/25/2022]
Abstract
Osteoporosis is characterized by bone loss and degradation of bone microstructure, leading to fractures particularly in elderly people. Osteoporotic bone degeneration and fracture risk can be assessed by bone mineral density and the trabecular bone score from 2D projection dual-energy X-ray absorptiometry images. Multidetector computed tomography based quantification of trabecular bone microstructure has shown significant improvement in the prediction of fracture risk beyond bone mineral density and trabecular bone score; however, high radiation exposure limits its use in routine clinical in vivo examinations. Hence, this study investigated reduction of radiation dose and its effects on the image quality of thoracic midvertebral specimens. Twenty-four texture features were extracted to quantify image quality from multidetector computed tomography images of 11 thoracic midvertebral specimens, by means of statistical moments, the gray-level co-occurrence matrix, and the gray-level run-length matrix, and were analyzed with an independent-samples t-test to observe differences in image texture with respect to radiation doses of 80, 150, 220, and 500 mAs. The results showed that three features, namely global variance, energy, and run percentage, were not statistically significant (p > 0.05) for the low doses with respect to 500 mAs. Hence, these three dose-independent features can be used for disease monitoring with a low-dose imaging protocol.
Collapse
Affiliation(s)
- Muthu Rama Krishnan Mookiah
- Pillar of Engineering Product Development, Singapore University of Technology and Design, Singapore, Singapore
| | - Thomas Baum
- Department of Neuroradiology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
| | - Kai Mei
- Department of Radiology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
| | - Felix K Kopp
- Department of Radiology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
| | - Georg Kaissis
- Department of Radiology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
| | - Peter Foehr
- Department of Orthopaedics and Sports Orthopaedics, Biomechanical Laboratory, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
| | - Peter B Noel
- Department of Radiology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
| | - Jan S Kirschke
- Department of Neuroradiology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
| | - Karupppasamy Subburaj
- Pillar of Engineering Product Development, Singapore University of Technology and Design, Singapore, Singapore.
| |
Collapse
|
47
|
|
48
|
Extracting tumor in MR brain and breast image with Kapur’s entropy based Cuckoo Search Optimization and morphological reconstruction filters. Biocybern Biomed Eng 2018. [DOI: 10.1016/j.bbe.2018.07.005] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
|
49
|
Koh JEW, Ng EYK, Bhandary SV, Hagiwara Y, Laude A, Acharya UR. Automated retinal health diagnosis using pyramid histogram of visual words and Fisher vector techniques. Comput Biol Med 2017; 92:204-209. [PMID: 29227822 DOI: 10.1016/j.compbiomed.2017.11.019] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2017] [Revised: 11/27/2017] [Accepted: 11/30/2017] [Indexed: 12/18/2022]
Abstract
Untreated age-related macular degeneration (AMD), diabetic retinopathy (DR), and glaucoma may lead to irreversible vision loss. Hence, it is essential to have regular eye screening to detect these eye diseases at an early stage and to offer treatment where appropriate. One of the simplest, non-invasive and cost-effective techniques to screen the eyes is fundus photo imaging. However, the manual evaluation of fundus images is tedious and challenging, and the diagnosis made by ophthalmologists may be subjective. Therefore, an objective and novel algorithm using the pyramid histogram of visual words (PHOW) and Fisher vectors is proposed for the classification of fundus images into their respective eye conditions (normal, AMD, DR, and glaucoma). The proposed algorithm extracts features which are represented as words. These features are encoded into a Fisher vector and classified using a random forest classifier. The proposed algorithm is validated with both blindfold and ten-fold cross-validation techniques. An accuracy of 90.06% is achieved with the blindfold method, and the highest accuracy of 96.79% is obtained with ten-fold cross-validation. The high classification performance of our system shows the potential of deploying it in polyclinics to assist healthcare professionals in their initial diagnosis of the eye. Our developed system could reduce the workload of ophthalmologists significantly.
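A full PHOW plus Fisher-vector encoder is beyond a short example, so the sketch below uses a simplified bag-of-visual-words stand-in (k-means codebook plus occurrence histograms) feeding a random forest; it mirrors the overall structure of the pipeline rather than the exact method, and all data are toy placeholders.

```python
# Simplified bag-of-visual-words + random forest, as a structural stand-in
# for the PHOW / Fisher-vector pipeline described above.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
n_images, patches_per_image, k = 60, 50, 32
descriptors = rng.random((n_images, patches_per_image, 64))   # toy local descriptors
labels = rng.integers(0, 4, n_images)                         # normal / AMD / DR / glaucoma (toy)

codebook = KMeans(n_clusters=k, n_init=10, random_state=0).fit(descriptors.reshape(-1, 64))
words = codebook.predict(descriptors.reshape(-1, 64)).reshape(n_images, patches_per_image)
hists = np.stack([np.bincount(w, minlength=k) / patches_per_image for w in words])

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(hists, labels)
print(clf.score(hists, labels))
```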
Collapse
Affiliation(s)
- Joel E W Koh
- Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore; School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore.
| | - Eddie Y K Ng
- School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore
| | | | - Yuki Hagiwara
- Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore
| | - Augustinus Laude
- National Healthcare Group Eye Institute, Tan Tock Seng Hospital, Singapore; Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
| | - U Rajendra Acharya
- Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore; Department of Biomedical Engineering, School of Science and Technology, Singapore University of Social Sciences, Singapore; Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, Malaysia
| |
Collapse
|
50
|
Mane VM, Jadhav DV. Holoentropy enabled-decision tree for automatic classification of diabetic retinopathy using retinal fundus images. ACTA ACUST UNITED AC 2017; 62:321-332. [PMID: 27514073 DOI: 10.1515/bmt-2016-0112] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2016] [Accepted: 07/15/2016] [Indexed: 11/15/2022]
Abstract
Diabetic retinopathy (DR) is the most common diabetic eye disease. Doctors use various test methods to detect DR, but the availability of test methods and the requirement for domain experts pose a challenge for the automatic detection of DR. To address this, a variety of algorithms has been developed in the literature. In this paper, we propose a system consisting of a novel sparking process and a holoentropy-based decision tree for the automatic classification of DR images to further improve effectiveness. The sparking process algorithm is developed for automatic segmentation of blood vessels through the estimation of an optimal threshold. The holoentropy-enabled decision tree is newly developed for automatic classification of retinal images into normal or abnormal using hybrid features which preserve disease-level patterns beyond the signal level of the features. The effectiveness of the proposed system is analyzed using the standard fundus image databases DIARETDB0 and DIARETDB1 in terms of sensitivity, specificity and accuracy. The proposed system yields sensitivity, specificity and accuracy values of 96.72%, 97.01% and 96.45%, respectively. The experimental results reveal that the proposed technique outperforms the existing algorithms.
Collapse
|