1
Sulaiman ZB. RP squeeze U-SegNet model for lesion segmentation and optimization enabled ShuffleNet based multi-level severity diabetic retinopathy classification. Network (Bristol, England) 2024:1-34. [PMID: 39319551] [DOI: 10.1080/0954898x.2024.2395375]
Abstract
In Diabetic Retinopathy (DR), the retina is harmed by high blood sugar levels damaging its small blood vessels. Manual screening is time-consuming, a limitation that automated techniques can overcome. Hence, this paper proposes a new method for classifying the multi-level severity of DR. Initially, the input fundus image is pre-processed by Non-Local Means Denoising (NLMD). Then, lesion segmentation is carried out by the Recurrent Prototypical-squeeze U-SegNet (RP-squeeze U-SegNet). Next, feature extraction is performed to mine image-level features. DR is first categorized as normal or abnormal by ShuffleNet, which is tuned by Fractional War Royale Optimization (FrWRO); if DR is detected, severity classification is then performed. Furthermore, the FrWRO-SqueezeNet obtained the maximum performance, with a sensitivity of 97%, accuracy of 93.8%, specificity of 95.1%, precision of 91.8%, and F-measure of 94.3%. The devised scheme accurately visualizes abnormal regions in the fundus images and identifies the severity levels of DR effectively, which helps avoid progression to proliferative disease and vision loss.
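The non-local means denoising (NLMD) step named in this abstract is a standard preprocessing operation. As a rough illustration only (not the authors' implementation), OpenCV's NLM filter can be applied to a fundus photograph as below; the file name and filter-strength values are illustrative assumptions.

```python
import cv2

# Path is a placeholder; any RGB fundus photograph will do.
img = cv2.imread("fundus.png")

# Non-local means denoising for colour images. h / hColor set the filter
# strength; the 7x7 template and 21x21 search windows are OpenCV defaults.
denoised = cv2.fastNlMeansDenoisingColored(
    img, None, h=10, hColor=10, templateWindowSize=7, searchWindowSize=21)

cv2.imwrite("fundus_denoised.png", denoised)
```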
Affiliation(s)
- Zulaikha Beevi Sulaiman
- Department of Computer Science and Engineering, Karpagam College of Engineering, Coimbatore, India
2
Husvogt L, Yaghy A, Camacho A, Lam K, Schottenhamml J, Ploner SB, Fujimoto JG, Waheed NK, Maier A. Ensembling U-Nets for microaneurysm segmentation in optical coherence tomography angiography in patients with diabetic retinopathy. Sci Rep 2024; 14:21520. [PMID: 39277636] [PMCID: PMC11401926] [DOI: 10.1038/s41598-024-72375-2]
Abstract
Diabetic retinopathy is one of the leading causes of blindness around the world. This makes early diagnosis and treatment important in preventing vision loss in a large number of patients. Microaneurysms are the key hallmark of the early stage of the disease, non-proliferative diabetic retinopathy, and can be detected quickly and non-invasively using OCT angiography. Screening tools for non-proliferative diabetic retinopathy based on OCT angiography thus have the potential to improve patient outcomes. We compared different configurations of ensembled U-Nets to automatically segment microaneurysms from OCT angiography fundus projections. For this purpose, we created a new database for training and evaluating the U-Nets, labeled by two expert graders in two stages of grading. We present the first ensembled U-Net neural networks for the detection of microaneurysms from OCT angiography en face images of the superficial and deep capillary plexuses in patients with non-proliferative diabetic retinopathy, trained on a database labeled by two experts with repeats.
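As background to the ensembling strategy described above, combining several trained U-Nets typically reduces to averaging their per-pixel probability maps before thresholding. The sketch below illustrates that idea only; the `models` interface and the 0.5 threshold are placeholders, not the authors' configuration.

```python
import numpy as np

def ensemble_segment(models, image, threshold=0.5):
    """Soft-ensemble several U-Nets: average per-pixel microaneurysm
    probabilities, then threshold to a binary mask.

    `models` is any sequence of objects with predict(image) -> HxW map
    of probabilities in [0, 1]; the 0.5 threshold is an assumption.
    """
    probs = np.stack([m.predict(image) for m in models], axis=0)
    mean_prob = probs.mean(axis=0)                     # average the ensemble
    return (mean_prob >= threshold).astype(np.uint8)   # final binary mask
```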
Affiliation(s)
- Lennart Husvogt
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058 Erlangen, Germany
- Antonio Yaghy
- New England Eye Center, Tufts School of Medicine, Boston, MA, 02111, USA
- Alex Camacho
- New England Eye Center, Tufts School of Medicine, Boston, MA, 02111, USA
- Kenneth Lam
- New England Eye Center, Tufts School of Medicine, Boston, MA, 02111, USA
- Julia Schottenhamml
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058 Erlangen, Germany
- Stefan B Ploner
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058 Erlangen, Germany
- James G Fujimoto
- Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Nadia K Waheed
- New England Eye Center, Tufts School of Medicine, Boston, MA, 02111, USA
- Andreas Maier
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058 Erlangen, Germany
3
Farahat Z, Zrira N, Souissi N, Bennani Y, Bencherif S, Benamar S, Belmekki M, Ngote MN, Megdiche K. Diabetic retinopathy screening through artificial intelligence algorithms: A systematic review. Surv Ophthalmol 2024; 69:707-721. [PMID: 38885761] [DOI: 10.1016/j.survophthal.2024.05.008]
Abstract
Diabetic retinopathy (DR) poses a significant challenge in diabetes management, as its progression is often asymptomatic until advanced stages. This underscores the urgent need for cost-effective and reliable screening methods, and the integration of artificial intelligence (AI) tools presents a promising avenue to address this need. We provide an overview of the current state-of-the-art results and techniques in DR screening using AI, while also identifying gaps in research for future exploration. By synthesizing the existing literature and pinpointing areas requiring further investigation, this paper seeks to guide the direction of future research in automatic diabetic retinopathy screening. The number of articles describing deep learning (DL) methods for the automatic screening of diabetic retinopathy has risen continuously, especially since 2021. Researchers utilized various databases, with a primary focus on the IDRiD dataset. This dataset consists of color fundus images captured at an ophthalmological clinic in India and comprises 516 images depicting various stages of DR and diabetic macular edema. Each of the selected papers concentrates on particular DR signs; however, a significant portion focused primarily on detecting exudates, which remains insufficient to assess the overall presence of the disease. Various AI methods have been employed to identify DR signs: among the selected papers, 4.7% utilized detection methods, 46.5% employed classification techniques, 41.9% relied on segmentation, and 7% opted for a combination of classification and segmentation. Metrics reported by the 80% of articles employing preprocessing techniques demonstrated the significant benefit of this step in enhancing the quality of results. Across the DL techniques, researchers mostly used YOLO for detection, ViT for classification, and U-Net for segmentation. Another perspective on the evolving landscape of AI models for DR screening lies in the increasing adoption of convolutional neural networks for classification and U-Net architectures for segmentation; however, there is a growing realization within the research community that these techniques, while powerful individually, can be even more effective when integrated. This integration holds promise not only for diagnosing DR, but also for accurately classifying its different stages, thereby enabling more tailored treatment strategies. Despite this potential, the development of AI models for DR screening is fraught with challenges. Chief among these is the difficulty of obtaining the high-quality, labeled data necessary for training models to perform effectively; this scarcity of data poses a significant barrier to robust performance and can hinder progress in developing accurate screening systems. Managing the complexity of these models, particularly deep neural networks, presents its own set of challenges, and interpreting their outputs and ensuring their reliability in real-world clinical settings remain ongoing concerns. Furthermore, the iterative process of training and adapting these models to specific datasets can be time-consuming and resource-intensive. These challenges underscore the multifaceted nature of developing effective AI models for DR screening.
Addressing these obstacles requires concerted efforts from researchers, clinicians, and technologists to develop new approaches and overcome existing limitations. By doing so, the full potential of AI can be realized to transform DR screening and improve patient outcomes.
Affiliation(s)
- Zineb Farahat
- LISTD Laboratory, Mines School of Rabat, Rabat 10000, Morocco; Cheikh Zaïd Foundation Medical Simulation Center, Rabat 10000, Morocco.
- Nabila Zrira
- LISTD Laboratory, Mines School of Rabat, Rabat 10000, Morocco
- Yasmine Bennani
- Cheikh Zaïd Ophthalmic Center, Cheikh Zaïd International University Hospital, Rabat 10000, Morocco; Institut Supérieur d'Ingénierie et Technologies de Santé/Faculté de Médecine Abulcasis, Université Internationale Abulcasis des Sciences de la Santé, Rabat 10000, Morocco
- Soufiane Bencherif
- Cheikh Zaïd Ophthalmic Center, Cheikh Zaïd International University Hospital, Rabat 10000, Morocco; Institut Supérieur d'Ingénierie et Technologies de Santé/Faculté de Médecine Abulcasis, Université Internationale Abulcasis des Sciences de la Santé, Rabat 10000, Morocco
- Safia Benamar
- Cheikh Zaïd Ophthalmic Center, Cheikh Zaïd International University Hospital, Rabat 10000, Morocco; Institut Supérieur d'Ingénierie et Technologies de Santé/Faculté de Médecine Abulcasis, Université Internationale Abulcasis des Sciences de la Santé, Rabat 10000, Morocco
- Mohammed Belmekki
- Cheikh Zaïd Ophthalmic Center, Cheikh Zaïd International University Hospital, Rabat 10000, Morocco; Institut Supérieur d'Ingénierie et Technologies de Santé/Faculté de Médecine Abulcasis, Université Internationale Abulcasis des Sciences de la Santé, Rabat 10000, Morocco
- Mohamed Nabil Ngote
- LISTD Laboratory, Mines School of Rabat, Rabat 10000, Morocco; Institut Supérieur d'Ingénierie et Technologies de Santé/Faculté de Médecine Abulcasis, Université Internationale Abulcasis des Sciences de la Santé, Rabat 10000, Morocco
- Kawtar Megdiche
- Cheikh Zaïd Foundation Medical Simulation Center, Rabat 10000, Morocco
4
Yao J, Lim J, Lim GYS, Ong JCL, Ke Y, Tan TF, Tan TE, Vujosevic S, Ting DSW. Novel artificial intelligence algorithms for diabetic retinopathy and diabetic macular edema. Eye and Vision (London, England) 2024; 11:23. [PMID: 38880890] [PMCID: PMC11181581] [DOI: 10.1186/s40662-024-00389-y]
Abstract
BACKGROUND Diabetic retinopathy (DR) and diabetic macular edema (DME) are major causes of visual impairment that challenge global vision health. New strategies are needed to tackle these growing global health problems, and the integration of artificial intelligence (AI) into ophthalmology has the potential to revolutionize DR and DME management to meet these challenges. MAIN TEXT This review discusses the latest AI-driven methodologies in the context of DR and DME in terms of disease identification, patient-specific disease profiling, and short-term and long-term management. This includes current screening and diagnostic systems and their real-world implementation, lesion detection and analysis, disease progression prediction, and treatment response models. It also highlights the technical advancements that have been made in these areas. Despite these advancements, there are obstacles to the widespread adoption of these technologies in clinical settings, including regulatory and privacy concerns, the need for extensive validation, and integration with existing healthcare systems. We also explore the disparity between the potential of AI models and their actual effectiveness in real-world applications. CONCLUSION AI has the potential to revolutionize the management of DR and DME, offering more efficient and precise tools for healthcare professionals. However, overcoming challenges in deployment, regulatory compliance, and patient privacy is essential for these technologies to realize their full potential. Future research should aim to bridge the gap between technological innovation and clinical application, ensuring AI tools integrate seamlessly into healthcare workflows to enhance patient outcomes.
Affiliation(s)
- Jie Yao
- Singapore National Eye Centre, Singapore Eye Research Institute, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- Joshua Lim
- Singapore National Eye Centre, Singapore Eye Research Institute, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- Gilbert Yong San Lim
- Duke-NUS Medical School, Singapore, Singapore
- SingHealth AI Health Program, Singapore, Singapore
- Jasmine Chiat Ling Ong
- Duke-NUS Medical School, Singapore, Singapore
- Division of Pharmacy, Singapore General Hospital, Singapore, Singapore
- Yuhe Ke
- Department of Anesthesiology and Perioperative Science, Singapore General Hospital, Singapore, Singapore
- Ting Fang Tan
- Singapore National Eye Centre, Singapore Eye Research Institute, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- Tien-En Tan
- Singapore National Eye Centre, Singapore Eye Research Institute, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- Stela Vujosevic
- Department of Biomedical, Surgical and Dental Sciences, University of Milan, Milan, Italy
- Eye Clinic, IRCCS MultiMedica, Milan, Italy
- Daniel Shu Wei Ting
- Singapore National Eye Centre, Singapore Eye Research Institute, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- Duke-NUS Medical School, Singapore, Singapore
- SingHealth AI Health Program, Singapore, Singapore
5
Siami M, Barszcz T, Wodecki J, Zimroz R. Semantic segmentation of thermal defects in belt conveyor idlers using thermal image augmentation and U-Net-based convolutional neural networks. Sci Rep 2024; 14:5748. [PMID: 38459162] [PMCID: PMC10923815] [DOI: 10.1038/s41598-024-55864-2]
Abstract
The belt conveyor (BC) is the main means of horizontal transportation of bulk materials at mining sites. A sudden fault in BC modules may cause unexpected stops in production lines. With the increasing number of applications of inspection mobile robots in condition monitoring (CM) of industrial infrastructure in hazardous environments, in this article we introduce an image processing pipeline for automatic segmentation of thermal defects in thermal images captured from BC idlers using a mobile robot. This study is motivated by the fact that CM of idler temperature is an important task for preventing sudden breakdowns in BC system networks. We compared the performance of three different types of U-Net-based convolutional neural network architectures for the identification of thermal anomalies using a small number of hand-labeled thermal images. Experiments on the test data set showed that the attention residual U-Net with binary cross entropy as the loss function handled the semantic segmentation problem better than our previous approach and the other U-Net variants studied.
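The binary cross entropy supervision mentioned above corresponds to an ordinary binary-segmentation training step. The PyTorch sketch below is a rough illustration under our own assumptions (a model returning logits of shape (N, 1, H, W) and a caller-supplied optimizer), not the authors' code.

```python
import torch.nn as nn

def train_step(model, images, masks, optimizer):
    """One binary-cross-entropy training step for defect segmentation.

    Assumes `model(images)` returns raw logits of shape (N, 1, H, W)
    and `masks` holds the matching binary ground truth.
    """
    model.train()
    criterion = nn.BCEWithLogitsLoss()
    optimizer.zero_grad()
    logits = model(images)
    loss = criterion(logits, masks.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```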
Affiliation(s)
- Mohammad Siami
- AMC Vibro Sp. z o.o., Pilotow 2e, 31-462, Kraków, Poland.
- Tomasz Barszcz
- Faculty of Mechanical Engineering and Robotics, AGH University, Al. Mickiewicza 30, 30-059, Kraków, Poland
- Jacek Wodecki
- Faculty of Geoengineering, Mining and Geology, Wroclaw University of Science and Technology, Na Grobli 15, 50-421, Wroclaw, Poland
- Radoslaw Zimroz
- Faculty of Geoengineering, Mining and Geology, Wroclaw University of Science and Technology, Na Grobli 15, 50-421, Wroclaw, Poland
6
Gao W, Fan B, Fang Y, Song N. Lightweight and multi-lesion segmentation model for diabetic retinopathy based on the fusion of mixed attention and ghost feature mapping. Comput Biol Med 2024; 169:107854. [PMID: 38109836] [DOI: 10.1016/j.compbiomed.2023.107854]
Abstract
Diabetic retinopathy is the main cause of blindness, and lesion segmentation is an important basic task for the diagnosis of this disease. The main lesions include soft and hard exudates, microaneurysms, and hemorrhages. However, segmenting these four types of lesions is difficult because of their variability in size and contrast and their high inter-type similarity. Currently, many network models have a large number of parameters and require complex computation, and most segmentation models for diabetic retinopathy focus on only one type of lesion. In this study, a lightweight algorithm based on BiSeNet V2 was proposed for the segmentation of multiple lesions in diabetic retinopathy fundus images. First, a hybrid attention module was embedded in the semantic branch of BiSeNet V2 at the 8- and 16-fold downsampling stages, which helped reassign deep feature-map weights and enhanced the ability to extract local key features. Second, a ghost feature-mapping unit was used to optimize the traditional convolution layers and further reduce the computational cost. Third, a new loss function based on a dynamic threshold was applied to supervise training by increasing the weights of high-loss difficult samples, which enhanced the model's attention to small targets. In experiments on the IDRiD dataset, we conducted an ablation study to verify the effectiveness of each component and compared the proposed model, BiSeNet V2-Pro, with several state-of-the-art models. In comparison with the baseline BiSeNet V2, the segmentation performance of BiSeNet V2-Pro improved by 12.17%, 11.44%, and 8.49% in terms of Sensitivity (SEN), Intersection over Union (IoU), and Dice coefficient (DICE), respectively; specifically, the IoU for microaneurysms reached 0.5716. Compared with other methods, the segmentation speed was significantly improved while segmentation accuracy was maintained, and the number of model parameters was lower. These results demonstrate the superiority of BiSeNet V2-Pro in the multi-lesion segmentation of diabetic retinopathy.
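The "ghost feature-mapping unit" refers to the GhostNet idea of producing part of the feature maps with cheap depthwise convolutions rather than full convolutions. The block below is a generic PyTorch sketch of that idea; whether BiSeNet V2-Pro uses exactly this layout is an assumption on our part.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """GhostNet-style block: half the output maps come from a normal
    convolution, the other half from a cheap depthwise convolution
    applied to those primary maps (assumes an even `out_channels`)."""

    def __init__(self, in_channels, out_channels, kernel_size=1, cheap_kernel=3):
        super().__init__()
        primary = out_channels // 2
        self.primary_conv = nn.Sequential(
            nn.Conv2d(in_channels, primary, kernel_size,
                      padding=kernel_size // 2, bias=False),
            nn.BatchNorm2d(primary), nn.ReLU(inplace=True))
        self.cheap_op = nn.Sequential(
            nn.Conv2d(primary, out_channels - primary, cheap_kernel,
                      padding=cheap_kernel // 2, groups=primary, bias=False),
            nn.BatchNorm2d(out_channels - primary), nn.ReLU(inplace=True))

    def forward(self, x):
        y = self.primary_conv(x)
        return torch.cat([y, self.cheap_op(y)], dim=1)
```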
Affiliation(s)
- Weiwei Gao
- Institute of Mechanical and Automotive Engineering, Shanghai University of Engineering Science, Shanghai 201620, China.
- Bo Fan
- Institute of Mechanical and Automotive Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
- Yu Fang
- Institute of Mechanical and Automotive Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
- Nan Song
- Department of Ophthalmology, Eye & ENT Hospital of University, Shanghai 200031, China
7
Wu H, Niyogisubizo J, Zhao K, Meng J, Xi W, Li H, Pan Y, Wei Y. A Weakly Supervised Learning Method for Cell Detection and Tracking Using Incomplete Initial Annotations. Int J Mol Sci 2023; 24:16028. [PMID: 38003217] [PMCID: PMC10670924] [DOI: 10.3390/ijms242216028]
Abstract
The automatic detection of cells in microscopy image sequences is a significant task in biomedical research. However, cells in routine microscopy images, which are acquired while the cells constantly divide and differentiate, are notoriously difficult to detect because their appearance and number keep changing. Recently, convolutional neural network (CNN)-based methods have made significant progress in cell detection and tracking. However, these approaches require large amounts of manually annotated data for fully supervised training, which is time-consuming and often requires professional researchers. To alleviate this tedious and labor-intensive annotation cost, we propose a novel weakly supervised cell detection and tracking framework that trains the deep neural network using incomplete initial labels. Our approach uses incomplete cell markers obtained from fluorescent images for initial training on the Induced Pluripotent Stem (iPS) cell dataset, which is rarely studied for cell detection and tracking. During training, the incomplete initial labels are updated iteratively by combining detection and tracking results to obtain a model with better robustness. Our method was evaluated on two fields of the iPS cell dataset using the cell detection accuracy (DET) metric from the Cell Tracking Challenge (CTC) initiative, achieving 0.862 and 0.924 DET, respectively. The transferability of the developed model was tested using the public dataset Fluo-N2DH-GOWT1 from the CTC, which contains two datasets with reference annotations. We randomly removed parts of the annotations in each labeled dataset to simulate incomplete initial annotations on the public data. After training the model on the two datasets with labels comprising only 10% of the cell markers, the DET improved from 0.130 to 0.903 and from 0.116 to 0.877, respectively. When trained with labels comprising 60% of the cell markers, the performance was better than that of the model trained with the fully supervised learning method. This outcome indicates that the model's performance improved as the quality of the labels used for training increased.
Affiliation(s)
- Hao Wu
- Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Jovial Niyogisubizo
- Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Keliang Zhao
- Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Jintao Meng
- Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Wenhui Xi
- Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Hongchang Li
- Institute of Biomedicine and Biotechnology, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Yi Pan
- College of Computer Science and Control Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Yanjie Wei
- Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
8
Farahat Z, Zrira N, Souissi N, Benamar S, Belmekki M, Ngote MN, Megdiche K. Application of Deep Learning Methods in a Moroccan Ophthalmic Center: Analysis and Discussion. Diagnostics (Basel) 2023; 13:1694. [PMID: 37238179] [DOI: 10.3390/diagnostics13101694]
Abstract
Diabetic retinopathy (DR) remains one of the world's most frequent eye illnesses, leading to vision loss among working-aged individuals. Hemorrhages and exudates are examples of DR signs. Artificial intelligence (AI), particularly deep learning (DL), is poised to impact nearly every aspect of human life and to gradually transform medical practice. Insight into the condition of the retina is becoming more accessible thanks to major advancements in diagnostic technology, and AI approaches can be used to assess large morphological datasets derived from digital images in a rapid and noninvasive manner. Computer-aided diagnosis tools for the automatic detection of early-stage DR signs will ease the pressure on clinicians. In this work, we apply two methods to the color fundus images taken on-site at the Cheikh Zaïd Foundation's Ophthalmic Center in Rabat to detect both exudates and hemorrhages. First, we apply the U-Net method to segment exudates and hemorrhages, marking them in red and green, respectively. Second, the You Only Look Once version 5 (YOLOv5) method identifies the presence of hemorrhages and exudates in an image and predicts a probability for each bounding box. The proposed segmentation method obtained a specificity of 85%, a sensitivity of 85%, and a Dice score of 85%. The detection software successfully detected 100% of diabetic retinopathy signs, whereas the expert doctor detected 99% of DR signs and the resident doctor detected 84%.
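The red/green visualization step described above amounts to overlaying the two binary masks on the fundus image. The snippet below is a simple NumPy/OpenCV illustration; the colour assignment and blending weight are assumptions, not the paper's exact rendering.

```python
import cv2

def overlay_lesions(fundus_bgr, exudate_mask, hemorrhage_mask, alpha=0.5):
    """Paint exudate pixels red and hemorrhage pixels green on a BGR image.

    Both masks are HxW arrays with non-zero values at lesion pixels;
    `alpha` is an illustrative blending weight.
    """
    colored = fundus_bgr.copy()
    colored[exudate_mask > 0] = (0, 0, 255)      # red in BGR order
    colored[hemorrhage_mask > 0] = (0, 255, 0)   # green
    return cv2.addWeighted(fundus_bgr, 1.0 - alpha, colored, alpha, 0)
```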
Affiliation(s)
- Zineb Farahat
- LISTD Laboratory, Ecole Nationale Supérieure des Mines de Rabat, Rabat 10000, Morocco
- Cheikh Zaïd Foundation Medical Simulation Center, Rabat 10000, Morocco
- Nabila Zrira
- LISTD Laboratory, Ecole Nationale Supérieure des Mines de Rabat, Rabat 10000, Morocco
- Nissrine Souissi
- LISTD Laboratory, Ecole Nationale Supérieure des Mines de Rabat, Rabat 10000, Morocco
- Safia Benamar
- Cheikh Zaïd Ophthalmic Center, Cheikh Zaïd International University Hospital, Rabat 10000, Morocco
- Institut Supérieur d'Ingénierie et Technologies de Santé/Faculté de Médecine Abulcasis, Université Internationale Abulcasis des Sciences de la Santé, Rabat 10000, Morocco
- Mohammed Belmekki
- Cheikh Zaïd Ophthalmic Center, Cheikh Zaïd International University Hospital, Rabat 10000, Morocco
- Institut Supérieur d'Ingénierie et Technologies de Santé/Faculté de Médecine Abulcasis, Université Internationale Abulcasis des Sciences de la Santé, Rabat 10000, Morocco
- Mohamed Nabil Ngote
- LISTD Laboratory, Ecole Nationale Supérieure des Mines de Rabat, Rabat 10000, Morocco
- Institut Supérieur d'Ingénierie et Technologies de Santé/Faculté de Médecine Abulcasis, Université Internationale Abulcasis des Sciences de la Santé, Rabat 10000, Morocco
- Kawtar Megdiche
- Cheikh Zaïd Foundation Medical Simulation Center, Rabat 10000, Morocco
9
Pedada KR, A. BR, Patro KK, Allam JP, Jamjoom MM, Samee NA. A novel approach for brain tumour detection using deep learning based technique. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104549]
10
Bhakar S, Sinwar D, Pradhan N, Dhaka VS, Cherrez-Ojeda I, Parveen A, Hassan MU. Computational Intelligence-Based Disease Severity Identification: A Review of Multidisciplinary Domains. Diagnostics (Basel) 2023; 13:1212. [PMID: 37046431] [PMCID: PMC10093052] [DOI: 10.3390/diagnostics13071212]
Abstract
Disease severity identification using computational intelligence-based approaches is gaining popularity nowadays. Artificial intelligence and deep-learning-assisted approaches are proving to be significant in the rapid and accurate diagnosis of several diseases. In addition to disease identification, these approaches have the potential to identify the severity of a disease. The problem of disease severity identification can be considered a multi-class classification task in which the class labels are the severity levels of the disease. Plenty of computational intelligence-based solutions have been presented by researchers for severity identification. This paper presents a comprehensive review of recent approaches for identifying disease severity levels using computational intelligence-based approaches. We followed the PRISMA guidelines and compiled several works from the last decade, published by well-known publishers such as MDPI, Springer, IEEE, and Elsevier, related to the severity identification of multidisciplinary diseases. This article is devoted to the severity identification of two main diseases, viz. Parkinson's disease and diabetic retinopathy; the severity identification of a few other diseases, such as COVID-19, autonomic nervous system dysfunction, tuberculosis, sepsis, sleep apnea, psychosis, traumatic brain injury, breast cancer, knee osteoarthritis, and Alzheimer's disease, is also briefly covered. Each work has been carefully examined with respect to its methodology, the dataset used, the type of disease, and several performance metrics such as accuracy and specificity. In addition, we present a few public repositories that can be utilized to conduct research on disease severity identification. We hope that this review not only acts as a compendium but also provides insights to researchers working on disease severity identification using computational intelligence-based approaches.
Affiliation(s)
- Suman Bhakar
- Department of Computer and Communication Engineering, Manipal University Jaipur, Dehmi Kalan, Jaipur 303007, Rajasthan, India
- Deepak Sinwar
- Department of Computer and Communication Engineering, Manipal University Jaipur, Dehmi Kalan, Jaipur 303007, Rajasthan, India
- Nitesh Pradhan
- Department of Computer Science and Engineering, Manipal University Jaipur, Dehmi Kalan, Jaipur 303007, Rajasthan, India
- Vijaypal Singh Dhaka
- Department of Computer and Communication Engineering, Manipal University Jaipur, Dehmi Kalan, Jaipur 303007, Rajasthan, India
- Ivan Cherrez-Ojeda
- Allergy and Pulmonology, Espíritu Santo University, Samborondón 0901-952, Ecuador
- Amna Parveen
- College of Pharmacy, Gachon University, Medical Campus, No. 191, Hambakmoero, Yeonsu-gu, Incheon 21936, Republic of Korea
- Muhammad Umair Hassan
- Department of ICT and Natural Sciences, Norwegian University of Science and Technology (NTNU), 6009 Ålesund, Norway
11
Xu J, Shen J, Jiang Q, Wan C, Zhou F, Zhang S, Yan Z, Yang W. A multi-modal fundus image based auxiliary location method of lesion boundary for guiding the layout of laser spot in central serous chorioretinopathy therapy. Comput Biol Med 2023; 155:106648. [PMID: 36805213] [DOI: 10.1016/j.compbiomed.2023.106648]
Abstract
An accurate lesion boundary for central serous chorioretinopathy (CSCR) is essential for guiding the ophthalmologist to arrange laser spots correctly, so that this ophthalmopathy can be treated precisely. Currently, the accuracy and speed of manually locating the CSCR lesion boundary in the clinic from a single-modal fundus image are limited by imaging quality and ophthalmologist experience, and the process suffers from poor repeatability, weak reliability, and low efficiency. Consequently, a multi-modal fundus image-based auxiliary lesion boundary location method is developed. First, an initial location module (ILM) is employed to achieve a preliminary location of the key boundary points of the CSCR lesion area on the optical coherence tomography (OCT) B-scan image, followed by a joint location module (JLM) built on reinforcement learning to further enhance the location accuracy. Second, a scanning line detection module (SLDM) is constructed to locate the lesion scanning line on the scanning laser ophthalmoscope (SLO) image, so as to facilitate the cross-modal mapping of key boundary points. Finally, a simple yet effective lesion boundary location module (LBLM) is designed to perform the automatic cross-modal mapping of key boundary points and produce the final lesion boundary. Extensive experiments show that each module performs well on its corresponding sub-task; for example, the JLM raises the correction rate (CR) of the ILM to 92.11%. These results indicate the effectiveness and feasibility of the method in providing lesion boundary guidance that helps ophthalmologists precisely arrange laser spots, and they open a new research direction for the automatic location of lesion boundaries in other fundus diseases.
Affiliation(s)
- Jianguo Xu
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, 210016, Nanjing, PR China
- Jianxin Shen
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, 210016, Nanjing, PR China
- Qin Jiang
- The Affiliated Eye Hospital of Nanjing Medical University, 210029, Nanjing, PR China
- Cheng Wan
- College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics, 211106, Nanjing, PR China
- Fen Zhou
- The Affiliated Eye Hospital of Nanjing Medical University, 210029, Nanjing, PR China
- Shaochong Zhang
- Shenzhen Eye Hospital, Jinan University, 518040, Shenzhen, PR China
- Zhipeng Yan
- The Affiliated Eye Hospital of Nanjing Medical University, 210029, Nanjing, PR China
- Weihua Yang
- Shenzhen Eye Hospital, Jinan University, 518040, Shenzhen, PR China
12
Qureshi I, Yan J, Abbas Q, Shaheed K, Riaz AB, Wahid A, Khan MWJ, Szczuko P. Medical image segmentation using deep semantic-based methods: A review of techniques, applications and emerging trends. Information Fusion 2023. [DOI: 10.1016/j.inffus.2022.09.031]
13
Li P, Liang L, Gao Z, Wang X. AMD-Net: Automatic subretinal fluid and hemorrhage segmentation for wet age-related macular degeneration in ocular fundus images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104262]
14
Qasmieh IA, Alquran H, Zyout A, Al-Issa Y, Mustafa WA, Alsalatie M. Automated Detection of Corneal Ulcer Using Combination Image Processing and Deep Learning. Diagnostics (Basel) 2022; 12:3204. [PMID: 36553211] [PMCID: PMC9777193] [DOI: 10.3390/diagnostics12123204]
Abstract
Corneal ulcers are among the most common eye diseases. They arise from various infections, such as bacteria, viruses, or parasites, and may lead to ocular morbidity and visual disability. Therefore, early detection can reduce the probability of progressing to visual impairment. One of the most common techniques exploited for corneal ulcer screening is slit-lamp imaging. This paper proposes two highly accurate automated systems to localize the corneal ulcer region: an image processing technique based on the Hough transform and a deep learning approach. The two methods are validated and tested on the publicly available SUSTech-SYSU database, and their accuracy is evaluated and compared. Both systems achieve an accuracy of more than 90%; the deep learning approach is more accurate than the traditional image processing technique, reaching 98.9% accuracy and a Dice similarity of 99.3%. On the other hand, the first method does not require training and optimizing an explicit model, which gives it more leverage in settings where the large training dataset needed to build reliable deep learning software for clinics is unavailable. Both proposed methods can help physicians assess the corneal ulcer level and improve treatment efficiency.
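The Hough-transform branch of this work localizes the roughly circular corneal region; a minimal circular Hough transform sketch with OpenCV is shown below for orientation only. The file name and all detector parameters are illustrative guesses, not the values used in the paper.

```python
import cv2
import numpy as np

# Placeholder path; grayscale conversion plus median blur stabilizes circle detection.
img = cv2.imread("slit_lamp.png")
gray = cv2.medianBlur(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 5)

# Circular Hough transform; dp, minDist, and the radius bounds are guesses
# that would need tuning to the SUSTech-SYSU images.
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2,
                           minDist=gray.shape[0] // 2,
                           param1=100, param2=40, minRadius=50, maxRadius=300)

if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(img, (int(x), int(y)), int(r), (0, 255, 0), 2)  # candidate corneal region
    cv2.imwrite("cornea_candidates.png", img)
```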
Affiliation(s)
- Isam Abu Qasmieh
- Biomedical Systems and Medical Informatics Engineering, Yarmouk University, Irbid 21163, Jordan
- Hiam Alquran
- Biomedical Systems and Medical Informatics Engineering, Yarmouk University, Irbid 21163, Jordan
- Ala’a Zyout
- Biomedical Systems and Medical Informatics Engineering, Yarmouk University, Irbid 21163, Jordan
- Yazan Al-Issa
- Department of Computer Engineering, Yarmouk University, Irbid 21163, Jordan
- Wan Azani Mustafa
- Faculty of Electrical Engineering & Technology, Campus Pauh Putra, Universiti Malaysia Perlis (UniMAP), Arau, Perlis 02600, Malaysia
- Advanced Computing (AdvComp), Centre of Excellence (CoE), Campus Pauh Putra, Universiti Malaysia Perlis (UniMAP), Arau, Perlis 02600, Malaysia
- Mohammed Alsalatie
- The Institute of Biomedical Technology, King Hussein Medical Center, Royal Jordanian Medical Service, Amman 11855, Jordan
15
Outlier Based Skimpy Regularization Fuzzy Clustering Algorithm for Diabetic Retinopathy Image Segmentation. Symmetry (Basel) 2022. [DOI: 10.3390/sym14122512]
Abstract
Blood vessels are harmed in diabetic retinopathy (DR), a condition that impairs vision. With modern healthcare research and technology, artificial intelligence and processing units are used to aid in the diagnosis of this syndrome and in the study of diagnostic procedures. The correct assessment of DR severity requires the segmentation of lesions from fundus pictures, and the manual grading method becomes highly difficult and time-consuming because of the wide range of lesion morphologies, numbers, and sizes. For image segmentation, traditional fuzzy clustering techniques have two major drawbacks. First, clustering based on fuzzy memberships is more susceptible to outliers. Second, because of the lack of local spatial information, these techniques often result in over-segmentation of images. To address these issues, this research proposes an outlier-based skimpy regularization fuzzy clustering algorithm (OSR-FCA) for image segmentation. Clustering based on fuzzy membership with sparseness can be improved by incorporating a Gaussian metric regularization into the objective function. The proposed study used the symmetry information contained in the image data to conduct the image segmentation with the fuzzy clustering technique while avoiding over-segmenting relevant data. This resulted in a lower proportion of noisy data and better clustering results. The classification was carried out by a deep learning technique, a convolutional neural network (CNN). Two publicly available datasets were used for the validation process with different metrics. The experimental results showed that the proposed segmentation technique achieved 97.16% accuracy and the classification technique achieved 97.26% accuracy on the MESSIDOR dataset.
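For readers unfamiliar with the underlying clustering scheme, a plain fuzzy c-means iteration (without the paper's outlier handling and sparseness regularization) can be sketched in NumPy as follows. This is background only, not the OSR-FCA algorithm itself.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, eps=1e-9):
    """Classic fuzzy c-means on feature vectors X of shape (n_samples, n_features).

    Background sketch only: the outlier-robust sparseness regularization
    of OSR-FCA is not included here.
    """
    rng = np.random.default_rng(0)
    u = rng.random((n_clusters, X.shape[0]))
    u /= u.sum(axis=0, keepdims=True)                 # memberships sum to 1 per sample
    for _ in range(n_iter):
        um = u ** m
        centers = (um @ X) / um.sum(axis=1, keepdims=True)
        dist = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + eps
        inv = dist ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=0, keepdims=True)      # standard FCM membership update
    return centers, u
```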
16
Shaukat N, Amin J, Sharif M, Azam F, Kadry S, Krishnamoorthy S. Three-Dimensional Semantic Segmentation of Diabetic Retinopathy Lesions and Grading Using Transfer Learning. J Pers Med 2022; 12:1454. [PMID: 36143239] [PMCID: PMC9501488] [DOI: 10.3390/jpm12091454]
Abstract
Diabetic retinopathy (DR) is a serious disease that leads to vision impairment when it is left undetected. In this article, learning-based techniques are presented for the segmentation and classification of DR lesions. The pre-trained Xception model is utilized for deep feature extraction in the segmentation phase, and the extracted features are fed to Deeplabv3 for semantic segmentation. For the training of the segmentation model, an experiment is performed to select the optimal hyperparameters that provide effective segmentation results in the testing phase. The multi-classification model extracts features using the fully connected (FC) MatMul layer of EfficientNet-b0 and the pool-10 layer of SqueezeNet. The extracted features from both models are fused serially, giving a dimension of N × 2020, from which the best N × 1032 features are chosen by applying the Marine Predators Algorithm (MPA). The multi-classification of the DR lesions into grades 0, 1, 2, and 3 is performed using neural network and KNN classifiers. The performance of the proposed method is validated on open-access datasets such as DIARETDB1, e-ophtha-EX, IDRiD, and Messidor, and the obtained results are better than those of the latest published works.
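The "serial fusion" of the two deep feature vectors is simply a concatenation, after which a KNN classifier can be fitted. The sketch below illustrates only that step with scikit-learn; the array names and neighbour count are assumptions, and the MPA feature-selection stage is omitted.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def fuse_and_classify(train_effnet, train_squeezenet, train_labels,
                      test_effnet, test_squeezenet, k=5):
    """Serially fuse (concatenate) two per-image feature matrices and
    classify DR grades 0-3 with KNN; the MPA selection step is omitted."""
    X_train = np.concatenate([train_effnet, train_squeezenet], axis=1)
    X_test = np.concatenate([test_effnet, test_squeezenet], axis=1)
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_train, train_labels)
    return knn.predict(X_test)
```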
Affiliation(s)
- Natasha Shaukat
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt 47010, Pakistan
- Javeria Amin
- Department of Computer Science, University of Wah, Wah Campus, Wah Cantt 47010, Pakistan
- Muhammad Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt 47010, Pakistan
- Faisal Azam
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt 47010, Pakistan
- Seifedine Kadry
- Department of Applied Data Science, Noroff University College, 4612 Kristiansand, Norway
- Sujatha Krishnamoorthy
- Zhejiang Bioinformatics International Science and Technology Cooperation Center, Wenzhou-Kean University, Wenzhou 325060, China
- Wenzhou Municipal Key Lab of Applied Biomedical and Biopharmaceutical Informatics, Wenzhou-Kean University, Wenzhou 325060, China
17
Wang H, Zhou Y, Zhang J, Lei J, Sun D, Xu F, Xu X. Anomaly segmentation in retinal images with Poisson-blending data augmentation. Med Image Anal 2022; 81:102534. [PMID: 35842977] [DOI: 10.1016/j.media.2022.102534]
Abstract
Diabetic retinopathy (DR) is one of the most important complications of diabetes. Accurate segmentation of DR lesions is of great importance for the early diagnosis of DR. However, simultaneous segmentation of multi-type DR lesions is technically challenging because of 1) the lack of pixel-level annotations and 2) the large diversity between different types of DR lesions. In this study, first, we propose a novel Poisson-blending data augmentation (PBDA) algorithm to generate synthetic images, which can be easily utilized to expand the existing training data for lesion segmentation. We perform extensive experiments to recognize the important attributes in the PBDA algorithm. We show that position constraints are of great importance and that the synthesis density of one type of lesion has a joint influence on the segmentation of other types of lesions. Second, we propose a convolutional neural network architecture, named DSR-U-Net++ (i.e., DC-SC residual U-Net++), for the simultaneous segmentation of multi-type DR lesions. Ablation studies showed that the mean area under precision recall curve (AUPR) for all four types of lesions increased by >5% with PBDA. The proposed DSR-U-Net++ with PBDA outperformed the state-of-the-art methods by 1.7%-9.9% on the Indian Diabetic Retinopathy Image Dataset (IDRiD) and 67.3% on the e-ophtha dataset with respect to mean AUPR. The developed method would be an efficient tool to generate large-scale task-specific training data for other medical anomaly segmentation tasks.
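Poisson blending itself is exposed in OpenCV as seamlessClone, which conveys how a lesion patch can be composited into a fundus image. The snippet below illustrates only that primitive, not the full PBDA pipeline (position constraints and synthesis-density control are not included), and the function arguments are caller-supplied placeholders.

```python
import cv2

def blend_lesion(fundus_bgr, lesion_patch_bgr, patch_mask, center_xy):
    """Composite a lesion patch into a fundus image with Poisson blending.

    `patch_mask` is an 8-bit mask of the lesion inside the patch and
    `center_xy` the (x, y) target position, both supplied by the caller.
    """
    return cv2.seamlessClone(lesion_patch_bgr, fundus_bgr, patch_mask,
                             center_xy, cv2.NORMAL_CLONE)
```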
Affiliation(s)
- Hualin Wang
- The Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an, 710049, China
- Yuhong Zhou
- The Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an, 710049, China
- Jiong Zhang
- Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, 315300, China
- Jianqin Lei
- Department of Ophthalmology, First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, 710049, China
- Dongke Sun
- Jiangsu Key Laboratory for Design and Manufacture of Micro-Nano Biomedical Instruments, Southeast University, Nanjing, 211189, China
- Feng Xu
- The Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an, 710049, China
- Xiayu Xu
- The Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an, 710049, China; Zhejiang Research Institute of Xi'an Jiaotong University, Hangzhou, 311215, China
18
Applying supervised contrastive learning for the detection of diabetic retinopathy and its severity levels from fundus images. Comput Biol Med 2022; 146:105602. [DOI: 10.1016/j.compbiomed.2022.105602]
19
A dark and bright channel prior guided deep network for retinal image quality assessment. Biocybern Biomed Eng 2022. [DOI: 10.1016/j.bbe.2022.06.002]
20
Xia H, Rao Z, Zhou Z. A multi-scale gated network for retinal hemorrhage detection. Appl Intell 2022. [DOI: 10.1007/s10489-022-03476-6]
21
A Few-Shot Dental Object Detection Method Based on a Priori Knowledge Transfer. Symmetry (Basel) 2022. [DOI: 10.3390/sym14061129]
Abstract
With the continuous improvement in oral health awareness, people's demand for oral health diagnosis has also increased. Dental object detection is a key step in automated dental diagnosis; however, because of the particularity of medical data, researchers usually cannot obtain sufficient training data. Therefore, this study proposes a dental object detection method for small datasets based on tooth semantics, structural information feature extraction, and a priori knowledge transfer, called the segmentation, points, segmentation, and classification network (SPSC-NET). In the region-of-interest extraction stage, the SPSC-NET method converts the teeth X-ray image into an a priori knowledge image composed of the tooth edges and the semantic segmentation image; the network used to extract this a priori knowledge has a symmetric structure, and it then generates the key points of the object instances. Next, it uses these key points, i.e., the dental semantic segmentation image and the dental edge image, to obtain the object instance image (the positioning of the teeth). Using 10 training images, the test precision and recall of the tooth object center points obtained by the SPSC-NET method were between 99% and 100%. In the classification stage, the SPSC-NET identified the single-instance segmentation image, generated by migrating the dental object area, together with the edge image and the semantic segmentation image, as a priori knowledge. Under the premise of using the same deep neural network classification model, classification with this a priori knowledge was 20% more accurate than ordinary classification methods. For the overall object detection performance, the SPSC-NET's average precision (AP) value was more than 92%, which is better than that of the transfer-based faster region-based convolutional neural network (Faster-RCNN) object detection model; its AP and mean intersection-over-union (mIoU) were 14.72% and 19.68% better than those of the transfer-based Faster-RCNN model, respectively.
22
Erwin, Safmi A, Desiani A, Suprihatin B, Fathoni. The Augmentation Data of Retina Image for Blood Vessel Segmentation Using U-Net Convolutional Neural Network Method. International Journal of Computational Intelligence and Applications 2022. [DOI: 10.1142/s1469026822500043]
Abstract
The retina is one of the most important parts of the eye, and proper feature extraction from it can be the first step in detecting a disease. The morphology of the retinal blood vessels can be used to identify and classify a disease, and steps such as segmentation and analysis of the retinal blood vessels can assist medical personnel in assessing disease severity. In this paper, vessel segmentation using the U-Net architecture, a Convolutional Neural Network (CNN) method, is proposed to train a semantic segmentation model of the retinal blood vessels. In addition, the Contrast Limited Adaptive Histogram Equalization (CLAHE) method is used to increase the contrast of the grayscale image, and a median filter is used to obtain better image quality. Data augmentation is also used to increase the effective size of the available dataset. The proposed method allows for easier implementation. In this study, the STARE dataset was used, and the resulting accuracy, sensitivity, specificity, precision, and F1-score reached 97.64%, 78.18%, 99.20%, 88.77%, and 82.91%, respectively.
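The CLAHE and median-filter preprocessing named above maps directly onto OpenCV primitives; the sketch below shows one plausible arrangement. The clip limit, tile size, kernel size, and the choice of the green channel are our assumptions, not necessarily the paper's settings.

```python
import cv2

# Placeholder path; the green channel usually shows the best vessel contrast.
img = cv2.imread("stare_image.ppm")
green = img[:, :, 1]

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(green)            # contrast-limited adaptive histogram equalization
smoothed = cv2.medianBlur(enhanced, 3)   # median filter to suppress residual noise

cv2.imwrite("preprocessed.png", smoothed)
```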
Affiliation(s)
- Erwin
- Department of Computer Engineering, University of Sriwijaya, Jalan Raya Palembang-Unsri KM 32, Indralaya, Indonesia
- Asri Safmi
- Department of Computer Engineering, University of Sriwijaya, Jalan Raya Palembang-Unsri KM 32, Indralaya, Indonesia
- Anita Desiani
- Department of Mathematics, University of Sriwijaya, Jalan Raya Palembang-Unsri KM 32, Indralaya, South of Sumatera, Indonesia
- Bambang Suprihatin
- Department of Mathematics, University of Sriwijaya, Jalan Raya Palembang-Unsri KM 32, Indralaya, South of Sumatera, Indonesia
- Fathoni
- Department of Information System, University of Sriwijaya, Jalan Raya Palembang-Unsri KM 32, Indralaya, South of Sumatera, Indonesia
23
24
25
Guo Y, Peng Y. CARNet: Cascade attentive RefineNet for multi-lesion segmentation of diabetic retinopathy images. Complex Intell Syst 2022. [DOI: 10.1007/s40747-021-00630-4]
Abstract
Diabetic retinopathy is the leading cause of blindness in the working-age population. Lesion segmentation from fundus images helps ophthalmologists accurately diagnose and grade diabetic retinopathy. However, the task of lesion segmentation is full of challenges due to the complex structure, the various sizes, and the inter-class similarity with other fundus tissues. To address the issue, this paper proposes a cascade attentive RefineNet (CARNet) for automatic and accurate multi-lesion segmentation of diabetic retinopathy, which can make full use of the fine local details and coarse global information from the fundus image. CARNet is composed of a global image encoder, a local image encoder, and an attention refinement decoder. We take the whole image and the patch image as dual inputs and feed them to ResNet50 and ResNet101, respectively, for downsampling to extract lesion features. The high-level refinement decoder uses a dual attention mechanism to integrate the same-level features in the two encoders with the output of the low-level attention refinement module for multiscale information fusion, which focuses the model on the lesion area to generate accurate predictions. We evaluated the segmentation performance of the proposed CARNet on the IDRiD, e-ophtha, and DDR datasets. Extensive comparison experiments and ablation studies on the various datasets demonstrate that the proposed framework outperforms state-of-the-art approaches and has better accuracy and robustness. It not only overcomes the interference of similar tissues and noise to achieve accurate multi-lesion segmentation, but also preserves the contour details and shape features of small lesions without overloading GPU memory usage.
26
Deep learning for diabetic retinopathy detection and classification based on fundus images: A review. Comput Biol Med 2021; 135:104599. [PMID: 34247130] [DOI: 10.1016/j.compbiomed.2021.104599]
Abstract
Diabetic Retinopathy is a retina disease caused by diabetes mellitus and it is the leading cause of blindness globally. Early detection and treatment are necessary in order to delay or avoid vision deterioration and vision loss. To that end, many artificial-intelligence-powered methods have been proposed by the research community for the detection and classification of diabetic retinopathy on fundus retina images. This review article provides a thorough analysis of the use of deep learning methods at the various steps of the diabetic retinopathy detection pipeline based on fundus images. We discuss several aspects of that pipeline, ranging from the datasets that are widely used by the research community, the preprocessing techniques employed and how these accelerate and improve the models' performance, to the development of such deep learning models for the diagnosis and grading of the disease as well as the localization of the disease's lesions. We also discuss certain models that have been applied in real clinical settings. Finally, we conclude with some important insights and provide future research directions.
27
FFU-Net: Feature Fusion U-Net for Lesion Segmentation of Diabetic Retinopathy. Biomed Research International 2021; 2021:6644071. [PMID: 33490274] [PMCID: PMC7801055] [DOI: 10.1155/2021/6644071]
Abstract
Diabetic retinopathy is one of the main causes of blindness in human eyes, and lesion segmentation is an important basic task for the diagnosis of diabetic retinopathy. Because the lesion areas are small and scattered across fundus images, it is laborious to segment diabetic retinopathy lesions effectively with the existing U-Net model. In this paper, we propose a new lesion segmentation model named FFU-Net (Feature Fusion U-Net) that enhances U-Net in the following ways. Firstly, the pooling layer in the network is replaced with a convolutional layer to reduce the spatial loss of the fundus image. Then, we integrate a multiscale feature fusion (MSFF) block into the encoders, which helps the network learn multiscale features efficiently and enriches the information carried by the skip connections and the lower-resolution decoder by fusing contextual channel attention (CCA) models. Finally, in order to solve the problems of data imbalance and misclassification, we present a Balanced Focal Loss function. In experiments on the benchmark IDRiD dataset, we conduct an ablation study to verify the effectiveness of each component and compare FFU-Net against several state-of-the-art models. In comparison with the baseline U-Net, FFU-Net improves the segmentation performance by 11.97%, 10.68%, and 5.79% on the SEN, IOU, and DICE metrics, respectively. The quantitative and qualitative results demonstrate the superiority of our FFU-Net in the task of lesion segmentation of diabetic retinopathy.
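The Balanced Focal Loss referred to above is the authors' variant; for orientation, the standard binary focal loss on which such variants are usually built can be written in PyTorch as follows (alpha and gamma are the common defaults, not the paper's values).

```python
import torch
import torch.nn.functional as F

def binary_focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Standard binary focal loss on raw logits; the paper's Balanced
    Focal Loss is a modification of this form."""
    bce = F.binary_cross_entropy_with_logits(logits, targets.float(),
                                             reduction="none")
    p_t = torch.exp(-bce)                                  # probability of the true class
    alpha_t = alpha * targets + (1.0 - alpha) * (1 - targets)
    return (alpha_t * (1.0 - p_t) ** gamma * bce).mean()
```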