1. Wang Y, Li H. A Novel Single-Sample Retinal Vessel Segmentation Method Based on Grey Relational Analysis. Sensors (Basel). 2024;24:4326. PMID: 39001106; PMCID: PMC11244310; DOI: 10.3390/s24134326.
Abstract
Accurate segmentation of retinal vessels is of great significance for the computer-aided diagnosis and treatment of many diseases. Because labeled retinal vessel samples are scarce, and because grey theory excels at problems with "few data, poor information", this paper proposes a novel grey relational method for retinal vessel segmentation. Firstly, a noise-adaptive discrimination filtering algorithm based on grey relational analysis (NADF-GRA) is designed to enhance the image. Secondly, a threshold segmentation model based on grey relational analysis (TS-GRA) is designed to segment the enhanced vessel image. Finally, a post-processing stage involving hole filling and removal of isolated pixels is applied to obtain the final segmentation output. The performance of the proposed method is evaluated with multiple measurement metrics on the publicly available DRIVE, STARE, and HRF retinal datasets. Experimental analysis showed average accuracy and specificity of 96.03% and 98.51% on the DRIVE dataset and 95.46% and 97.85% on the STARE dataset, while precision, F1-score, and Jaccard index on the HRF dataset all demonstrated high performance. The proposed method is superior to current mainstream methods.
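The grey relational machinery behind the authors' NADF-GRA and TS-GRA modules is not spelled out in the abstract. As background, here is a minimal numpy sketch of the standard grey relational coefficient and grade; the distinguishing coefficient rho = 0.5 is the conventional default, and the function names and sequences are illustrative, not taken from the paper.

```python
import numpy as np

def grey_relational_coeffs(reference, samples, rho=0.5):
    """Standard grey relational coefficients of each sample sequence
    against a reference sequence (rho is the distinguishing coefficient)."""
    reference = np.asarray(reference, dtype=float)
    samples = np.asarray(samples, dtype=float)
    delta = np.abs(samples - reference)          # pointwise deviations
    d_min, d_max = delta.min(), delta.max()
    return (d_min + rho * d_max) / (delta + rho * d_max)

def grey_relational_grade(reference, samples, rho=0.5):
    """Mean coefficient per sample: higher means more similar to reference."""
    return grey_relational_coeffs(reference, samples, rho).mean(axis=1)

ref = [0.9, 0.8, 0.95]
cands = [[0.88, 0.79, 0.94],   # close to the reference
         [0.20, 0.30, 0.10]]   # far from the reference
grades = grey_relational_grade(ref, cands)
print(grades[0] > grades[1])   # the closer sequence gets the higher grade
```

In a thresholding context, such grades can rank how closely each pixel's neighborhood tracks a vessel-like reference profile.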
Affiliation(s)
- Yating Wang
- School of Information Science and Technology, Nantong University, Nantong 226019, China
- Hongjun Li
- School of Information Science and Technology, Nantong University, Nantong 226019, China
2. Khattap MG, Abd Elaziz M, Hassan HGEMA, Elgarayhi A, Sallah M. AI-based model for automatic identification of multiple sclerosis based on enhanced sea-horse optimizer and MRI scans. Sci Rep. 2024;14:12104. PMID: 38802440; DOI: 10.1038/s41598-024-61876-9.
Abstract
This study aims to develop an AI-enhanced methodology for the expedited and accurate diagnosis of Multiple Sclerosis (MS), a chronic disease affecting the central nervous system leading to progressive impairment. Traditional diagnostic methods are slow and require substantial expertise, underscoring the need for innovative solutions. Our approach involves two phases: initially, extracting features from brain MRI images using first-order histograms, the gray level co-occurrence matrix, and local binary patterns. A unique feature selection technique combining the Sine Cosine Algorithm with the Sea-horse Optimizer is then employed to identify the most significant features. Utilizing the eHealth lab dataset, which includes images from 38 MS patients (mean age 34.1 ± 10.5 years; 17 males, 21 females) and matched healthy controls, our model achieved a remarkable 97.97% detection accuracy using the k-nearest neighbors classifier. Further validation on a larger dataset containing 262 MS cases (199 females, 63 males; mean age 31.26 ± 10.34 years) and 163 healthy individuals (109 females, 54 males; mean age 32.35 ± 10.30 years) demonstrated a 92.94% accuracy for FLAIR images and 91.25% for T2-weighted images with the Random Forest classifier, outperforming existing MS detection methods. These results highlight the potential of the proposed technique as a clinical decision-making tool for the early identification and management of MS.
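The texture features named here are standard in medical image analysis. As a toy illustration, the following numpy-only sketch builds a gray-level co-occurrence matrix (GLCM) and two Haralick-style statistics. This is not the study's pipeline (which also uses first-order histograms, local binary patterns, and Sine Cosine / Sea-horse feature selection); `glcm`, `contrast`, and `homogeneity` are illustrative helpers.

```python
import numpy as np

def glcm(img, levels=4, dx=1, dy=0):
    """Normalized co-occurrence counts of gray-level pairs at offset (dy, dx)."""
    img = np.asarray(img)
    m = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def contrast(p):
    """Haralick contrast: weights co-occurrences by squared gray-level gap."""
    i, j = np.indices(p.shape)
    return float(np.sum(p * (i - j) ** 2))

def homogeneity(p):
    """Haralick homogeneity: rewards co-occurrences of similar gray levels."""
    i, j = np.indices(p.shape)
    return float(np.sum(p / (1.0 + np.abs(i - j))))

flat = np.zeros((8, 8), dtype=int)           # perfectly uniform patch
varied = np.indices((8, 8)).sum(axis=0) % 4  # rapidly varying patch
print(contrast(glcm(flat)), contrast(glcm(varied)))  # uniform patch has zero contrast
```

Feature vectors of such statistics, computed per MRI slice, are what a downstream selector and classifier (k-NN, Random Forest) would consume.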
Affiliation(s)
- Mohamed G Khattap
- Applied Mathematical Physics Research Group, Physics Department, Faculty of Science, Mansoura University, Mansoura, 35516, Egypt
- Technology of Radiology and Medical Imaging Program, Faculty of Applied Health Sciences Technology, Galala University, Suez, 435611, Egypt
- Mohamed Abd Elaziz
- Faculty of Computer Science and Engineering, Galala University, Suez, 435611, Egypt
- Department of Electrical and Computer Engineering, Lebanese American University, Byblos, 13-5053, Lebanon
- Hend Galal Eldeen Mohamed Ali Hassan
- Technology of Radiology and Medical Imaging Program, Faculty of Applied Health Sciences Technology, Galala University, Suez, 435611, Egypt
- Diagnostic and Interventional Radiology Department, Faculty of Medicine, Ain Shams University, Cairo, 11591, Egypt
- Ahmed Elgarayhi
- Applied Mathematical Physics Research Group, Physics Department, Faculty of Science, Mansoura University, Mansoura, 35516, Egypt
- Mohammed Sallah
- Department of Physics, College of Sciences, University of Bisha, P.O. Box 344, Bisha, 61922, Saudi Arabia
3. Wang J. Optimizing support vector machine (SVM) by social spider optimization (SSO) for edge detection in colored images. Sci Rep. 2024;14:9136. PMID: 38644440; PMCID: PMC11033277; DOI: 10.1038/s41598-024-59811-z.
Abstract
Edge detection is a vital application of image processing in fields such as object detection and the identification of lesion regions in medical images. The problem is more complex for color images, which require combining information from the color layers and achieving a unified edge boundary across them. In this paper, a simple and effective method for edge detection in color images is proposed using a combination of a support vector machine (SVM) and the social spider optimization (SSO) algorithm. In the proposed method, the input color image is first converted to grayscale, and an initial estimate of the image edges is computed from it. To this end, the method uses an SVM with a radial basis function (RBF) kernel whose hyperparameters are tuned by the SSO algorithm. After the initial edges are formed, they are compared with pairwise combinations of the color layers, and the SSO algorithm refines the edge localization so as to maximize compatibility with those pairwise combinations. This process produces prominent image edges and reduces the adverse effect of noise on the final result. The performance of the proposed method was evaluated on various color images and compared with similar previous strategies. According to the results, the proposed method identifies image edges more accurately, achieving an average accuracy of 93.11% on the BSDS500 database, an improvement of at least 0.74% over the other methods.
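As a rough sketch of the tuning step only, the following uses scikit-learn's RBF-kernel SVC with a plain random search over C and gamma standing in for the SSO metaheuristic (SSO itself is not reproduced here); the toy "edge vs. non-edge" feature data are synthetic, and all names are illustrative.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-in for per-pixel feature vectors with a non-linear boundary
X = rng.normal(size=(200, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(int)

best_params, best_score = None, -np.inf
for _ in range(20):  # random search standing in for the SSO swarm
    C = 10 ** rng.uniform(-1, 2)       # log-uniform over [0.1, 100]
    gamma = 10 ** rng.uniform(-2, 1)   # log-uniform over [0.01, 10]
    score = cross_val_score(SVC(C=C, gamma=gamma, kernel="rbf"), X, y, cv=3).mean()
    if score > best_score:
        best_params, best_score = (C, gamma), score
print(best_params, best_score)
```

A population-based optimizer like SSO explores the same (C, gamma) objective, but shares information between candidate solutions instead of sampling independently.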
Affiliation(s)
- Jianfei Wang
- Suzhou Chien-Shiung Institute of Technology, Taicang, 215411, China
4. Mahapatra S, Agrawal S, Mishro PK, Panda R, Dora L, Pachori RB. A Review on Retinal Blood Vessel Enhancement and Segmentation Techniques for Color Fundus Photography. Crit Rev Biomed Eng. 2024;52:41-69. PMID: 37938183; DOI: 10.1615/critrevbiomedeng.2023049348.
Abstract
The retinal image is a trusted modality in biomedical image-based diagnosis of many ophthalmologic and cardiovascular diseases. Periodic examination of the retina can help in spotting these abnormalities at an early stage. However, to deal with today's large population, computerized retinal image analysis is preferred over manual inspection. The precise extraction of the retinal vessels is the first and decisive step for clinical applications. Every year, many more articles describing new algorithms for the problem at hand are added to the literature, yet the majority of review articles are restricted to a fairly small number of approaches, assessment indices, and databases. In this context, a comprehensive review of different vessel extraction methods is inevitable. It includes the development of a first-hand classification of these methods. A bibliometric analysis of the articles is also presented. The benefits and drawbacks of the most commonly used techniques are summarized. The primary challenges, as well as the scope of possible changes, are discussed. To enable a fair comparison, numerous assessment indices are considered. The findings of this survey could provide a new path for researchers for further work in this domain.
Affiliation(s)
- Sakambhari Mahapatra
- Department of Electronics and Telecommunication Engineering, Veer Surendra Sai University of Technology, Burla, India
- Sanjay Agrawal
- Department of Electronics and Telecommunication Engineering, Veer Surendra Sai University of Technology, Burla, India
- Pranaba K Mishro
- Department of Electronics and Telecommunication Engineering, Veer Surendra Sai University of Technology, Burla, India
- Rutuparna Panda
- Department of Electronics and Telecommunication Engineering, Veer Surendra Sai University of Technology, Burla, India
- Lingraj Dora
- Department of Electrical and Electronics Engineering, Veer Surendra Sai University of Technology, Burla, India
- Ram Bilas Pachori
- Department of Electrical Engineering, Indian Institute of Technology Indore, Indore, India
5. Tomić M, Vrabec R, Hendelja Đ, Kolarić V, Bulum T, Rahelić D. Diagnostic Accuracy of Hand-Held Fundus Camera and Artificial Intelligence in Diabetic Retinopathy Screening. Biomedicines. 2023;12:34. PMID: 38255141; PMCID: PMC10813433; DOI: 10.3390/biomedicines12010034.
Abstract
Our study aimed to assess the role of a hand-held fundus camera and an artificial intelligence (AI)-based grading system in diabetic retinopathy (DR) screening and to determine its diagnostic accuracy in detecting DR compared with clinical examination and a standard fundus camera. This cross-sectional instrument validation study, part of the International Diabetes Federation (IDF) Diabetic Retinopathy Screening Project, included 160 patients (320 eyes) with type 2 diabetes (T2DM). After standard indirect slit-lamp fundoscopy, each patient first underwent fundus photography with a standard 45° camera (VISUCAM, Zeiss) and then with a hand-held camera (TANG, Shanghai Zhi Tang Health Technology Co., Ltd.). Two retina specialists independently graded the images taken with the standard camera, while the images taken with the hand-held camera were graded using the DeepDR system and by an independent IDF ophthalmologist. The three screening methods did not differ in detecting moderate/severe nonproliferative and proliferative DR. The area under the curve, sensitivity, specificity, positive predictive value, negative predictive value, positive likelihood ratio, negative likelihood ratio, kappa (κ) agreement, diagnostic odds ratio, and diagnostic effectiveness for the hand-held camera compared to clinical examination were 0.921, 89.1%, 100%, 100%, 91.4%, infinity, 0.11, 0.86, 936.48, and 94.9%, and compared to the standard fundus camera were 0.883, 83.2%, 100%, 100%, 87.3%, infinity, 0.17, 0.78, 574.6, and 92.2%. These results suggest that fundus photography with a hand-held camera and an AI-based grading system is a quick, simple, and accurate method for the screening and early detection of DR, comparable to clinical examination and fundus photography with a standard camera.
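All of the screening metrics listed above derive from a 2×2 confusion matrix. A short sketch (the counts below are illustrative round numbers, not the study's raw data) shows why zero false positives yields 100% specificity and PPV and an infinite positive likelihood ratio:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Screening-test metrics from a 2x2 confusion matrix."""
    sens = tp / (tp + fn)                  # sensitivity (recall)
    spec = tn / (tn + fp)                  # specificity
    ppv = tp / (tp + fp)                   # positive predictive value
    npv = tn / (tn + fn)                   # negative predictive value
    lr_pos = sens / (1 - spec) if spec < 1 else float("inf")
    lr_neg = (1 - sens) / spec
    acc = (tp + tn) / (tp + fp + fn + tn)  # diagnostic effectiveness
    return sens, spec, ppv, npv, lr_pos, lr_neg, acc

# With zero false positives, specificity and PPV are 100% and LR+ is infinite
print(diagnostic_metrics(tp=89, fp=0, fn=11, tn=100))
```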
Affiliation(s)
- Martina Tomić
- Department of Ophthalmology, Vuk Vrhovac University Clinic for Diabetes, Endocrinology and Metabolic Diseases, Merkur University Hospital, Dugi dol 4a, 10000 Zagreb, Croatia
- Romano Vrabec
- Department of Ophthalmology, Vuk Vrhovac University Clinic for Diabetes, Endocrinology and Metabolic Diseases, Merkur University Hospital, Dugi dol 4a, 10000 Zagreb, Croatia
- Đurđica Hendelja
- Department of Ophthalmology, Vuk Vrhovac University Clinic for Diabetes, Endocrinology and Metabolic Diseases, Merkur University Hospital, Dugi dol 4a, 10000 Zagreb, Croatia
- Vilma Kolarić
- Department of Diabetes and Endocrinology, Vuk Vrhovac University Clinic for Diabetes, Endocrinology and Metabolic Diseases, Merkur University Hospital, Dugi dol 4a, 10000 Zagreb, Croatia
- Tomislav Bulum
- Department of Diabetes and Endocrinology, Vuk Vrhovac University Clinic for Diabetes, Endocrinology and Metabolic Diseases, Merkur University Hospital, Dugi dol 4a, 10000 Zagreb, Croatia
- School of Medicine, University of Zagreb, Šalata 3, 10000 Zagreb, Croatia
- Dario Rahelić
- Department of Diabetes and Endocrinology, Vuk Vrhovac University Clinic for Diabetes, Endocrinology and Metabolic Diseases, Merkur University Hospital, Dugi dol 4a, 10000 Zagreb, Croatia
- School of Medicine, Catholic University of Croatia, Ilica 242, 10000 Zagreb, Croatia
- School of Medicine, Josip Juraj Strossmayer University, Josipa Huttlera 4, 31000 Osijek, Croatia
6. Wang Q, Xu L, Wang L, Yang X, Sun Y, Yang B, Greenwald SE. Automatic coronary artery segmentation of CCTA images using UNet with a local contextual transformer. Front Physiol. 2023;14:1138257. PMID: 37675283; PMCID: PMC10478234; DOI: 10.3389/fphys.2023.1138257.
Abstract
Coronary artery segmentation is an essential procedure in the computer-aided diagnosis of coronary artery disease. It aims to identify and segment the regions of interest in the coronary circulation for further processing and diagnosis. Currently, automatic segmentation of coronary arteries is often unreliable because of their small size and the poor distribution of contrast medium, problems that lead to over-segmentation or omission. To improve the performance of convolutional neural network (CNN)-based coronary artery segmentation, we propose a novel automatic method, DR-LCT-UNet, with two innovative components: the Dense Residual (DR) module and the Local Contextual Transformer (LCT) module. The DR module aims to preserve unobtrusive features through dense residual connections, while the LCT module is an improved Transformer that focuses on local contextual information, so that coronary artery-related information can be better exploited. The LCT and DR modules are effectively integrated into the skip connections and encoder-decoder of the 3D segmentation network, respectively. Experiments on our CorArtTS2020 dataset show that the dice similarity coefficient (DSC), Recall, and Precision of the proposed method reached 85.8%, 86.3%, and 85.8%, respectively, outperforming 3D-UNet (taken as the reference among the 6 other chosen comparison methods) by 2.1%, 1.9%, and 2.1%.
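The three reported segmentation metrics can be computed directly from binary masks, independently of the DR-LCT-UNet architecture itself; a minimal numpy sketch:

```python
import numpy as np

def seg_metrics(pred, gt):
    """Dice similarity coefficient, recall, and precision for binary masks."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    tp = np.logical_and(pred, gt).sum()        # true-positive voxels
    dsc = 2 * tp / (pred.sum() + gt.sum())     # overlap, harmonic-mean style
    recall = tp / gt.sum()                     # fraction of vessel recovered
    precision = tp / pred.sum()                # fraction of prediction correct
    return dsc, recall, precision

gt   = np.array([[1, 1, 0, 0]])  # ground-truth vessel mask
pred = np.array([[1, 0, 0, 0]])  # prediction misses one vessel pixel
print(seg_metrics(pred, gt))     # Dice 2/3, recall 0.5, precision 1.0
```

The same functions apply unchanged to 3D volumes, since the reductions are over all elements.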
Affiliation(s)
- Qianjin Wang
- School of Computer Science and Engineering, Northeastern University, Shenyang, China
- Lisheng Xu
- College of Medicine and Biological and Information Engineering, Northeastern University, Shenyang, China
- Lu Wang
- School of Computer Science and Engineering, Northeastern University, Shenyang, China
- Xiaofan Yang
- School of Computer Science and Engineering, Northeastern University, Shenyang, China
- Yu Sun
- College of Medicine and Biological and Information Engineering, Northeastern University, Shenyang, China
- Department of Radiology, General Hospital of Northern Theater Command, Shenyang, China
- Key Laboratory of Cardiovascular Imaging and Research of Liaoning Province, Shenyang, China
- Benqiang Yang
- Department of Radiology, General Hospital of Northern Theater Command, Shenyang, China
- Key Laboratory of Cardiovascular Imaging and Research of Liaoning Province, Shenyang, China
- Stephen E. Greenwald
- Blizard Institute, Barts and the London School of Medicine and Dentistry, Queen Mary University of London, London, United Kingdom
7. Li L, Ma J, Sun D, Tian Z, Cao L, Su P. Amp-vortex edge-camera: a lensless multi-modality imaging system with edge enhancement. Opt Express. 2023;31:22519-22531. PMID: 37475361; DOI: 10.1364/oe.491380.
Abstract
We demonstrate a lensless imaging system with edge-enhanced imaging, constructed with a Fresnel zone aperture (FZA) mask placed 3 mm away from a CMOS sensor. We propose vortex back-propagation (vortex-BP) and amplitude vortex-BP algorithms for the FZA-based lensless imaging system to remove noise and achieve fast reconstruction with high-contrast edge enhancement. Directionally controlled anisotropic edge enhancement can be achieved with our proposed superimposed vortex-BP algorithm. With different reconstruction algorithms, the proposed amp-vortex edge-camera can achieve 2D bright-field imaging as well as isotropic and directionally controllable anisotropic edge-enhanced imaging under incoherent illumination, from a single-shot captured hologram. The effect is the same as optical edge detection: a re-distribution of light energy. Noise-free, in-focus edge detection can be achieved by back-propagation without a de-noising algorithm, an advantage over other lensless imaging technologies. This is expected to be widely used in autonomous driving, artificial intelligence recognition in consumer electronics, and related fields.
8. Retinal image blood vessel classification using hybrid deep learning in cataract diseased fundus images. Biomed Signal Process Control. 2023. DOI: 10.1016/j.bspc.2023.104776.
9. Rayavel P, Murukesh C. Comparative analysis of deep learning classifiers for diabetic retinopathy identification and detection. Imaging Sci J. 2023. DOI: 10.1080/13682199.2023.2168851.
Affiliation(s)
- P. Rayavel
- Department of Computer Science and Engineering (Cybersecurity), Sri Sairam Institute of Technology, Chennai, Tamil Nadu, India
- C. Murukesh
- Department of Electronics and Communication Engineering, Velammal Engineering College, Chennai, Tamil Nadu, India
10. Susheel Kumar K, Pratap Singh N. Identification of retinal diseases based on retinal blood vessel segmentation using Dagum PDF and feature-based machine learning. Imaging Sci J. 2023. DOI: 10.1080/13682199.2023.2183319.
Affiliation(s)
- K. Susheel Kumar
- Department of Computer Science and Engineering, National Institute of Technology, Hamirpur, India
- Department of Computer Science and Engineering, Gandhi Institute of Technology and Management, Bengaluru, India
- Nagendra Pratap Singh
- Department of Computer Science and Engineering, National Institute of Technology, Hamirpur, India
11. Spatial-contextual variational autoencoder with attention correction for anomaly detection in retinal OCT images. Comput Biol Med. 2023;152:106328. PMID: 36462369; DOI: 10.1016/j.compbiomed.2022.106328.
Abstract
Anomaly detection refers to leveraging only normal data to train a model for identifying unseen abnormal cases, and it is extensively studied in various fields. Most previous methods are based on reconstruction models and use an anomaly score calculated from the reconstruction error as the detection metric. However, these methods employ only a single constraint on the latent space, which limits their detection performance. To address this problem, we propose a Spatial-Contextual Variational Autoencoder with Attention Correction for anomaly detection in retinal OCT images. Specifically, we first propose a self-supervised segmentation network to extract retinal regions, which effectively eliminates interference from background regions. Next, by introducing both multi-dimensional and one-dimensional latent spaces, the proposed framework can learn the spatial and contextual manifolds of normal images, which helps enlarge the difference between the reconstruction errors of normal images and those of abnormal ones. Furthermore, an ablation-based method is proposed to localize anomalous regions by computing the importance of feature maps, and this is used to correct the anomaly score calculated from the reconstruction error. Finally, a novel anomaly score is constructed to separate abnormal images from normal ones. Extensive experiments on two retinal OCT datasets demonstrate the effectiveness of our approach.
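The paper's dual-latent VAE with attention correction is considerably richer, but the core idea, scoring anomalies by how poorly a model trained only on normal data reconstructs them, can be sketched with a linear reconstruction (PCA) on toy data. All names and data here are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
# "Normal" training data lie near a 1-D subspace of 3-D space
normal = (rng.normal(size=(500, 1)) @ np.array([[1.0, 2.0, 0.5]])
          + 0.01 * rng.normal(size=(500, 3)))

mean = normal.mean(axis=0)
# Principal directions via SVD of the centered normal data
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
basis = vt[:1]  # keep only the dominant component

def anomaly_score(x):
    """Reconstruction error: distance from x to the learned normal manifold."""
    z = (x - mean) @ basis.T       # encode into the latent subspace
    recon = z @ basis + mean       # decode back to input space
    return float(np.linalg.norm(x - recon))

on_manifold = np.array([2.0, 4.0, 1.0])    # consistent with normal structure
off_manifold = np.array([2.0, -4.0, 8.0])  # violates it
print(anomaly_score(on_manifold) < anomaly_score(off_manifold))  # True
```

A VAE replaces the linear encode/decode pair with learned non-linear networks, but the thresholding of reconstruction-based scores works the same way.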
12. Madhu C, M.S. S. Adaptive Bezier Curve-based Membership Function formulation scheme for interpretable edge detection. Appl Soft Comput. 2022. DOI: 10.1016/j.asoc.2022.109968.
13. Dubey S, Dixit M. Recent developments on computer aided systems for diagnosis of diabetic retinopathy: a review. Multimed Tools Appl. 2022;82:14471-14525. PMID: 36185322; PMCID: PMC9510498; DOI: 10.1007/s11042-022-13841-9.
Abstract
Diabetes is a long-term condition in which the pancreas stops producing insulin or the body cannot use its insulin properly. One of its complications is diabetic retinopathy (DR), among the most prevalent consequences of diabetes; if it remains unaddressed, it can affect any diabetic patient and become very serious, raising the chances of blindness. It is a chronic systemic condition that affects up to 80% of patients who have had diabetes for more than ten years. Many researchers believe that if diabetic individuals are diagnosed early enough, the condition can be averted in 90% of cases. Diabetes damages the capillaries, the microscopic blood vessels in the retina, and this vessel damage is usually visible in images. Therefore, in this study, several traditional as well as deep learning-based approaches for the classification and detection of diabetic retinopathy are reviewed, and the advantages of one approach over another are described. The datasets and evaluation metrics useful for DR detection and classification are also discussed. The main aim of this study is to make researchers aware of the challenges that occur while detecting diabetic retinopathy using computer vision and deep learning techniques. This review therefore sums up the major aspects of DR detection: lesion identification, classification and segmentation, security attacks on deep learning models, and the proper categorization of datasets and evaluation metrics. As deep learning models are quite expensive and prone to security attacks, future work should develop refined, reliable, and robust models that address these common weaknesses.
Affiliation(s)
- Shradha Dubey
- Madhav Institute of Technology & Science (Department of Computer Science and Engineering), Gwalior, M.P., India
- Manish Dixit
- Madhav Institute of Technology & Science (Department of Computer Science and Engineering), Gwalior, M.P., India
14. Khandouzi A, Ariafar A, Mashayekhpour Z, Pazira M, Baleghi Y. Retinal Vessel Segmentation, a Review of Classic and Deep Methods. Ann Biomed Eng. 2022;50:1292-1314. PMID: 36008569; DOI: 10.1007/s10439-022-03058-0.
Abstract
Retinal illnesses such as diabetic retinopathy (DR) are among the main causes of vision loss. In the early recognition of eye diseases, the segmentation of blood vessels in retina images plays an important role, since different symptoms of ocular diseases can be identified from the geometric features of the ocular arteries. However, due to the complex structure of the blood vessels and their varying thicknesses, segmenting the retina image is a challenging task. A number of algorithms have been proposed to aid the detection of retinal diseases. This paper presents an overview of papers from 2016 to 2022 that discuss machine learning and deep learning methods for automatic vessel segmentation. The methods are divided into two groups: deep learning-based methods and classic methods. The algorithms, classifiers, pre-processing, and specific techniques of each group are described comprehensively. The performances of recent works are compared, based on their achieved accuracy on different datasets, in detailed tables. A survey of the most popular datasets, such as DRIVE, STARE, HRF, and CHASE_DB1, is also given. Finally, a list of findings from this review is presented in the conclusion section.
Affiliation(s)
- Ali Khandouzi
- Faculty of Electrical and Computer Engineering, Babol Noshirvani University of Technology, Babol, Iran
- Ali Ariafar
- Faculty of Electrical and Computer Engineering, Babol Noshirvani University of Technology, Babol, Iran
- Zahra Mashayekhpour
- Faculty of Electrical and Computer Engineering, Babol Noshirvani University of Technology, Babol, Iran
- Milad Pazira
- Faculty of Electrical and Computer Engineering, Babol Noshirvani University of Technology, Babol, Iran
- Yasser Baleghi
- Faculty of Electrical and Computer Engineering, Babol Noshirvani University of Technology, Babol, Iran
15. Mahapatra S, Agrawal S, Mishro PK, Pachori RB. A novel framework for retinal vessel segmentation using optimal improved Frangi filter and adaptive weighted spatial FCM. Comput Biol Med. 2022;147:105770. DOI: 10.1016/j.compbiomed.2022.105770.
16. Liu S, Fan J, Ai D, Song H, Fu T, Wang Y, Yang J. Feature matching for texture-less endoscopy images via superpixel vector field consistency. Biomed Opt Express. 2022;13:2247-2265. PMID: 35519251; PMCID: PMC9045917; DOI: 10.1364/boe.450259.
Abstract
Feature matching is an important technology for obtaining the surface morphology of soft tissues in intraoperative endoscopy images. Extracting features from clinical endoscopy images is a difficult problem, especially for texture-less images, and the reduction of surface detail makes it more challenging still. We propose an adaptive gradient-preserving method to improve the visual features of texture-less images. For feature matching, we first construct a spatial motion field using superpixel blocks and estimate its information entropy with a motion-consistency algorithm to obtain an initial screening of outlier features. Second, we extend the superpixel spatial motion field to a vector field and constrain it with vector features to optimize the confidence of the initial matching set. Evaluations were carried out on public and undisclosed datasets. Our method increased the number of extracted feature points by an order of magnitude over the original images for all three feature point extraction methods tested. On the public dataset, accuracy and F1-score increased to 92.6% and 91.5%, and the matching score improved by 1.92%. On the undisclosed dataset, the integrity of the reconstructed surface improved from 30% to 85%. We also present surface reconstruction results for images of different sizes to validate the robustness of our method, which shows high-quality feature matching. Overall, the experimental results prove the effectiveness of the proposed matching method and demonstrate its capability to extract sufficient visual feature points and generate reliable feature matches for 3D reconstruction and meaningful clinical applications.
Affiliation(s)
- Shiyuan Liu
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Jingfan Fan
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Danni Ai
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Hong Song
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, 100081, China
- Tianyu Fu
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Yongtian Wang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Jian Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
17
|
Ma Z, Zhang M, Liu J, Yang A, Li H, Wang J, Hua D, Li M. An Assisted Diagnosis Model for Cancer Patients Based on Federated Learning. Front Oncol 2022; 12:860532. [PMID: 35311106 PMCID: PMC8928102 DOI: 10.3389/fonc.2022.860532] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2022] [Accepted: 02/08/2022] [Indexed: 12/24/2022] Open
Abstract
Since the 20th century, cancer has been a growing threat to human health. Cancer is a malignant tumor with high clinical morbidity and mortality, and there is a high risk of recurrence after surgery. At the same time, diagnosing whether a cancer has recurred in situ is crucial for further treatment. According to statistics, about 90% of cancer-related deaths are due to metastasis of primary tumor cells. Therefore, studying the location of cancer recurrence and its influencing factors is of great significance for the clinical diagnosis and treatment of cancer. In this paper, we propose an assisted diagnosis model for cancer patients based on federated learning. In terms of data, the influencing factors of cancer recurrence and the special requirements that federated learning places on data samples were considered comprehensively: six first-level impact indicators were determined, and historical case data of cancer patients were collected. Based on a federated learning framework combined with a convolutional neural network, the patients' physical examination indicators were taken as input and their recurrence time and recurrence location as output to construct the auxiliary diagnostic model; linear regression, support vector regression, Bayesian regression, gradient boosting trees, and multilayer perceptron neural networks were used as comparison algorithms.
The federated prediction model based on the improved CNN achieved an accuracy of more than 90% under joint modeling and simulation on the five types of cancer data, outperforming the individually trained machine learning tree models, linear models, and neural networks. The results show that the proposed assisted diagnosis model can help doctors diagnose cancer patients and effectively provide nutritional programs for them, has application value in prolonging patients' lives, and offers guidance in the field of medical cancer rehabilitation.
Collapse
Affiliation(s)
- Zezhong Ma
- Hebei Engineering Research Center for the Intelligentization of Iron Ore Optimization and Ironmaking Raw Materials Preparation Processes, North China University of Science and Technology, Tangshan, China.,Hebei Key Laboratory of Data Science and Application, North China University of Science and Technology, Tangshan, China.,The Key Laboratory of Engineering Computing in Tangshan City, North China University of Science and Technology, Tangshan, China.,College of Science, North China University of Science and Technology, Tangshan, China
| | - Meng Zhang
- The Key Laboratory of Engineering Computing in Tangshan City, North China University of Science and Technology, Tangshan, China.,Tangshan Intelligent Industry and Image Processing Technology Innovation Center, North China University of Science and Technology, Tangshan, China
| | - Jiajia Liu
- College of Science, North China University of Science and Technology, Tangshan, China
| | - Aimin Yang
- Hebei Engineering Research Center for the Intelligentization of Iron Ore Optimization and Ironmaking Raw Materials Preparation Processes, North China University of Science and Technology, Tangshan, China.,Hebei Key Laboratory of Data Science and Application, North China University of Science and Technology, Tangshan, China.,The Key Laboratory of Engineering Computing in Tangshan City, North China University of Science and Technology, Tangshan, China.,College of Science, North China University of Science and Technology, Tangshan, China.,Tangshan Intelligent Industry and Image Processing Technology Innovation Center, North China University of Science and Technology, Tangshan, China
| | - Hao Li
- The Key Laboratory of Engineering Computing in Tangshan City, North China University of Science and Technology, Tangshan, China.,Tangshan Intelligent Industry and Image Processing Technology Innovation Center, North China University of Science and Technology, Tangshan, China
| | - Jian Wang
- The Key Laboratory of Engineering Computing in Tangshan City, North China University of Science and Technology, Tangshan, China.,Tangshan Intelligent Industry and Image Processing Technology Innovation Center, North China University of Science and Technology, Tangshan, China
| | - Dianbo Hua
- Beijing Sitairui Cancer Data Analysis Joint Laboratory, Beijing, China
| | - Mingduo Li
- State Key Laboratory of Process Automation in Mining and Metallurgy, Beijing, China.,Beijing Key Laboratory of Process Automation in Mining and Metallurgy, Beijing, China
| |
Collapse
|
18
|
Bhatia S, Alam S, Shuaib M, Hameed Alhameed M, Jeribi F, Alsuwailem RI. Retinal Vessel Extraction via Assisted Multi-Channel Feature Map and U-Net. Front Public Health 2022; 10:858327. [PMID: 35372222 PMCID: PMC8968759 DOI: 10.3389/fpubh.2022.858327] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2022] [Accepted: 02/04/2022] [Indexed: 11/13/2022] Open
Abstract
Early detection of vessels from fundus images can effectively prevent the permanent retinal damage caused by retinopathies such as glaucoma, hypertension, and diabetes. Because both retinal vessels and the background are red, and because of the vessels' morphological variations, current vessel detection methodologies fail to segment thin vessels and to discriminate them in the regions where permanent retinopathies mainly occur. This research suggests a novel approach that combines the benefits of traditional template-matching methods with recent deep learning (DL) solutions. The two are combined by using the response of a Cauchy matched filter to replace the noisy red channel of the fundus images. A U-shaped fully convolutional neural network (U-Net) is then trained end-to-end to segment pixels into vessel and background classes. Each preprocessed image is divided into several patches to provide enough training images and to speed up training on each instance. The public DRIVE database was used to test the proposed method, and metrics such as accuracy, precision, sensitivity, and specificity were measured for evaluation. The evaluation indicates that the average extraction accuracy of the proposed model is 0.9640 on the employed dataset.
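A minimal sketch of the matched-filter idea (the kernel shape and parameters here are assumptions, not the paper's values): a zero-mean Cauchy-profile kernel gives its strongest response where a row's cross-section looks like a dark vessel on a bright background, because the flat background cancels to zero.

```python
def cauchy_kernel(half_width=3, gamma=1.5):
    """1D zero-mean Cauchy-shaped profile: 1 / (1 + (x/gamma)^2),
    shifted so a flat background produces zero response."""
    raw = [1.0 / (1.0 + (x / gamma) ** 2)
           for x in range(-half_width, half_width + 1)]
    mean = sum(raw) / len(raw)
    return [v - mean for v in raw]

def filter_row(row, kernel):
    """Valid-mode 1D correlation of one image row with the kernel."""
    k = len(kernel)
    return [sum(row[i + j] * kernel[j] for j in range(k))
            for i in range(len(row) - k + 1)]

# Bright background (200) with a dark 1-pixel "vessel" (40) in the middle.
row = [200] * 8 + [40] + [200] * 8
kernel = [-v for v in cauchy_kernel()]  # negate: vessels are darker than background
response = filter_row(row, kernel)
print(response.index(max(response)))  # → 5 (window centred on the vessel pixel)
```

The maximum response lands on the window centred at the vessel, which is what makes the filter output a useful substitute for the noisy red channel.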
Collapse
Affiliation(s)
- Surbhi Bhatia
- Department of Information Systems, College of Computer Sciences and Information Technology, King Faisal University, Hofuf, Saudi Arabia
- *Correspondence: Surbhi Bhatia
| | - Shadab Alam
- College of Computer Science and Information Technology, Jazan University, Jazan, Saudi Arabia
| | - Mohammed Shuaib
- College of Computer Science and Information Technology, Jazan University, Jazan, Saudi Arabia
| | | | - Fathe Jeribi
- College of Computer Science and Information Technology, Jazan University, Jazan, Saudi Arabia
| | - Razan Ibrahim Alsuwailem
- Department of Information Systems, College of Computer Sciences and Information Technology, King Faisal University, Hofuf, Saudi Arabia
| |
Collapse
|
19
|
Azimirad E. Design of an optimized fuzzy system for edge detection in images. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2022. [DOI: 10.3233/jifs-213008] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Edge detection is the first stage in dividing an image into separate parts, i.e., partitioning a digital image into different zones or sets of pixels, and it is one of the most frequently applied techniques in digital image processing. The purpose of edge-pixel detection is to locate the edges in the image. Edge detection involves three steps: filtering, enhancement, and detection. Images are usually corrupted by random changes in intensity called noise; common noise types include salt-and-pepper, impulse, and Gaussian noise. However, there is a trade-off between edge detection power and noise reduction: applying filters for noise reduction weakens edge detection. To facilitate edge detection, it is essential to determine the intensity constraints of pixels in their neighborhood. Many points in an image have a noticeable gradient, yet not all of them are edges, so linear and nonlinear methods such as Sobel, Prewitt, and Roberts must be used to determine the edge points. Fuzzy logic, and systems based on it, is one of the most effective approaches to edge detection. This paper presents an optimized rule-based fuzzy inference system and designs an efficient mask matrix. Simulation results for edge detection are presented using the traditional techniques, including the binary, Sobel, Prewitt, and Roberts filters, as well as using the fuzzy approach. The simulations show that the designed fuzzy system detects image edges more accurately and helps increase edge sharpness and quality. The proposed method therefore produces more accurate and reliable results and reduces false edge detection compared to the traditional methods.
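To make the gradient-plus-fuzzy pipeline concrete, here is a minimal sketch (not the paper's system): a Sobel gradient magnitude followed by a simple fuzzy membership function for "edgeness". The ramp thresholds are illustrative assumptions.

```python
def sobel_magnitude(img):
    """Gradient magnitude of a 2D grayscale grid via Sobel operators."""
    h, w = len(img), len(img[0])
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(gy_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

def edge_membership(mag, low=40.0, high=160.0):
    """Fuzzy 'edgeness': 0 below `low`, 1 above `high`, linear ramp between."""
    if mag <= low:
        return 0.0
    if mag >= high:
        return 1.0
    return (mag - low) / (high - low)

# 5x5 image: dark left half, bright right half -> vertical step edge.
img = [[0, 0, 0, 255, 255] for _ in range(5)]
mags = sobel_magnitude(img)
edges = [[edge_membership(m) for m in row] for row in mags]
print(edges[2])  # → [0.0, 0.0, 1.0, 1.0, 0.0]
```

A full fuzzy inference system would add rules combining several such memberships (e.g. gradient direction, local variance); this sketch shows only the fuzzification step.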
Collapse
Affiliation(s)
- Ehsan Azimirad
- Electrical Engineering Department, University of Torbat Heydarieh, Torbat Heydarieh, Iran
| |
Collapse
|
20
|
Deng X, Ye J. A retinal blood vessel segmentation based on improved D-MNet and pulse-coupled neural network. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103467] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
|
21
|
Deep CNN with Hybrid Binary Local Search and Particle Swarm Optimizer for Exudates Classification from Fundus Images. J Digit Imaging 2022; 35:56-67. [PMID: 34997375 PMCID: PMC8854611 DOI: 10.1007/s10278-021-00534-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2020] [Revised: 10/04/2021] [Accepted: 11/03/2021] [Indexed: 02/03/2023] Open
Abstract
Diabetic retinopathy is a chronic condition that causes vision loss if not detected early. In its early stage, it can be diagnosed with the aid of exudates, which are lesions. However, it is arduous to detect exudate lesions due to the presence of blood vessels and other distractions. To tackle these issues, we proposed a novel method for exudate classification from fundus images: a hybrid convolutional neural network (CNN) with a binary local search optimizer-based particle swarm optimization algorithm. The proposed method exploits image augmentation to enlarge the fundus image to the required size without losing any features. Features extracted from the resized fundus images form a feature vector that is fed into the feed-forward CNN, which then classifies the exudates. Further, the hyperparameters are optimized to reduce computational complexity by utilizing the binary local search optimizer (BLSO) and particle swarm optimization (PSO). The experimental analysis was conducted on the public ROC and real-time ARA400 datasets and compared on standard performance metrics with state-of-the-art works such as support vector machine classifiers, multi-modal/multi-scale methods, random forest, and CNN. The classification accuracy of the proposed work is high, and thus it outperforms all the other approaches.
Collapse
|
22
|
Xu X, Shen Y, Lin L, Lin L, Li B. Multi-step deep neural network for identifying subfascial vessels in a dorsal skinfold window chamber model. BIOMEDICAL OPTICS EXPRESS 2022; 13:426-437. [PMID: 35154882 PMCID: PMC8803012 DOI: 10.1364/boe.446214] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/18/2021] [Revised: 12/14/2021] [Accepted: 12/15/2021] [Indexed: 06/14/2023]
Abstract
Automatic segmentation of blood vessels in the dorsal skinfold window chamber (DSWC) model is a prerequisite for evaluating the biological response to vascular-targeted photodynamic therapy (V-PDT). Recently, deep learning methods have been widely applied to blood vessel segmentation, but they have difficulty precisely identifying subfascial vessels. This study proposed a multi-step deep neural network, named the global attention-Xnet (GA-Xnet) model, to precisely segment subfascial vessels in the DSWC model. We first used the Hough transform combined with a U-Net model to extract circular regions of interest for image processing. A GA step was then employed to obtain global feature learning, followed by coarse segmentation of the entire blood vessel image. Secondly, the coarse blood vessel segmentations from the GA step and the same number of retinal images from the DRIVE dataset were combined as a mixed sample and input into the Xnet step to learn multiscale features for predicting fine segmentation maps of the blood vessels. The data show that the accuracy, sensitivity, and specificity for the segmentation of multiscale blood vessels in the DSWC model are 96.00%, 86.27%, and 96.47%, respectively. As a result, the subfascial vessels can be accurately identified, and the connectedness of the vessel skeleton is well preserved. These findings suggest that the proposed multi-step deep neural network helps evaluate short-term vascular responses in V-PDT.
Collapse
Affiliation(s)
- Xuelin Xu
- MOE Key Laboratory of OptoElectronic Science and Technology for Medicine, Fujian Provincial Key Laboratory for Photonics Technology, Fujian Normal University, Fuzhou, 350117, China
- School of Information Science and Engineering, Fujian University of Technology, Fuzhou, 350007, China
| | - Yi Shen
- MOE Key Laboratory of OptoElectronic Science and Technology for Medicine, Fujian Provincial Key Laboratory for Photonics Technology, Fujian Normal University, Fuzhou, 350117, China
| | - Li Lin
- MOE Key Laboratory of OptoElectronic Science and Technology for Medicine, Fujian Provincial Key Laboratory for Photonics Technology, Fujian Normal University, Fuzhou, 350117, China
| | - Lisheng Lin
- MOE Key Laboratory of OptoElectronic Science and Technology for Medicine, Fujian Provincial Key Laboratory for Photonics Technology, Fujian Normal University, Fuzhou, 350117, China
| | - Buhong Li
- MOE Key Laboratory of OptoElectronic Science and Technology for Medicine, Fujian Provincial Key Laboratory for Photonics Technology, Fujian Normal University, Fuzhou, 350117, China
| |
Collapse
|
23
|
A Fuzzy Rule-Based System for Classification of Diabetes. SENSORS 2021; 21:s21238095. [PMID: 34884099 PMCID: PMC8659829 DOI: 10.3390/s21238095] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/11/2021] [Revised: 11/27/2021] [Accepted: 11/28/2021] [Indexed: 12/26/2022]
Abstract
Diabetes is a fatal disease that currently has no cure. However, early diagnosis of diabetes helps patients start timely treatment and thus reduces or eliminates the risk of severe complications. The prevalence of diabetes has been rising rapidly worldwide. Several methods have been introduced to diagnose diabetes at an early stage; however, most of them lack interpretability, so the diagnostic process cannot be explained. In this paper, fuzzy logic is employed to develop an interpretable model for the early diagnosis of diabetes. Fuzzy logic is combined with the cosine amplitude method, and two fuzzy classifiers are constructed. Fuzzy rules are then designed based on these classifiers. Lastly, a publicly available diabetes dataset is used to evaluate the performance of the proposed fuzzy rule-based model. The results show that the proposed model outperforms existing techniques by achieving an accuracy of 96.47%. This prediction accuracy suggests that the model can be utilized in the healthcare sector for the accurate diagnosis of diabetes.
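The cosine amplitude method mentioned above can be illustrated with a small sketch. The prototype vectors and feature names below are invented for illustration and are not taken from the paper.

```python
import math

def cosine_amplitude(x, y):
    """Cosine amplitude similarity r between two feature vectors:
    r = (sum x_k*y_k) / sqrt((sum x_k^2) * (sum y_k^2)),
    which lies in [0, 1] for non-negative features."""
    num = sum(a * b for a, b in zip(x, y))
    den = math.sqrt(sum(a * a for a in x) * sum(b * b for b in y))
    return num / den if den else 0.0

def classify(sample, prototypes):
    """Assign a sample to the class whose prototype it most resembles."""
    return max(prototypes,
               key=lambda label: cosine_amplitude(sample, prototypes[label]))

# Hypothetical normalized class prototypes over (glucose, BMI, age).
prototypes = {
    "diabetic": [0.9, 0.7, 0.6],
    "non_diabetic": [0.4, 0.5, 0.5],
}
print(classify([0.85, 0.75, 0.55], prototypes))  # → diabetic
```

In a full fuzzy rule-based system, these similarity values would feed rule antecedents rather than a direct nearest-prototype decision; the sketch shows only the similarity measure.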
Collapse
|
24
|
Optimization of Personnel Placement Scheme and Big Data Analysis Based on Multilayer Variable Neural Network Algorithm. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2021; 2021:3250062. [PMID: 34707649 PMCID: PMC8545588 DOI: 10.1155/2021/3250062] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/27/2021] [Revised: 09/22/2021] [Accepted: 09/29/2021] [Indexed: 12/01/2022]
Abstract
People usually use job analysis to understand the personnel requirements of each position and psychological measurement to understand the psychological characteristics of each person, and then place personnel in appropriate positions by matching the two. With the development of the information age, massive and complex data are produced, and accurately extracting the effective data an industry needs from big data is a very arduous task. In reality, personnel data are influenced by many factors, and the time series they form are largely accidental and random, often with multilevel and multiscale characteristics. How to use algorithms and data processing technology to effectively mine the rules contained in personnel information data and derive a personnel placement scheme has therefore become an important issue. In this paper, a multilayer variable neural network model for complex big-data feature learning is established to optimize the staffing scheme. At the same time, the learning model is extended from vector space to tensor space, and the parameters of the neural network are inverted by a high-order backpropagation algorithm over the tensor space. Compared with the traditional multilayer neural network model based on tensor space, the multimodal neural network model can learn the characteristics of complex data quickly and accurately and has obvious advantages.
Collapse
|
25
|
Zhang L, Zhong Q, Yu Z. Optimization of Tumor Disease Monitoring in Medical Big Data Environment Based on High-Order Simulated Annealing Neural Network Algorithm. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2021; 2021:8996673. [PMID: 34712319 PMCID: PMC8548164 DOI: 10.1155/2021/8996673] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/02/2021] [Revised: 08/25/2021] [Accepted: 09/01/2021] [Indexed: 12/17/2022]
Abstract
With the development of medical informatization, data related to the medical field are growing at an amazing speed, giving rise to medical big data. Mining and analyzing these data plays an important role in the prediction, monitoring, diagnosis, and treatment of tumor diseases. Therefore, this paper proposes a clustering algorithm based on a high-order simulated annealing neural network and uses it to mine big data related to tumor diseases. A training set is constructed from the mined information, and a dimension reduction model is designed; to address the problem of excessive and erroneous diagnosis and treatment in the diagnosis-and-treatment module of the tumor disease monitoring mode, a corresponding control mechanism is established to optimize the monitoring mode. The results show that the clustering accuracy of the high-order simulated annealing neural network algorithm on different datasets (Iris, Wine, and Pima Indians Diabetes) is 97.33%, 82.11%, and 70.56%, with execution times of 0.75 s, 0.562 s, and 1.092 s, which are better than those of the fast k-medoids algorithm and the improved k-medoids clustering algorithm. In summary, the high-order simulated annealing neural network algorithm achieves a good clustering effect in medical big data mining. The establishment of model M1 can reduce the probability of excessive and wrong medical treatment and improve the effectiveness of the diagnosis-and-treatment module's monitoring in the tumor disease monitoring mode.
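The abstract gives no algorithmic details, but the simulated annealing component of such a clustering scheme can be sketched generically, e.g. annealing over medoid choices for a toy 1D clustering problem. Everything below is an illustrative assumption, not the authors' algorithm.

```python
import math
import random

def cost(data, medoids):
    """Total distance of each point to its nearest medoid (1D toy data)."""
    return sum(min(abs(p - data[m]) for m in medoids) for p in data)

def anneal_medoids(data, k, temp=10.0, cooling=0.95, steps=300, seed=0):
    """Simulated annealing over medoid index sets: random swap moves,
    always accept improvements, accept worse moves with prob exp(-d/T)."""
    rng = random.Random(seed)
    current = rng.sample(range(len(data)), k)
    best = list(current)
    for _ in range(steps):
        cand = list(current)
        cand[rng.randrange(k)] = rng.randrange(len(data))
        delta = cost(data, cand) - cost(data, current)
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            current = cand
            if cost(data, current) < cost(data, best):
                best = list(current)
        temp *= cooling  # cooling schedule: gradually stop accepting bad moves
    return sorted(data[m] for m in best)

data = [1.0, 1.2, 0.9, 10.0, 10.3, 9.8]  # two obvious clusters
print(anneal_medoids(data, k=2))
```

The annealing acceptance rule lets the search escape poor initial medoid choices early on, then behaves like greedy local search as the temperature decays.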
Collapse
Affiliation(s)
- Lei Zhang
- Department of Breast Surgery, The First Affiliated Hospital of China Medical University, Shenyang, Liaoning 110001, China
| | - Qixiang Zhong
- Department of Thoracic Surgery, The First Affiliated Hospital of China Medical University, Shenyang, Liaoning 110001, China
| | - Zhenglun Yu
- Department of Thoracic Surgery, The First Affiliated Hospital of China Medical University, Shenyang, Liaoning 110001, China
| |
Collapse
|
26
|
Yang Y, Hou X, Ren H. Accurate and efficient image segmentation and bias correction model based on entropy function and level sets. Inf Sci (N Y) 2021. [DOI: 10.1016/j.ins.2021.07.069] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
|
27
|
Xue M. Research on Information Visualization Graphic Design Teaching Based on DBN Algorithm. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2021; 2021:3355030. [PMID: 34621307 PMCID: PMC8492239 DOI: 10.1155/2021/3355030] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/15/2021] [Revised: 08/30/2021] [Accepted: 09/01/2021] [Indexed: 11/17/2022]
Abstract
With the advent of the era of big data, quickly obtaining effective information and disseminating it efficiently has become a popular topic. Studies have shown that the human brain's ability to process data and information is unmatched by machines, and that it processes graphics tens of thousands of times faster than text. Based on the deep belief network (DBN) algorithm, this paper studies the application of information visualization technology to graphic design teaching. Firstly, the structure of the deep belief network is analysed to explore its application to graphic information reconstruction; it is concluded that the DBN algorithm can handle classification, regression, dimension calculation, feature point acquisition, accuracy calculation, and related problems in machine learning training. Then, local deformation in graphic design is studied on the basis of the DBN algorithm to construct a visual teaching platform, and the algorithm's results in information graphic design are analysed. The results show that the DBN algorithm can quickly process complex features in graphics, apply local deformation to the original graphics to form new feature point data to add to the teaching platform, and improve the model's speed of learning and training, optimizing the operating efficiency of the teaching platform.
Collapse
Affiliation(s)
- Manjun Xue
- School of Architectural and Artistic Design, Henan Polytechnic University, Jiaozuo 454000, China
| |
Collapse
|
28
|
A Low Redundancy Wavelet Entropy Edge Detection Algorithm. J Imaging 2021; 7:jimaging7090188. [PMID: 34564114 PMCID: PMC8465474 DOI: 10.3390/jimaging7090188] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2021] [Revised: 09/13/2021] [Accepted: 09/13/2021] [Indexed: 11/17/2022] Open
Abstract
Fast edge detection can be useful for many real-world applications. Edge detection is rarely an end in itself but is often the first step of a computer vision application, so fast and simple edge detection techniques are important for efficient image processing. In this work, we propose a new edge detection algorithm that combines the wavelet transform, Shannon entropy, and thresholding. The algorithm is based on the idea that each wavelet decomposition level carries an assumed amount of structure, which enables the use of Shannon entropy as a measure of global image structure. The proposed algorithm is developed mathematically and compared to five popular edge detection algorithms. The results show that our solution has low redundancy, is noise-resilient, and is well suited to real-time image processing applications.
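A minimal 1D sketch of the idea, assuming a single-level Haar decomposition (the paper's wavelet and exact thresholding rule are not stated here): detail coefficients mark candidate edges, and the Shannon entropy of their energy distribution measures how structured the signal is.

```python
import math

def haar_1d(signal):
    """One level of the 1D Haar transform: (approximation, detail)."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def shannon_entropy(values, eps=1e-12):
    """Entropy of the normalized energy distribution of coefficients:
    0 when the energy is concentrated (strong structure), higher when spread."""
    energy = [v * v for v in values]
    total = sum(energy) + eps
    probs = [e / total for e in energy if e > 0]
    return -sum(p * math.log2(p) for p in probs)

def edges_from_details(signal, keep_ratio=0.5):
    """Mark an edge wherever |detail| exceeds keep_ratio * max |detail|."""
    _, detail = haar_1d(signal)
    peak = max(abs(d) for d in detail)
    return [i * 2 for i, d in enumerate(detail) if abs(d) >= keep_ratio * peak]

step = [0] * 7 + [100] * 9       # one step edge between indices 6 and 7
_, detail = haar_1d(step)
print(edges_from_details(step))  # → [6]
print(shannon_entropy(detail))   # near 0: all detail energy at the edge
```

On a 2D image the same logic applies per decomposition level, with the entropy measured over the detail sub-bands.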
Collapse
|
29
|
Automatic Detection Method of Technical and Tactical Indicators for Table Tennis Based on Trajectory Prediction Using Compensation Fuzzy Neural Network. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2021; 2021:3155357. [PMID: 34484318 PMCID: PMC8410409 DOI: 10.1155/2021/3155357] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/16/2021] [Revised: 08/14/2021] [Accepted: 08/17/2021] [Indexed: 11/30/2022]
Abstract
In the design of a table tennis robot system, the important factors affecting the automatic detection of technical and tactical indicators are the ball's rotation state, trajectory, and rebound force, but general prediction algorithms cannot process the time series data and infer the corresponding rotation state. Therefore, this paper studies an automatic detection method for table tennis technical and tactical indicators based on trajectory prediction using a compensation fuzzy neural network. The compensation fuzzy neural network, which combines a compensation fuzzy algorithm with a recurrent neural network, is selected to construct the automatic detection model. The experimental results show that the compensation fuzzy neural network converges faster, shortening training time and improving prediction accuracy. In performance testing, the model can accurately distinguish the influence of the ball's rotation state and rebound on its motion estimation, thereby improving the accuracy of trajectory prediction. In addition, trajectory prediction accuracy improves as the amount of input data increases; when the number of data points reaches 30, the trajectory prediction error falls within the practically acceptable range.
Collapse
|
30
|
Liu J, Ren Y, Qin X. Study on 3D Clothing Color Application Based on Deep Learning-Enabled Macro-Micro Adversarial Network and Human Body Modeling. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2021; 2021:9918175. [PMID: 34539773 PMCID: PMC8443351 DOI: 10.1155/2021/9918175] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/10/2021] [Revised: 08/12/2021] [Accepted: 08/14/2021] [Indexed: 11/23/2022]
Abstract
In real life, people increasingly prefer convenience, and the convenience of online shopping has prompted more and more research on optimizing the shopping experience, of which virtual fitting systems are one product. However, because virtual fitting systems are still immature, many problems remain, such as unclear or inaccurate representation of clothing color. In view of this, this paper proposes a deep-learning-based 3D clothing color display model driven by human body modeling. Firstly, a macro-micro adversarial network (MMAN) based on deep learning is used to parse the original image, and the results are preprocessed; finally, a 3D model carrying the original image's colors is constructed using UV mapping. The experimental results show that the MMAN algorithm reaches an accuracy of 0.972, the constructed three-dimensional model is sufficiently vivid, the clothing color is rendered clearly with a color difference from the original image within 0.01, and the volunteers' subjective evaluations exceed 90 points. These results show that using deep learning to build a 3D model carrying the clothing colors of the original picture is effective and offers useful guidance for research on character modeling and simulation.
Collapse
Affiliation(s)
- Jingmiao Liu
- General Graduate School of Keimyung University South Korea, Daegu 42601, Republic of Korea
| | - Yu Ren
- General Graduate School of Keimyung University South Korea, Daegu 42601, Republic of Korea
- School of Design, Sichuan Fine Arts Institute, Chongqing 401331, China
| | - Xiaotong Qin
- School of Art, Yanching Institute of Technology, Sanhe 065201, China
| |
Collapse
|
31
|
Toptaş B, Hanbay D. Retinal blood vessel segmentation using pixel-based feature vector. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.103053] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/17/2023]
|
32
|
Ahsan MM, Nazim R, Siddique Z, Huebner P. Detection of COVID-19 Patients from CT Scan and Chest X-ray Data Using Modified MobileNetV2 and LIME. Healthcare (Basel) 2021; 9:1099. [PMID: 34574873 PMCID: PMC8465084 DOI: 10.3390/healthcare9091099] [Citation(s) in RCA: 32] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2021] [Revised: 08/18/2021] [Accepted: 08/20/2021] [Indexed: 11/30/2022] Open
Abstract
The COVID-19 global pandemic caused by the widespread transmission of the novel coronavirus (SARS-CoV-2) has become one of modern history's most challenging issues from a healthcare perspective. At its dawn, still without a vaccine, contagion containment strategies remained most effective in preventing the disease's spread. Patient isolation has been primarily driven by the results of polymerase chain reaction (PCR) testing, but its initial reach was challenged by low availability and high cost, especially in developing countries. As a means of taking advantage of a preexisting infrastructure for respiratory disease diagnosis, researchers have proposed COVID-19 patient screening based on the results of Chest Computerized Tomography (CT) and Chest Radiographs (X-ray). When paired with artificial-intelligence- and deep-learning-based approaches for analysis, early studies have achieved a comparatively high accuracy in diagnosing the disease. Considering the opportunity to further explore these methods, we implement six different Deep Convolutional Neural Networks (Deep CNN) models-VGG16, MobileNetV2, InceptionResNetV2, ResNet50, ResNet101, and VGG19-and use a mixed dataset of CT and X-ray images to classify COVID-19 patients. Preliminary results showed that a modified MobileNetV2 model performs best with an accuracy of 95 ± 1.12% (AUC = 0.816). Notably, a high performance was also observed for the VGG16 model, outperforming several previously proposed models with an accuracy of 98.5 ± 1.19% on the X-ray dataset. Our findings are supported by recent works in the academic literature, which also uphold the higher performance of MobileNetV2 when X-ray, CT, and their mixed datasets are considered. Lastly, we further explain the process of feature extraction using Local Interpretable Model-Agnostic Explanations (LIME), which contributes to a better understanding of what features in CT/X-ray images characterize the onset of COVID-19.
Affiliation(s)
- Md Manjurul Ahsan: Industrial and Systems Engineering, University of Oklahoma, Norman, OK 73019, USA
- Redwan Nazim: Chemical, Biological & Materials Engineering, University of Oklahoma, Norman, OK 73019, USA
- Zahed Siddique: School of Aerospace and Mechanical Engineering, University of Oklahoma, Norman, OK 73019, USA
- Pedro Huebner: Industrial and Systems Engineering, University of Oklahoma, Norman, OK 73019, USA
|
33
|
Mohamed Ben Ali Y. Flexible edge detection and its enhancement by smell bees optimization algorithm. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-05769-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
|
34
|
Novel Three-Dimensional Bladder Reconstruction Model from B-Mode Ultrasound Image to Improve the Accuracy of Bladder Volume Measurement. SENSORS 2021; 21:s21144893. [PMID: 34300632 PMCID: PMC8309711 DOI: 10.3390/s21144893] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/03/2021] [Revised: 07/13/2021] [Accepted: 07/16/2021] [Indexed: 11/16/2022]
Abstract
Traditional bladder volume measurement from B-mode (two-dimensional) ultrasound has been found to produce inaccurate results, and thus in this work we aim to improve the accuracy of measurement from B-mode ultrasound. A total of 75 electronic medical records including ultrasonic images from 64 patients were reviewed retrospectively. We put forward a novel bladder volume measurement method, in which a three-dimensional (3D) reconstruction model is established from conventional two-dimensional (2D) ultrasonic images to estimate the bladder volume. The differences and relationships were analyzed among the actual volume, the traditional estimated volume, and the volume estimated by the new reconstruction model. We also compared the data in different volume groups, from small to large volumes. The mean actual volume is 531.8 mL with a standard deviation of 268.7 mL; the mean percentage error of the traditional estimation is −28%. With our new measurement method, the mean percentage error is −10.18% (N = 2), −4.72% (N = 3), −0.33% (N = 4), and 2.58% (N = 5). There is no significant difference between the actual volume and the volume estimated by our new method (N = 4), either in all data or in the four volume groups. The volumes estimated by both the traditional method and our new method are highly correlated with the actual volume. Our data show that the three-dimensional bladder reconstruction model provides an accurate measurement from conventional B-mode ultrasonic images compared with the traditional method. The accuracy holds across the different volume groups, and thus we conclude that this is a reliable and economical volume measurement model that can be applied in general software or in apps on mobile devices.
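For context, the "traditional estimation" from 2D ultrasound commonly applies the ellipsoid formula; the abstract does not state the exact formula used, so this minimal Python sketch is an illustrative assumption, not the paper's method:

```python
import math

def ellipsoid_volume_ml(length_cm, width_cm, height_cm):
    """Conventional ellipsoid bladder-volume estimate: V = (pi / 6) * L * W * H.

    With dimensions in cm, the result is in mL (1 cm^3 = 1 mL).
    """
    return math.pi / 6.0 * length_cm * width_cm * height_cm

def percentage_error(estimated_ml, actual_ml):
    """Signed percentage error of an estimate against the true volume."""
    return 100.0 * (estimated_ml - actual_ml) / actual_ml

# Hypothetical example: a bladder measuring 12 x 10 x 9 cm
est = ellipsoid_volume_ml(12, 10, 9)
err = percentage_error(est, 531.8)
```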
|
35
|
Li J, Wang H, Wang L, Wei T, Wu M, Li T, Liao J, Tan B, Lu M. The concordance in lesion detection and characteristics between the Anatomical Intelligence and conventional breast ultrasound Scan method. BMC Med Imaging 2021; 21:102. [PMID: 34154558 PMCID: PMC8215794 DOI: 10.1186/s12880-021-00628-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2021] [Accepted: 05/24/2021] [Indexed: 11/17/2022] Open
Abstract
Background: The aim of this study was to investigate the concordance in lesion detection between conventional handheld ultrasound (HHUS) and the Anatomical Intelligence breast (AI-breast) ultrasound scan method. Results: AI-breast showed strong agreement between a resident and an experienced breast radiologist. The ICCs for scan time and for the number, clock-face location, distance to the nipple, largest diameter, and mean diameter of the lesions obtained by the resident and the experienced breast radiologist were 0.7642, 0.7692, 0.8651, 0.8436, 0.7502, and 0.8885, respectively; the corresponding ICCs between the two AI-breast practitioners were 0.7971, 0.7843, 0.9283, 0.8748, 0.7248, and 0.8163. The k values between the experienced breast radiologist and the resident for the image characteristics of boundary, morphology, aspect ratio, internal echo, and BI-RADS assessment were 0.7424, 0.7217, 0.6741, 0.6419, and 0.6241, respectively; the corresponding k values between the two AI-breast readers were 0.6531, 0.6762, 0.6439, 0.6137, and 0.5981. Conclusion: The anatomical intelligence breast US scanning method has excellent reproducibility in recording the lesion location and the distance from the nipple, and may be utilized in lesion surveillance in the future.
Affiliation(s)
- Juan Li: Ultrasound Medical Center, Sichuan Cancer Hospital Institute, Sichuan Cancer Center, School of Medicine, University of Electronic Science and Technology of China, No. 55, Section 4, South Renmin Road, Chengdu, China
- Hao Wang: Breast Surgeons Center, Sichuan Cancer Hospital Institute, Sichuan Cancer Center, School of Medicine, University of Electronic Science and Technology of China, No. 55, Section 4, South Renmin Road, Chengdu, China
- Lu Wang: Ultrasound Medical Center, Sichuan Cancer Hospital Institute, Sichuan Cancer Center, School of Medicine, University of Electronic Science and Technology of China, No. 55, Section 4, South Renmin Road, Chengdu, China
- Ting Wei: Ultrasound Medical Center, Sichuan Cancer Hospital Institute, Sichuan Cancer Center, School of Medicine, University of Electronic Science and Technology of China, No. 55, Section 4, South Renmin Road, Chengdu, China
- Minggang Wu: Ultrasound Medical Center, Sichuan Cancer Hospital Institute, Sichuan Cancer Center, School of Medicine, University of Electronic Science and Technology of China, No. 55, Section 4, South Renmin Road, Chengdu, China
- Tingting Li: Ultrasound Medical Center, Sichuan Cancer Hospital Institute, Sichuan Cancer Center, School of Medicine, University of Electronic Science and Technology of China, No. 55, Section 4, South Renmin Road, Chengdu, China
- Jifen Liao: Ultrasound Medical Center, Sichuan Cancer Hospital Institute, Sichuan Cancer Center, School of Medicine, University of Electronic Science and Technology of China, No. 55, Section 4, South Renmin Road, Chengdu, China
- Bo Tan: Ultrasound Medical Center, Sichuan Cancer Hospital Institute, Sichuan Cancer Center, School of Medicine, University of Electronic Science and Technology of China, No. 55, Section 4, South Renmin Road, Chengdu, China
- Man Lu: Ultrasound Medical Center, Sichuan Cancer Hospital Institute, Sichuan Cancer Center, School of Medicine, University of Electronic Science and Technology of China, No. 55, Section 4, South Renmin Road, Chengdu, China
|
36
|
Abstract
Detecting objects in synthetic aperture radar (SAR) imagery has received much attention in recent years since SAR can operate in all-weather, day-and-night conditions. With the development of convolutional neural networks (CNNs), many methodologies have been proposed for SAR object detection. Despite this progress, existing detection networks are still limited in detection performance by the inherently noisy characteristics of SAR imagery; hence, a separate preprocessing step such as denoising (despeckling) is required before the SAR images can be used for deep learning. However, inappropriate denoising techniques can cause loss of detailed information, and even proper denoising methods do not always guarantee performance improvement. In this paper, we therefore propose a novel object detection framework that combines an unsupervised denoising network with a traditional two-stage detection network and leverages a strategy for fusing region proposals extracted from both the raw SAR image and the synthetically denoised SAR image. Extensive experiments on our own object detection datasets, constructed with remote sensing images from the TerraSAR-X and COSMO-SkyMed satellites, validate the effectiveness of our framework. The proposed framework outperforms models trained on only noisy SAR images or only despeckled SAR images under multiple backbone networks.
|
37
|
Multi-Scale Feature Fusion with Adaptive Weighting for Diabetic Retinopathy Severity Classification. ELECTRONICS 2021. [DOI: 10.3390/electronics10121369] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
Abstract
Diabetic retinopathy (DR) is the prime cause of blindness in people who suffer from diabetes. Automating DR diagnosis could help many patients avoid the risk of blindness by identifying the disease and supporting judgments at an early stage. The main focus of the present work is to propose a feasible scheme for DR severity level detection under the MobileNetV3 backbone network, based on multi-scale features of the retinal fundus image, and to improve the classification performance of the model. Firstly, a special residual attention module, RCAM, was designed for multi-scale feature extraction from different convolution layers. Then, feature fusion by an innovative adaptive weighting operation was carried out in each layer; the corresponding weight of each convolution block is updated automatically during model training, followed by global average pooling (GAP) and a division process to avoid over-fitting and remove non-critical features. In addition, Focal Loss is used as the loss function because of the class imbalance of DR images. The experimental results on the Kaggle APTOS 2019 contest dataset show that the proposed method for DR severity classification achieves an accuracy of 85.32%, a kappa statistic of 77.26%, and an AUC of 0.97. The comparison results also indicate that the model outperforms existing models on this dataset.
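Focal Loss, cited above for handling the class imbalance of DR images, has a standard binary form, FL(p_t) = -α(1 - p_t)^γ log(p_t). A minimal NumPy sketch; the γ and α defaults here are the commonly used values, not necessarily the paper's settings:

```python
import numpy as np

def focal_loss(probs, labels, gamma=2.0, alpha=0.25, eps=1e-7):
    """Binary focal loss: mean of -alpha * (1 - p_t)^gamma * log(p_t),
    where p_t is the predicted probability of the true class."""
    probs = np.asarray(probs, dtype=np.float64)
    labels = np.asarray(labels)
    p_t = np.where(labels == 1, probs, 1.0 - probs)
    p_t = np.clip(p_t, eps, 1.0 - eps)  # guard against log(0)
    return float(np.mean(-alpha * (1.0 - p_t) ** gamma * np.log(p_t)))

# gamma=0, alpha=1 recovers plain cross-entropy; gamma > 0 down-weights
# well-classified examples so that hard examples dominate the gradient.
loss = focal_loss([0.9, 0.2, 0.7], [1, 0, 1])
```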
|
38
|
Lal S, Rehman SU, Shah JH, Meraj T, Rauf HT, Damaševičius R, Mohammed MA, Abdulkareem KH. Adversarial Attack and Defence through Adversarial Training and Feature Fusion for Diabetic Retinopathy Recognition. SENSORS 2021; 21:s21113922. [PMID: 34200216 PMCID: PMC8201392 DOI: 10.3390/s21113922] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/02/2021] [Revised: 05/31/2021] [Accepted: 06/04/2021] [Indexed: 12/15/2022]
Abstract
Due to the rapid growth in artificial intelligence (AI) and deep learning (DL) approaches, the security and robustness of deployed algorithms need to be guaranteed. The susceptibility of DL algorithms to adversarial examples has been widely acknowledged: artificially crafted examples that humans consider benign can cause DL models to produce incorrect predictions, and these threats also manifest in practical applications in physical scenarios. Thus, adversarial attacks and defenses, and the reliability of machine learning, have drawn growing interest and have in recent years been a hot topic of research. We introduce a framework that provides a defensive model against the adversarial speckle-noise attack through adversarial training and a feature fusion strategy, which preserves classification with correct labelling. We evaluate and analyze adversarial attacks and defenses on retinal fundus images for the Diabetic Retinopathy recognition problem. Results obtained on the retinal fundus images, which are prone to adversarial attacks, are 99% accurate and show that the proposed defensive model is robust.
Affiliation(s)
- Sheeba Lal: Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt 47040, Pakistan
- Saeed Ur Rehman: Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt 47040, Pakistan
- Jamal Hussain Shah: Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt 47040, Pakistan
- Talha Meraj: Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt 47040, Pakistan
- Hafiz Tayyab Rauf (corresponding author): Department of Computer Science, Faculty of Engineering & Informatics, University of Bradford, Bradford BD7 1DP, UK
- Robertas Damaševičius (corresponding author): Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland
- Mazin Abed Mohammed: College of Computer Science and Information Technology, University of Anbar, Anbar 31001, Iraq
|
39
|
Hemorrhage Detection Based on 3D CNN Deep Learning Framework and Feature Fusion for Evaluating Retinal Abnormality in Diabetic Patients. SENSORS 2021; 21:s21113865. [PMID: 34205120 PMCID: PMC8199947 DOI: 10.3390/s21113865] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/29/2021] [Revised: 05/29/2021] [Accepted: 06/01/2021] [Indexed: 01/07/2023]
Abstract
Diabetic retinopathy (DR) is the main cause of blindness in diabetic patients. Early and accurate diagnosis can improve the analysis and prognosis of the disease. One of the earliest symptoms of DR is hemorrhages in the retina. Therefore, we propose a new method for accurate hemorrhage detection in retinal fundus images. First, the proposed method uses a modified contrast enhancement method to improve the edge details of the input retinal fundus images. In the second stage, a new convolutional neural network (CNN) architecture is proposed to detect hemorrhages, and a modified pre-trained CNN model is used to extract features from the detected hemorrhages. In the third stage, all extracted feature vectors are fused using the convolutional sparse image decomposition method, and finally, the best features are selected using the multi-logistic regression controlled entropy variance approach. The proposed method is evaluated on 1509 images from the HRF, DRIVE, STARE, MESSIDOR, DIARETDB0, and DIARETDB1 databases and achieves an average accuracy of 97.71%, which is superior to previous works. Moreover, the proposed hemorrhage detection system attains better performance, in terms of visual quality and quantitative analysis, in comparison with state-of-the-art methods.
|
40
|
Han Y, Huang L, Hong Z, Cao S, Zhang Y, Wang J. Deep Supervised Residual Dense Network for Underwater Image Enhancement. SENSORS 2021; 21:s21093289. [PMID: 34068741 PMCID: PMC8126201 DOI: 10.3390/s21093289] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/16/2021] [Revised: 05/01/2021] [Accepted: 05/05/2021] [Indexed: 11/18/2022]
Abstract
Underwater images are important carriers and forms of underwater information, playing a vital role in exploring and utilizing marine resources. However, underwater images have low contrast and blurred details because of the absorption and scattering of light. In recent years, deep learning has been widely used in underwater image enhancement and restoration because of its powerful feature learning capabilities, but shortcomings remain in detail enhancement. To address this problem, this paper proposes a deep supervised residual dense network (DS_RD_Net), which is used to better learn the mapping relationship between clear in-air images and synthetic underwater degraded images. DS_RD_Net first uses residual dense blocks to extract features and enhance feature utilization; then, it adds residual path blocks between the encoder and decoder to reduce the semantic differences between low-level and high-level features; finally, it employs a deep supervision mechanism to guide network training and improve gradient propagation. Experimental results (PSNR of 36.2, SSIM of 96.5%, and UCIQE of 0.53) demonstrate that, compared with other image enhancement methods, the proposed method fully retains the local details of the image while performing color restoration and defogging, achieving good qualitative and quantitative effects.
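Of the metrics reported in this abstract, PSNR is straightforward to reproduce; a minimal NumPy sketch (SSIM and UCIQE require windowed and colour-space statistics and are omitted):

```python
import numpy as np

def psnr(reference, restored, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two same-shape images."""
    reference = np.asarray(reference, dtype=np.float64)
    restored = np.asarray(restored, dtype=np.float64)
    mse = np.mean((reference - restored) ** 2)
    if mse == 0:
        return float("inf")  # identical images: PSNR is unbounded
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Higher PSNR means the restored image is closer to the reference; values in the mid-30s, as reported above, indicate a close match.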
|
41
|
Ramasamy LK, Padinjappurathu SG, Kadry S, Damaševičius R. Detection of diabetic retinopathy using a fusion of textural and ridgelet features of retinal images and sequential minimal optimization classifier. PeerJ Comput Sci 2021; 7:e456. [PMID: 34013026 PMCID: PMC8114804 DOI: 10.7717/peerj-cs.456] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2020] [Accepted: 03/03/2021] [Indexed: 06/12/2023]
Abstract
Diabetes is one of the most prevalent diseases in the world: a metabolic disorder characterized by high blood sugar. Complications of diabetes can lead to Diabetic Retinopathy (DR). The early stages of DR may show no signs or cause only minor vision problems, but later stages of the disease can lead to blindness. DR diagnosis is an exceedingly difficult task because of changes in the retina across the disease stages. An automatic early DR detection method can save a patient's vision and can also support ophthalmologists in DR screening. This paper develops a model for the diagnosis of DR. Initially, we extract and fuse ophthalmoscopic features from the retina images based on textural gray-level features, such as the co-occurrence and run-length matrices, as well as the coefficients of the Ridgelet Transform. Based on the retina features, Sequential Minimal Optimization (SMO) classification is used to classify diabetic retinopathy. For performance analysis, openly accessible retinal image datasets are used, and the findings of the experiments demonstrate the quality and efficacy of the proposed method (we achieved 98.87% sensitivity, 95.24% specificity, and 97.05% accuracy on the DIARETDB1 dataset, and 90.9% sensitivity, 91.0% specificity, and 91.0% accuracy on the KAGGLE dataset).
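The gray-level co-occurrence features mentioned in this abstract are derived from the co-occurrence matrix itself; a minimal NumPy sketch for a single pixel offset (real pipelines, and presumably the paper, aggregate several offsets and angles, and add run-length and Ridgelet features on top):

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Normalized grey-level co-occurrence matrix for offset (dx, dy).
    `image` must already be quantized to integers in [0, levels)."""
    image = np.asarray(image)
    h, w = image.shape
    m = np.zeros((levels, levels), dtype=np.float64)
    for y in range(h - dy):
        for x in range(w - dx):
            m[image[y, x], image[y + dy, x + dx]] += 1.0
    return m / m.sum()

def glcm_contrast(m):
    """Contrast feature: sum over (i - j)^2 * P(i, j)."""
    i, j = np.indices(m.shape)
    return float(np.sum(m * (i - j) ** 2))
```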
|
42
|
Liu G, Li M, Zhang W, Gu J. Subpixel Matching Using Double-Precision Gradient-Based Method for Digital Image Correlation. SENSORS 2021; 21:s21093140. [PMID: 33946508 PMCID: PMC8125022 DOI: 10.3390/s21093140] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/15/2021] [Revised: 04/04/2021] [Accepted: 04/28/2021] [Indexed: 11/27/2022]
Abstract
Digital image correlation (DIC) for displacement and strain measurement has flourished in recent years. The DIC approach extracts displacement from a series of images in two steps, integer-pixel and subpixel matching, and identification accuracy mainly depends on the latter. A subpixel displacement matching method, named the double-precision gradient-based algorithm (DPG), is proposed in this study. First, the integer-pixel displacement is identified using a coarse-fine search algorithm. To improve the accuracy and anti-noise capability of the subpixel extraction step, the traditional gradient-based method is used to analyze the speckle patterns, and the influence of noise is taken into account. The two nearest integer pixels in one direction are both used as interpolation centers, and two subpixel displacements are extracted by a five-point bicubic spline interpolation algorithm using these two centers. A novel combination coefficient that accounts for contaminating noise is presented to merge the two subpixel displacements into the final identified displacement. Results from a simulated speckle pattern and a painted-beam bending test show that the accuracy of the proposed method is four times that of the traditional gradient-based method, reaching the same high accuracy as the Newton–Raphson method. The efficiency of the proposed method reaches 92.67%, higher than that of the Newton–Raphson method, and it has better anti-noise performance and stability.
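The paper's DPG algorithm is specific to its double-precision bicubic-spline scheme. As a generic illustration of the underlying idea of refining an integer-pixel match to subpixel precision, here is a standard three-point parabolic fit around the correlation peak (not the authors' method):

```python
import numpy as np

def subpixel_peak(values):
    """Refine the location of the maximum of a 1-D correlation profile
    to subpixel precision by fitting a parabola through the integer peak
    and its two neighbours."""
    values = np.asarray(values, dtype=np.float64)
    i = int(np.argmax(values))
    if i == 0 or i == len(values) - 1:
        return float(i)  # peak at the edge: no neighbours to fit
    y0, y1, y2 = values[i - 1], values[i], values[i + 1]
    offset = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
    return i + offset
```

For a profile that is locally parabolic, the recovered location is exact; real correlation profiles are only approximately so, which is why interpolation-based schemes like the paper's can do better.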
|
43
|
Sediqi KM, Lee HJ. A Novel Upsampling and Context Convolution for Image Semantic Segmentation. SENSORS 2021; 21:s21062170. [PMID: 33804591 PMCID: PMC8003770 DOI: 10.3390/s21062170] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/01/2021] [Revised: 03/14/2021] [Accepted: 03/15/2021] [Indexed: 11/16/2022]
Abstract
Semantic segmentation, which refers to pixel-wise classification of an image, is a fundamental topic in computer vision owing to its growing importance in the robot vision and autonomous driving sectors. It provides rich information about objects in the scene, such as object boundary, category, and location. Recent methods for semantic segmentation often employ an encoder-decoder structure using deep convolutional neural networks. The encoder part extracts features of the image using several filters and pooling operations, whereas the decoder part gradually recovers the low-resolution feature maps of the encoder into a full input-resolution feature map for pixel-wise prediction. However, encoder-decoder variants for semantic segmentation suffer from severe spatial information loss, caused by pooling operations or strided convolutions, and do not consider the context in the scene. In this paper, we propose a novel dense upsampling convolution method based on a guided filter to effectively preserve the spatial information of the image in the network. We further propose a novel local context convolution method that not only covers larger-scale objects in the scene but also covers them densely for precise object boundary delineation. Theoretical analyses and experimental results on several benchmark datasets verify the effectiveness of our method. Qualitatively, our approach delineates object boundaries at a level of accuracy beyond that of the current best methods. Quantitatively, we report a new record of 82.86% and 81.62% pixel accuracy on the ADE20K and Pascal-Context benchmark datasets, respectively. In comparison with the state-of-the-art methods, the proposed method offers promising improvements.
|
44
|
Huang X, Hu Z, Yue X, Cui Y, Cui J. Expression of Inflammatory Factors in Critically Ill Patients with Urosepticemia and the Imaging Analysis of the Severity of the Disease. JOURNAL OF HEALTHCARE ENGINEERING 2021; 2021:6659435. [PMID: 33688422 PMCID: PMC7914102 DOI: 10.1155/2021/6659435] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/22/2020] [Revised: 02/06/2021] [Accepted: 02/15/2021] [Indexed: 11/17/2022]
Abstract
Urosepsis is a complex inflammatory response of the body to infection with a high fatality rate; it is one of the main causes of death in non-cardiovascular intensive care units. Nevertheless, in daily clinical practice, early sepsis is often not detected. In this paper, discharged cases of urosepsis from the Department of Urology and Critical Care Medicine of a university hospital were collected as the observation group, and common urinary tract infection cases were selected as the control group. We sorted and summarized the discharged case information of both groups. The results showed that, after renal pelvis perfusion, the expression of HMGB1 protein and mRNA increased, and the expression of TLR4 increased; inhibiting HMGB1 reduced the expression of inflammatory factors caused by perfusion and reduced the infiltration of neutrophils and macrophages. In addition, rHMGB1 treatment promoted the expression of inflammatory factors caused by perfusion and aggravated the infiltration of neutrophils and macrophages. We found that inhibition of HMGB1 suppressed the expression of TLR4/MyD88 signaling molecules, while rHMGB1 treatment enhanced it.
Affiliation(s)
- Xia Huang: Department of Critical Care Medicine Neurosurgery, Chongqing University Three Gorges Hospital, Wanzhou, Chongqing 404000, China
- Zongjun Hu: Department of Critical Care Medicine Neurosurgery, Chongqing University Three Gorges Hospital, Wanzhou, Chongqing 404000, China
- Xi Yue: Department of Critical Care Medicine Neurosurgery, Chongqing University Three Gorges Hospital, Wanzhou, Chongqing 404000, China
- Yong Cui: Department of Critical Care Medicine Neurosurgery, Chongqing University Three Gorges Hospital, Wanzhou, Chongqing 404000, China
- Jiwen Cui: Department of Critical Care Medicine Neurosurgery, Chongqing University Three Gorges Hospital, Wanzhou, Chongqing 404000, China
|
45
|
Valiuškaitė V, Raudonis V, Maskeliūnas R, Damaševičius R, Krilavičius T. Deep Learning Based Evaluation of Spermatozoid Motility for Artificial Insemination. SENSORS (BASEL, SWITZERLAND) 2020; 21:E72. [PMID: 33374461 PMCID: PMC7795243 DOI: 10.3390/s21010072] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/06/2020] [Revised: 12/15/2020] [Accepted: 12/21/2020] [Indexed: 12/15/2022]
Abstract
We propose a deep learning method based on the Region Based Convolutional Neural Networks (R-CNN) architecture for the evaluation of sperm head motility in human semen videos. The neural network performs the segmentation of sperm heads, while the proposed central coordinate tracking algorithm allows us to calculate the movement speed of sperm heads. We achieved 91.77% (95% CI, 91.11-92.43%) accuracy of sperm head detection on the VISEM (A Multimodal Video Dataset of Human Spermatozoa) sperm sample video dataset. The mean absolute error (MAE) of sperm head vitality prediction was 2.92 (95% CI, 2.46-3.37), while the Pearson correlation between actual and predicted sperm head vitality was 0.969. These experimental results show the applicability of the proposed method in an automated artificial insemination workflow.
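The MAE and Pearson correlation reported in this abstract are standard agreement metrics; a minimal NumPy sketch of both:

```python
import numpy as np

def mean_absolute_error(actual, predicted):
    """MAE between measured and predicted values."""
    actual = np.asarray(actual, dtype=np.float64)
    predicted = np.asarray(predicted, dtype=np.float64)
    return float(np.mean(np.abs(actual - predicted)))

def pearson_r(actual, predicted):
    """Pearson correlation coefficient between two sequences."""
    return float(np.corrcoef(actual, predicted)[0, 1])
```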
Affiliation(s)
- Viktorija Valiuškaitė: Department of Control Systems, Kaunas University of Technology, 51423 Kaunas, Lithuania
- Vidas Raudonis: Department of Control Systems, Kaunas University of Technology, 51423 Kaunas, Lithuania
- Rytis Maskeliūnas: Department of Multimedia Engineering, Kaunas University of Technology, 51423 Kaunas, Lithuania
- Robertas Damaševičius: Department of Applied Informatics, Vytautas Magnus University, 44404 Kaunas, Lithuania; Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland
- Tomas Krilavičius: Department of Applied Informatics, Vytautas Magnus University, 44404 Kaunas, Lithuania
|
46
|
de Albuquerque VHC, Gupta D, De Falco I, Sannino G, Bouguila N. Special issue on Bio-inspired optimization techniques for Biomedical Data Analysis: Methods and applications. Appl Soft Comput 2020. [DOI: 10.1016/j.asoc.2020.106672] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
|
47
|
Abstract
Fundus blood vessel image segmentation plays an important role in the diagnosis and treatment of diseases and is the basis of computer-aided diagnosis. Feature information in retinal blood vessel images is relatively complicated, and existing algorithms sometimes struggle to segment it effectively. Aiming at the low accuracy and low sensitivity of existing segmentation methods, an improved U-shaped neural network (MRU-NET) segmentation method for retinal vessels is proposed. Firstly, an image enhancement algorithm and a random segmentation method are used to address the low contrast and insufficient image data of the original images; the smaller image blocks produced by random segmentation also help reduce the complexity of the U-shaped neural network model. Secondly, residual learning is introduced into the encoder and decoder to improve the efficiency of feature use and reduce information loss, and a feature fusion module is introduced between the encoder and decoder to extract image features at different granularities. Finally, a feature balancing module is added to the skip connections to resolve the semantic gap between the low-dimensional features in the encoder and the high-dimensional features in the decoder. Experimental results show that our method has better accuracy and sensitivity than some state-of-the-art methods on the DRIVE and STARE datasets (DRIVE: accuracy (ACC) = 0.9611, sensitivity (SE) = 0.8613; STARE: ACC = 0.9662, SE = 0.7887).
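The ACC and SE figures quoted in this abstract (and in several of the other vessel-segmentation entries above) follow from pixel-wise confusion counts between the predicted and ground-truth vessel masks; a minimal NumPy sketch:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel-wise accuracy, sensitivity, and specificity for binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.sum(pred & truth)    # vessel pixels correctly detected
    tn = np.sum(~pred & ~truth)  # background correctly rejected
    fp = np.sum(pred & ~truth)   # background marked as vessel
    fn = np.sum(~pred & truth)   # vessel pixels missed
    acc = (tp + tn) / (tp + tn + fp + fn)
    se = tp / (tp + fn)
    sp = tn / (tn + fp)
    return float(acc), float(se), float(sp)
```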
|