1. Mu N, Guo J, Wang R. Automated polyp segmentation based on a multi-distance feature dissimilarity-guided fully convolutional network. Mathematical Biosciences and Engineering 2023; 20:20116-20134. PMID: 38052639. DOI: 10.3934/mbe.2023891.
Abstract
Colorectal malignancies often arise from adenomatous polyps, which typically begin as solitary, asymptomatic growths before progressing to malignancy. Colonoscopy is widely recognized as a highly efficacious clinical polyp-detection method, offering valuable visual data that facilitates precise identification and subsequent removal of these tumors. Nevertheless, accurately segmenting individual polyps remains considerably difficult because polyps exhibit intricate and variable characteristics, including shape, size, color, quantity and growth context, across different stages. Contextual structures that resemble polyps significantly hamper the ability of commonly used convolutional neural network (CNN)-based automatic detection models to capture valid polyp features, and large-receptive-field CNN models often overlook the details of small polyps, leading to false and missed detections. To tackle these challenges, we introduce a novel approach for automatic polyp segmentation, the multi-distance feature dissimilarity-guided fully convolutional network. This approach comprises three essential components: an encoder-decoder, a multi-distance difference (MDD) module and a hybrid loss (HL) module. Specifically, the MDD module employs a multi-layer feature subtraction (MLFS) strategy to propagate features from the encoder to the decoder, extracting short-distance information differences between neighboring layers' features as well as both short- and long-distance feature differences across layers. Drawing inspiration from pyramids, the MDD module continuously acquires discriminative features from neighboring or more distant layers, which strengthens feature complementarity across layers. The HL module supervises the feature maps extracted at each layer of the network to improve prediction accuracy. Our experimental results on four challenge datasets demonstrate that the proposed approach outperforms five current state-of-the-art approaches on all six evaluation criteria.
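The multi-layer feature subtraction idea described above can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' implementation: the shapes, the absolute difference and the nearest-neighbor 2x upsampling are all assumptions. The difference between a shallow feature map and its upsampled deeper neighbor highlights information present at one scale but not the other.

```python
import numpy as np

def upsample2x(f):
    # Nearest-neighbor 2x upsampling of a (C, H, W) feature map.
    return f.repeat(2, axis=1).repeat(2, axis=2)

def multi_layer_feature_subtraction(feats):
    """Absolute differences between neighboring encoder layers.

    feats: list of (C, H, W) maps, deepest last, each half the spatial
    size of the previous one (hypothetical shapes for illustration).
    Returns one difference map per neighboring pair, at the shallower
    layer's resolution.
    """
    diffs = []
    for shallow, deep in zip(feats, feats[1:]):
        diffs.append(np.abs(shallow - upsample2x(deep)))
    return diffs

f1 = np.ones((8, 16, 16))   # shallow-layer features
f2 = np.zeros((8, 8, 8))    # deeper-layer features at half resolution
d = multi_layer_feature_subtraction([f1, f2])
```

Longer-distance ("across layers") differences would follow the same pattern with more than one upsampling step between the pair.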
Affiliation(s)
- Nan Mu
- College of Computer Science, Sichuan Normal University, Chengdu 610101, China
- Visual Computing and Virtual Reality Key Laboratory of Sichuan, Sichuan Normal University, Chengdu 610068, China
- Education Big Data Collaborative Innovation Center of Sichuan 2011, Chengdu 610101, China
- Jinjia Guo
- Chongqing University-University of Cincinnati Joint Co-op Institution, Chongqing University, Chongqing 400044, China
- Rong Wang
- College of Computer Science, Sichuan Normal University, Chengdu 610101, China
- Visual Computing and Virtual Reality Key Laboratory of Sichuan, Sichuan Normal University, Chengdu 610068, China
- Education Big Data Collaborative Innovation Center of Sichuan 2011, Chengdu 610101, China
2. Song H, Liu C, Li S, Zhang P. TS-GCN: A novel tumor segmentation method integrating transformer and GCN. Mathematical Biosciences and Engineering 2023; 20:18173-18190. PMID: 38052553. DOI: 10.3934/mbe.2023807.
Abstract
As one of the critical branches of medical image processing, segmentation of breast cancer tumors is of great importance for planning surgical interventions, radiotherapy and chemotherapy. Breast tumor segmentation faces several challenges, including the inherent complexity and heterogeneity of breast tissue, imaging artifacts and noise in medical images, low contrast between the tumor region and healthy tissue, and inconsistent tumor sizes. Furthermore, existing segmentation methods may not fully capture the rich spatial and contextual information in small regions of breast images, leading to suboptimal performance. In this paper, we propose a novel breast tumor segmentation method, the transformer and graph convolutional network (TS-GCN), for medical imaging analysis. Specifically, we designed a feature aggregation network to fuse the features extracted by the transformer, GCN and convolutional neural network (CNN) branches. The CNN branch extracts the image's local deep features, while the transformer and GCN branches better capture spatial and contextual dependencies among pixels. By leveraging the strengths of three feature extraction networks, our method achieved superior segmentation performance on the BUSI dataset and dataset B. TS-GCN showed the best performance on several indexes, with Acc of 0.9373, Dice of 0.9058, IoU of 0.7634, F1-score of 0.9338 and AUC of 0.9692, outperforming other state-of-the-art methods. This segmentation method holds promise for medical image analysis and the diagnosis of other diseases.
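The feature-aggregation step can be sketched roughly as follows. This is a hypothetical minimal form (random stand-in weights, flattened spatial dimension), not the actual TS-GCN fusion design: concatenate the three branches' features along channels and project them with a learned 1x1 mapping.

```python
import numpy as np

def aggregate_features(f_cnn, f_trans, f_gcn, w):
    """Fuse three (C, N) feature maps (N = number of pixels/nodes) by
    concatenation along channels followed by a learned projection.
    w: (C_out, 3*C) matrix, standing in for a 1x1 convolution."""
    stacked = np.concatenate([f_cnn, f_trans, f_gcn], axis=0)  # (3C, N)
    return w @ stacked                                          # (C_out, N)

rng = np.random.default_rng(0)
c, n = 4, 6
fused = aggregate_features(rng.normal(size=(c, n)),
                           rng.normal(size=(c, n)),
                           rng.normal(size=(c, n)),
                           rng.normal(size=(2, 3 * c)))
```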
Affiliation(s)
- Haiyan Song
- The Second Affiliated Hospital of Shandong University of Traditional Chinese Medicine, Jinan, China
- Cuihong Liu
- Affiliated Eye Hospital of Shandong University of Traditional Chinese Medicine, Jinan, China
- School of Nursing, Shandong University of Traditional Chinese Medicine, Jinan, China
- Shengnan Li
- The Second Affiliated Hospital of Shandong University of Traditional Chinese Medicine, Jinan, China
- Peixiao Zhang
- The Second Affiliated Hospital of Shandong University of Traditional Chinese Medicine, Jinan, China
3. Chen M, Yi S, Yang M, Yang Z, Zhang X. UNet segmentation network of COVID-19 CT images with multi-scale attention. Mathematical Biosciences and Engineering 2023; 20:16762-16785. PMID: 37920033. DOI: 10.3934/mbe.2023747.
Abstract
In recent years, the global outbreak of COVID-19 has posed an extremely serious risk to human life, and to maximize physicians' diagnostic efficiency it is extremely valuable to investigate methods for lesion segmentation in COVID-19 images. To address the problems of existing deep learning models, such as low segmentation accuracy, poor generalization, large parameter counts and difficult deployment, we propose a UNet segmentation network integrating multi-scale attention for COVID-19 CT images. Specifically, the UNet model serves as the base network, and a multi-scale convolutional attention structure is proposed in the encoder stage to enhance the network's ability to capture multi-scale information. Second, a local channel attention module is proposed that extracts spatial information by modeling local relationships to generate channel-domain weights, supplementing detailed information about the target region, reducing information redundancy and enhancing important information. Moreover, the encoder uses the Meta-ACON activation function to avoid overfitting and to improve the model's representational ability. Extensive experimental results on publicly available mixed datasets show that, compared with current mainstream image segmentation algorithms, the proposed method more effectively improves the accuracy and generalization of COVID-19 lesion segmentation and provides help for medical diagnosis and analysis.
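A channel attention module of the kind described, generating channel weights from pooled local statistics, can be sketched in an ECA-style toy form. The kernel size, the untrained averaging kernel and the pooling choice are assumptions for illustration; the paper's actual module differs in its details.

```python
import numpy as np

def local_channel_attention(f, k=3):
    """Channel weights from local relationships: global-average-pool each
    channel to a descriptor, mix neighboring channels with a small 1-D
    convolution, then gate the channels through a sigmoid.
    f: (C, H, W) feature map."""
    desc = f.mean(axis=(1, 2))                         # (C,) per-channel stats
    pad = k // 2
    padded = np.pad(desc, pad, mode="edge")
    kernel = np.full(k, 1.0 / k)                       # untrained stand-in weights
    mixed = np.convolve(padded, kernel, mode="valid")  # (C,) local channel mixing
    weights = 1.0 / (1.0 + np.exp(-mixed))             # sigmoid gate
    return f * weights[:, None, None]

f = np.ones((4, 2, 2))
out = local_channel_attention(f)
```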
Affiliation(s)
- Mingju Chen
- School of Automation and Information Engineering, Sichuan University of Science & Engineering, Yibin 644002, China
- Artificial Intelligence Key Laboratory of Sichuan Province, Sichuan University of Science & Engineering, Yibin 644002, China
- Sihang Yi
- School of Automation and Information Engineering, Sichuan University of Science & Engineering, Yibin 644002, China
- Artificial Intelligence Key Laboratory of Sichuan Province, Sichuan University of Science & Engineering, Yibin 644002, China
- Mei Yang
- Zigong Third People's Hospital, Zigong 643000, China
- Zhiwen Yang
- School of Automation and Information Engineering, Sichuan University of Science & Engineering, Yibin 644002, China
- Artificial Intelligence Key Laboratory of Sichuan Province, Sichuan University of Science & Engineering, Yibin 644002, China
- Xingyue Zhang
- School of Automation and Information Engineering, Sichuan University of Science & Engineering, Yibin 644002, China
- Artificial Intelligence Key Laboratory of Sichuan Province, Sichuan University of Science & Engineering, Yibin 644002, China
4. Zhou W, Wang T, He Y, Xie S, Luo A, Peng B, Yin L. Contrast U-Net driven by sufficient texture extraction for carotid plaque detection. Mathematical Biosciences and Engineering 2023; 20:15623-15640. PMID: 37919983. DOI: 10.3934/mbe.2023697.
Abstract
Ischemic heart disease or stroke caused by the rupture or dislodgement of a carotid plaque poses a huge risk to human health. Obtaining accurate information on a patient's carotid plaque characteristics and assisting clinicians in identifying atherosclerotic areas is therefore significant foundational work. Existing work in this field has not deliberately extracted carotid texture information from ultrasound images, yet texture is a very important component of carotid ultrasound images. To make full use of this texture information, a novel U-Net-based network called Contrast U-Net is designed in this paper. First, the proposed network relies mainly on a contrast block to extract accurate texture information. Moreover, to help the network better learn the texture information of each channel, a squeeze-and-excitation block is introduced to assist the skip connections from encoder to decoder. Experimental results on intravascular ultrasound image datasets show that the proposed network achieves superior performance in carotid plaque detection compared with other popular models.
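The squeeze-and-excitation block is a standard component and can be sketched concisely. The weights below are untrained stand-ins chosen only to make the example deterministic; real SE blocks learn them end to end.

```python
import numpy as np

def se_block(f, w1, w2):
    """Squeeze-and-excitation: global-average-pool each channel to a
    descriptor, pass it through two small fully connected layers
    (ReLU then sigmoid), and rescale the channels with the result.
    f: (C, H, W); w1: (C//r, C), w2: (C, C//r) for reduction ratio r."""
    z = f.mean(axis=(1, 2))                                   # squeeze: (C,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ np.maximum(w1 @ z, 0))))   # excitation: (C,)
    return f * s[:, None, None]                               # channel recalibration

f = np.ones((4, 3, 3))
w1 = np.full((2, 4), 0.25)   # stand-in weights, reduction ratio r = 2
w2 = np.full((4, 2), 0.5)
out = se_block(f, w1, w2)
```

Inserting such a block on a skip connection reweights the encoder channels before they are concatenated into the decoder.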
Affiliation(s)
- WenJun Zhou
- Ultrasound in Cardiac Electrophysiology and Biomechanics Key Laboratory of Sichuan Province, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu 611731, China
- School of Computer Science, Southwest Petroleum University, Chengdu 610500, China
- Tianfei Wang
- School of Computer Science, Southwest Petroleum University, Chengdu 610500, China
- Yuhang He
- School of Computer Science, Southwest Petroleum University, Chengdu 610500, China
- Shenghua Xie
- Ultrasound in Cardiac Electrophysiology and Biomechanics Key Laboratory of Sichuan Province, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu 611731, China
- Anguo Luo
- Ultrasound in Cardiac Electrophysiology and Biomechanics Key Laboratory of Sichuan Province, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu 611731, China
- Bo Peng
- Ultrasound in Cardiac Electrophysiology and Biomechanics Key Laboratory of Sichuan Province, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu 611731, China
- School of Computer Science, Southwest Petroleum University, Chengdu 610500, China
- Lixue Yin
- Ultrasound in Cardiac Electrophysiology and Biomechanics Key Laboratory of Sichuan Province, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu 611731, China
5. Zhang S, Niu Y. LcmUNet: A Lightweight Network Combining CNN and MLP for Real-Time Medical Image Segmentation. Bioengineering (Basel) 2023; 10:712. PMID: 37370643. PMCID: PMC10295621. DOI: 10.3390/bioengineering10060712.
Abstract
In recent years, UNet and its improved variants have become the main methods for medical image segmentation. Although these models achieve excellent segmentation accuracy, their large number of parameters and high computational complexity make rapid, real-time segmentation for therapy and diagnosis difficult. To address this problem, we introduce a lightweight medical image segmentation network (LcmUNet) based on CNN and MLP. We designed LcmUNet's structure with model performance, parameter count and computational complexity in mind: the first three layers are convolutional and the last two are MLP layers. In the convolutional part, we propose an LDA module that combines asymmetric convolution, depth-wise separable convolution and an attention mechanism to reduce the number of parameters while maintaining strong feature extraction. In the MLP part, we propose an LMLP module that enhances contextual information while focusing on local information, improving segmentation accuracy while maintaining high inference speed. The network also includes skip connections between the encoder and decoder at various levels. In extensive experiments, our network achieves accurate real-time segmentation. With only 1.49 million parameters and without pre-training, LcmUNet performed impressively on different datasets: on ISIC2018 it achieved an IoU of 85.19%, recall of 92.07% and precision of 92.99%; on BUSI, an IoU of 63.99%, recall of 79.96% and precision of 76.69%; and on Kvasir-SEG, an IoU of 81.89%, recall of 88.93% and precision of 91.79%.
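The parameter saving from depth-wise separable convolution, one of the LDA module's ingredients, is easy to verify by counting. The channel/kernel sizes below are arbitrary examples, not LcmUNet's actual configuration.

```python
def conv_params(c_in, c_out, k):
    # Parameter count of a standard k x k convolution (bias ignored).
    return c_in * c_out * k * k

def dsc_params(c_in, c_out, k):
    # Depth-wise separable convolution: one k x k filter per input channel
    # (depthwise) followed by a 1 x 1 pointwise convolution across channels.
    return c_in * k * k + c_in * c_out

standard = conv_params(64, 128, 3)   # 64 * 128 * 9
separable = dsc_params(64, 128, 3)   # 64 * 9 + 64 * 128
```

For this example the separable form needs roughly an eighth of the parameters, which is how such networks stay lightweight.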
Affiliation(s)
- Yanmin Niu
- School of Computer and Information Science, Chongqing Normal University, Chongqing 401331, China
6. Abu M, Zahri NAH, Amir A, Ismail MI, Yaakub A, Fukumoto F, Suzuki Y. Analysis of the Effectiveness of Metaheuristic Methods on Bayesian Optimization in the Classification of Visual Field Defects. Diagnostics (Basel) 2023; 13:1946. PMID: 37296798. DOI: 10.3390/diagnostics13111946.
Abstract
Bayesian optimization (BO) is commonly used to optimize the hyperparameters of transfer learning models and can significantly improve model performance. In BO, acquisition functions direct the exploration of the hyperparameter space during optimization. However, the computational cost of evaluating the acquisition function and updating the surrogate model can become prohibitively expensive as dimensionality increases, making the global optimum harder to reach, particularly in image classification tasks. Therefore, this study investigates and analyzes the effect of incorporating metaheuristic methods into BO to improve the performance of acquisition functions in transfer learning. By incorporating four metaheuristic methods, namely Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC) Optimization, Harris Hawks Optimization and Sailfish Optimization (SFO), the performance of the Expected Improvement (EI) acquisition function was observed in VGGNet models for multi-class classification of visual field defects. Beyond EI, comparative observations were also conducted with other acquisition functions, such as Probability of Improvement (PI), Upper Confidence Bound (UCB) and Lower Confidence Bound (LCB). The analysis demonstrates that SFO significantly enhanced BO by increasing mean accuracy by 9.6% for VGG-16 and 27.54% for VGG-19. As a result, the best validation accuracies obtained for VGG-16 and VGG-19 are 98.6% and 98.34%, respectively.
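The Expected Improvement acquisition function the study optimizes has a closed form under a Gaussian surrogate posterior, which a metaheuristic then maximizes over candidate hyperparameters. A minimal sketch for the maximization convention (the exploration parameter xi is a common addition, not necessarily the paper's setting):

```python
import math

def expected_improvement(mu, sigma, best, xi=0.01):
    """Expected Improvement at a candidate point whose surrogate posterior
    is Gaussian with mean mu and standard deviation sigma, given the best
    observed objective value so far:
        EI = (mu - best - xi) * Phi(z) + sigma * phi(z),
        z  = (mu - best - xi) / sigma,
    where Phi/phi are the standard normal CDF/PDF."""
    if sigma == 0.0:
        return 0.0
    imp = mu - best - xi
    z = imp / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return imp * cdf + sigma * pdf

ei = expected_improvement(1.0, 1.0, 0.0, xi=0.0)
```

A swarm method such as PSO or SFO would simply evaluate this function at each particle's position instead of relying on gradient-based inner optimization.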
Affiliation(s)
- Masyitah Abu
- Center of Excellence for Advance Computing, Faculty of Electronic Engineering & Technology, Universiti Malaysia Perlis, Kangar 01000, Malaysia
- Nik Adilah Hanin Zahri
- Center of Excellence for Advance Computing, Faculty of Electronic Engineering & Technology, Universiti Malaysia Perlis, Kangar 01000, Malaysia
- Amiza Amir
- Center of Excellence for Advance Computing, Faculty of Electronic Engineering & Technology, Universiti Malaysia Perlis, Kangar 01000, Malaysia
- Muhammad Izham Ismail
- Institute of Engineering Mathematics, Faculty of Applied and Human Sciences, Universiti Malaysia Perlis, Arau 02600, Malaysia
- Azhany Yaakub
- Department of Ophthalmology & Visual Science, Universiti Sains Malaysia, Kubang Kerian 16150, Malaysia
- Fumiyo Fukumoto
- Graduate Faculty of Interdisciplinary Research, University of Yamanashi, Kofu 400-0016, Japan
- Yoshimi Suzuki
- Graduate Faculty of Interdisciplinary Research, University of Yamanashi, Kofu 400-0016, Japan
7. Chakraborty GS, Batra S, Singh A, Muhammad G, Torres VY, Mahajan M. A Novel Deep Learning-Based Classification Framework for COVID-19 Assisted with Weighted Average Ensemble Modeling. Diagnostics (Basel) 2023; 13:1806. PMID: 37238290. DOI: 10.3390/diagnostics13101806.
Abstract
COVID-19 is an infectious disease caused by the deadly virus SARS-CoV-2 that affects the lungs of the patient. Different symptoms, including fever, muscle pain and respiratory syndrome, can be identified in COVID-19-affected patients. The disease needs to be diagnosed in a timely manner; otherwise the lung infection can become severe and the patient's life may be in danger. In this work, an ensemble deep learning-based technique is proposed for COVID-19 detection that can classify the disease with high accuracy, efficiency and reliability. A weighted average ensemble (WAE) prediction was performed by combining three CNN models, namely Xception, VGG19 and ResNet50V2, achieving 97.25% and 94.10% accuracy for binary and multiclass classification, respectively. To accurately detect the disease, different test methods have been proposed and developed, some of which are even used in real-time situations. RT-PCR is one of the most successful COVID-19 detection methods and is used worldwide with high accuracy and sensitivity; however, its complexity and time-consuming manual processes are limitations. To automate detection, researchers across the world have started applying deep learning to COVID-19 medical imaging. Although most existing systems offer high accuracy, limitations including high variance, overfitting and generalization errors can degrade system performance; causes include a lack of reliable data resources, missing preprocessing techniques and improper model selection, which ultimately create reliability issues. Reliability is an important factor for any healthcare system; here, transfer learning with better preprocessing techniques applied to two benchmark datasets makes the work more reliable. The weighted average ensemble technique with hyperparameter tuning ensures better accuracy than a randomly selected single CNN model.
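Weighted average ensembling itself is a small computation: fuse each model's class-probability outputs with fixed (tuned) weights and take the argmax. The models, weights and probabilities below are hypothetical, purely to show the mechanics.

```python
import numpy as np

def weighted_average_ensemble(prob_list, weights):
    """Fuse per-model class-probability predictions with fixed weights
    (normalized to sum to 1) and return the fused probabilities plus the
    resulting hard class predictions.
    prob_list: list of (n_samples, n_classes) arrays."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    fused = sum(wi * p for wi, p in zip(w, prob_list))
    return fused, fused.argmax(axis=1)

# Three hypothetical models' softmax outputs for one sample, two classes.
p1 = np.array([[0.9, 0.1]])
p2 = np.array([[0.2, 0.8]])
p3 = np.array([[0.6, 0.4]])
fused, preds = weighted_average_ensemble([p1, p2, p3], [0.5, 0.3, 0.2])
```

Hyperparameter tuning in this setting amounts to searching over the weight vector on a validation set.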
Affiliation(s)
- Gouri Shankar Chakraborty
- Department of Computer Science and Engineering, Lovely Professional University, Phagwara 144411, Punjab, India
- Salil Batra
- Department of Computer Science and Engineering, Lovely Professional University, Phagwara 144411, Punjab, India
- Aman Singh
- Higher Polytechnic School, Universidad Europea del Atlántico, C/Isabel Torres 21, 39011 Santander, Spain
- Department of Engineering, Universidad Internacional Iberoamericana, Arecibo, PR 00613, USA
- Uttaranchal Institute of Technology, Uttaranchal University, Dehradun 248007, Uttarakhand, India
- Ghulam Muhammad
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh 11543, Saudi Arabia
- Vanessa Yelamos Torres
- Department of Engineering, Universidad Internacional Iberoamericana, Arecibo, PR 00613, USA
- Engineering Research & Innovation Group, Universidad Europea del Atlántico, C/Isabel Torres 21, 39011 Santander, Spain
- Department of Project Management, Universidad Internacional Iberoamericana, Campeche C.P. 24560, Mexico
- Makul Mahajan
- Department of Computer Science and Engineering, Lovely Professional University, Phagwara 144411, Punjab, India
8. Sun Y, Li X, Liu Y, Yuan Z, Wang J, Shi C. A lightweight dual-path cascaded network for vessel segmentation in fundus image. Mathematical Biosciences and Engineering 2023; 20:10790-10814. PMID: 37322961. DOI: 10.3934/mbe.2023479.
Abstract
Automatic and fast segmentation of retinal vessels in fundus images is a prerequisite for diagnosing clinical ophthalmic diseases; however, high model complexity and low segmentation accuracy still limit its application. This paper proposes a lightweight dual-path cascaded network (LDPC-Net) for automatic and fast vessel segmentation, built from two U-shaped structures. First, we employed a structured discarding (SD) convolution module to alleviate overfitting in both codec parts. Second, we introduced the depthwise separable convolution (DSC) technique to reduce the model's parameter count. Third, a residual atrous spatial pyramid pooling (ResASPP) module is constructed in the connection layer to aggregate multi-scale information effectively. Finally, we performed comparative experiments on three public datasets. The results show that the proposed method achieves superior accuracy, connectivity and parameter efficiency, proving it a promising lightweight assisted tool for ophthalmic diseases.
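The atrous spatial pyramid pooling idea, applying one kernel at several dilation rates and aggregating the responses to capture multi-scale context, can be shown in 1-D. This is a toy (1-D, sum aggregation, no residual connection or learned weights), not the ResASPP module itself.

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """1-D atrous (dilated) convolution: kernel taps are spaced `dilation`
    samples apart, enlarging the receptive field at no parameter cost."""
    k = len(kernel)
    span = (k - 1) * dilation + 1
    out = np.zeros(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * dilation] for j in range(k))
    return out

def aspp(x, kernel, rates=(1, 2, 4)):
    """Spatial-pyramid-style aggregation: apply the same kernel at several
    dilation rates and sum the aligned (cropped) responses."""
    outs = [dilated_conv1d(x, kernel, r) for r in rates]
    n = min(len(o) for o in outs)
    return sum(o[:n] for o in outs)

signal = np.arange(10.0)
context = aspp(signal, np.ones(3))
```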
Affiliation(s)
- Yanxia Sun
- Department of Software Engineering, Harbin University of Science and Technology, Rongcheng 264300, China
- Xiang Li
- Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China, Shenzhen 518110, China
- Yuechang Liu
- Department of Software Engineering, Harbin University of Science and Technology, Rongcheng 264300, China
- Zhongzheng Yuan
- Department of Software Engineering, Harbin University of Science and Technology, Rongcheng 264300, China
- Jinke Wang
- Department of Software Engineering, Harbin University of Science and Technology, Rongcheng 264300, China
- Changfa Shi
- Mobile E-business Collaborative Innovation Center of Hunan Province, Hunan University of Technology and Business, Changsha 410205, China
9. Li Z, Huang J, Tong X, Zhang C, Lu J, Zhang W, Song A, Ji S. GL-FusionNet: Fusing global and local features to classify deep and superficial partial thickness burn. Mathematical Biosciences and Engineering 2023; 20:10153-10173. PMID: 37322927. DOI: 10.3934/mbe.2023445.
Abstract
Burns constitute one of the most common injuries in the world, and they can be very painful for the patient. In particular, many inexperienced clinicians easily confuse superficial partial-thickness and deep partial-thickness burns. Therefore, to make burn depth classification automated as well as accurate, we introduce a deep learning method. The methodology uses a U-Net to segment burn wounds; on this basis, a new burn thickness classification model that fuses global and local features (GL-FusionNet) is proposed. The classification model uses a ResNet50 to extract local features and a ResNet101 to extract global features, then applies element-wise addition for feature fusion to obtain the deep-partial or superficial-partial thickness classification. Burn images were collected clinically and were segmented and labeled by professional physicians. Among segmentation methods, the U-Net achieved a Dice score of 85.352 and an IoU of 83.916, the best results among all comparative experiments. For the classification model, experiments with different existing networks, fusion strategies and feature extraction methods were conducted; the proposed fusion network again achieved the best results, with accuracy of 93.523, recall of 93.67, precision of 93.51 and F1-score of 93.513. In addition, the proposed method can quickly complete auxiliary wound diagnosis in the clinic, greatly improving the efficiency of initial burn diagnosis and the nursing care of clinical medical staff.
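The "add" fusion step is simply an element-wise sum of two same-shaped feature vectors followed by a classifier head. The identity weights and two-dimensional features below are illustrative stand-ins, not the trained GL-FusionNet parameters.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def fuse_and_classify(local_feat, global_feat, w, b):
    """'Add' fusion: element-wise sum of two same-length feature vectors
    (e.g. pooled outputs of the two backbones), then a linear classifier."""
    fused = local_feat + global_feat
    return softmax(w @ fused + b)

local_feat = np.array([1.0, 0.0])    # e.g. pooled ResNet50 features
global_feat = np.array([0.0, 1.0])   # e.g. pooled ResNet101 features
probs = fuse_and_classify(local_feat, global_feat, np.eye(2), np.zeros(2))
```

Unlike concatenation, add fusion requires the two branches to produce features of the same dimensionality and keeps the classifier head small.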
Affiliation(s)
- Zhiwei Li
- School of Computer Engineering and Science, Shanghai University, Shanghai 200444, China
- Jie Huang
- Department of Burn Surgery, the First Affiliated Hospital of Naval Medical University, Shanghai 200444, China
- Xirui Tong
- Department of Burn Surgery, the First Affiliated Hospital of Naval Medical University, Shanghai 200444, China
- Chenbei Zhang
- School of Computer Engineering and Science, Shanghai University, Shanghai 200444, China
- Jianyu Lu
- Department of Burn Surgery, the First Affiliated Hospital of Naval Medical University, Shanghai 200444, China
- Wei Zhang
- Department of Burn Surgery, the First Affiliated Hospital of Naval Medical University, Shanghai 200444, China
- Anping Song
- School of Computer Engineering and Science, Shanghai University, Shanghai 200444, China
- Shizhao Ji
- Department of Burn Surgery, the First Affiliated Hospital of Naval Medical University, Shanghai 200444, China
10. Dai M, Xiao G, Shao M, Zhang YS. The Synergy between Deep Learning and Organs-on-Chips for High-Throughput Drug Screening: A Review. Biosensors 2023; 13:389. PMID: 36979601. PMCID: PMC10046732. DOI: 10.3390/bios13030389.
Abstract
Organs-on-chips (OoCs) are miniature microfluidic systems that have arguably become a class of advanced in vitro models. Deep learning, an emerging topic in machine learning, has the ability to extract hidden statistical relationships from input data. Recently, these two areas have been integrated to achieve synergy in accelerating drug screening. This review briefly describes the basic concepts of deep learning used in OoCs and exemplifies successful use cases for different types of OoCs. These microfluidic chips have the potential to be assembled into highly potent human-on-chips with complex physiological or pathological functions. Finally, we discuss future perspectives and potential challenges in combining OoCs and deep learning for image processing and automation design.
Affiliation(s)
- Manna Dai
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350108, China
- Computing and Intelligence Department, Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), 1 Fusionopolis Way, #16-16 Connexis, Singapore 138632, Singapore
- Gao Xiao
- College of Environment and Safety Engineering, Fuzhou University, Fuzhou 350108, China
- Department of Biomedical Engineering, Tsinghua University, Beijing 100084, China
- Ming Shao
- Department of Computer and Information Science, College of Engineering, University of Massachusetts Dartmouth, North Dartmouth, MA 02747, USA
- Yu Shrike Zhang
- Division of Engineering in Medicine, Department of Medicine, Brigham and Women’s Hospital, Harvard Medical School, Cambridge, MA 02139, USA
11. Li M, Li P, Liu Y. IDEFE algorithm: IDE algorithm optimizes the fuzzy entropy for the gland segmentation. Mathematical Biosciences and Engineering 2023; 20:4896-4911. PMID: 36896528. DOI: 10.3934/mbe.2023227.
Abstract
Breast cancer occurs in the epithelial tissue of the gland, so the accuracy of gland segmentation is crucial to the physician's diagnosis. This paper puts forth an innovative technique for gland segmentation in breast mammography images. First, the algorithm designs a gland segmentation evaluation function. Then a new mutation strategy is established, and adaptively controlled variables are used to balance the exploration and convergence abilities of the improved differential evolution (IDE). To evaluate its performance, the proposed method is validated on a number of benchmark breast images, including four types of glands from the Quanzhou First Hospital, Fujian, China, and is systematically compared to five state-of-the-art algorithms. The average MSSIM and boxplots suggest that the mutation strategy is effective in searching the landscape of the gland segmentation problem. The experimental results demonstrate that the proposed method yields the best gland segmentation results among the compared algorithms.
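For context, the textbook DE/rand/1 mutation that improved-DE variants build on can be sketched as follows. This is the classic baseline, not the paper's new mutation strategy or its adaptive control parameters; the population here is a hypothetical set of threshold vectors.

```python
import numpy as np

def de_mutation(pop, f_scale, rng):
    """Classic DE/rand/1 mutation: for each target index i, pick three
    distinct other members and form v_i = x_r1 + F * (x_r2 - x_r3)."""
    n = len(pop)
    mutants = np.empty_like(pop)
    for i in range(n):
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i],
                                size=3, replace=False)
        mutants[i] = pop[r1] + f_scale * (pop[r2] - pop[r3])
    return mutants

rng = np.random.default_rng(42)
population = rng.uniform(0.0, 255.0, size=(6, 2))  # e.g. candidate threshold pairs
mutants = de_mutation(population, 0.5, rng)
```

In a fuzzy-entropy segmentation setting, each population member encodes candidate thresholds and the evaluation function scores the resulting segmentation.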
Affiliation(s)
- Mingzhu Li
- Department of Thyroid and Breast Surgery, East Branch of Quanzhou First Hospital, Fujian 362000, China
- Ping Li
- Quanzhou First Hospital Affiliated to Fujian Medical University, Quanzhou, Fujian 362000, China
- Yao Liu
- College of Medicine, Huaqiao University, Quanzhou, Fujian 362021, China
12. Estimation of Caenorhabditis Elegans Lifespan Stages Using a Dual-Path Network Combining Biomarkers and Physiological Changes. Bioengineering (Basel) 2022; 9:689. PMID: 36421090. PMCID: PMC9687340. DOI: 10.3390/bioengineering9110689.
Abstract
Assessing individual aging has always been an important topic in aging research. Caenorhabditis elegans (C. elegans) has a short lifespan and is a popular model organism widely utilized in aging research. Studying the differences in C. elegans life stages is of great significance for human health and aging. In order to study the differences in C. elegans lifespan stages, the classification of lifespan stages is the first task to be performed. In the past, biomarkers and physiological changes captured with imaging were commonly used to assess aging in isogenic C. elegans individuals. However, all of the current research has focused only on physiological changes or biomarkers for the assessment of aging, which affects the accuracy of assessment. In this paper, we combine two types of features for the assessment of lifespan stages to improve assessment accuracy. To fuse the two types of features, an improved high-efficiency network (Att-EfficientNet) is proposed. In the new EfficientNet, attention mechanisms are introduced so that accuracy can be further improved. In addition, in contrast to previous research, which divided the lifespan into three stages, we divide the lifespan into six stages. We compared the classification method with other CNN-based methods as well as other classic machine learning methods. The results indicate that the classification method has a higher accuracy rate (72%) than other CNN-based methods and some machine learning methods.
13
Wang Y, Hargreaves CA. A Review Study of the Deep Learning Techniques used for the Classification of Chest Radiological Images for COVID-19 Diagnosis. INTERNATIONAL JOURNAL OF INFORMATION MANAGEMENT DATA INSIGHTS 2022. [PMCID: PMC9294035 DOI: 10.1016/j.jjimei.2022.100100] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Indexed: 12/02/2022]
Abstract
In the fight against COVID-19, the immediate and accurate screening of infected patients is of great significance. Chest X-Ray (CXR) and Computed Tomography (CT) screening play an important role in the diagnosis of COVID-19. Studies have shown that changes occur in chest radiological images before the onset of COVID-19 symptoms in some patients, and that the symptoms of COVID-19 and other lung diseases can be similar in their very early stages. Further, it is crucial to distinguish COVID-19 patients from healthy people and from patients with other lung diseases as soon as possible, since an inaccurate diagnosis may expose more people to the coronavirus. Many researchers have developed end-to-end deep learning techniques for the classification of COVID-19 patients without manual feature engineering. In this paper, we review the different deep learning techniques that have been used to analyze chest X-ray and computed tomography scans to classify patients with COVID-19. In addition, we summarize the common public datasets, challenges, limitations, and possible future work. This review confirms that (1) deep learning models are effective in classifying chest X-ray images provided the training dataset is sufficiently large; (2) data augmentation and generative adversarial networks (GANs) help address the small-training-dataset problem; (3) transfer learning methods greatly enhance the extraction and selection of features important for chest image classification; and (4) hyperparameter tuning is valuable for increasing deep learning model accuracies, generally to more than 97%. Our review helps new researchers identify the gaps and opportunities for further or new research.
14
Kumar S, Mallik A. COVID-19 Detection from Chest X-rays Using Trained Output Based Transfer Learning Approach. Neural Process Lett 2022; 55:1-24. [PMID: 36339644 PMCID: PMC9616430 DOI: 10.1007/s11063-022-11060-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 10/16/2022] [Indexed: 10/31/2022]
Abstract
The recent Coronavirus disease (COVID-19), which started in 2019, has spread across the globe and become a global pandemic. Efficient and effective COVID-19 detection using chest X-rays helps in early detection and in curtailing the spread of the disease. In this paper, we propose a novel Trained Output-based Transfer Learning (TOTL) approach for COVID-19 detection from chest X-rays. We start by preprocessing the chest X-rays of the patients with techniques like denoising, contrast enhancement, and segmentation. These processed images are then fed to several pre-trained transfer learning models like InceptionV3, InceptionResNetV2, Xception, MobileNet, ResNet50, ResNet50V2, VGG16, and VGG19. We fine-tune these models on the processed chest X-rays. We then further train the outputs of these models using a deep neural network architecture to achieve enhanced performance and aggregate the capabilities of each of them. The proposed model has been tested on four recent COVID-19 chest X-ray datasets by computing several popular evaluation metrics. The performance of our model has also been compared with various deep transfer learning models and several contemporary COVID-19 detection methods. The obtained results demonstrate the efficiency and efficacy of our proposed model.
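The meta-network architecture of TOTL is not specified in the abstract; the "train the outputs" idea is essentially stacking, which can be sketched as follows, with a single logistic unit standing in for the deeper meta-network and all weights purely hypothetical:

```python
import math

def meta_combine(base_outputs, weights, bias):
    """Stacking sketch: concatenate the class-probability outputs of the
    fine-tuned base models into one feature vector, then pass it through
    a logistic unit that stands in for the deeper meta-network."""
    features = [p for output in base_outputs for p in output]
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # fused probability of COVID-19
```

With eight base models each emitting a [p_covid, p_normal] pair, the meta-network would see a 16-dimensional input; the real TOTL trains this combiner jointly rather than fixing its weights.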
Affiliation(s)
- Sanjay Kumar
- Department of Computer Science and Engineering, Delhi Technological University, New Delhi, 110042 India
- Abhishek Mallik
- Department of Computer Science and Engineering, Delhi Technological University, New Delhi, 110042 India

15
Connell M, Xin Y, Gerard SE, Herrmann J, Shah PK, Martin KT, Rezoagli E, Ippolito D, Rajaei J, Baron R, Delvecchio P, Humayun S, Rizi RR, Bellani G, Cereda M. Unsupervised segmentation and quantification of COVID-19 lesions on computed Tomography scans using CycleGAN. Methods 2022; 205:200-209. [PMID: 35817338 PMCID: PMC9288584 DOI: 10.1016/j.ymeth.2022.07.007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/05/2021] [Revised: 05/18/2022] [Accepted: 07/06/2022] [Indexed: 01/04/2023] Open
Abstract
BACKGROUND Lesion segmentation is a critical step in medical image analysis, and methods to identify pathology without time-intensive manual labeling of data are of utmost importance during a pandemic and in resource-constrained healthcare settings. Here, we describe a method for fully automated segmentation and quantification of pathological COVID-19 lung tissue on chest Computed Tomography (CT) scans without the need for manually segmented training data. METHODS We trained a cycle-consistent generative adversarial network (CycleGAN) to convert images of COVID-19 scans into their generated healthy equivalents. Subtraction of the generated healthy images from their corresponding original CT scans yielded maps of pathological tissue, without background lung parenchyma, fissures, airways, or vessels. We then used these maps to construct three-dimensional lesion segmentations. Using a validation dataset, Dice scores were computed for our lesion segmentations and other published segmentation networks using ground truth segmentations reviewed by radiologists. RESULTS The COVID-to-Healthy generator eliminated high Hounsfield unit (HU) voxels within pulmonary lesions and replaced them with lower HU voxels. The generator did not distort normal anatomy such as vessels, airways, or fissures. The generated healthy images had higher gas content (2.45 ± 0.93 vs 3.01 ± 0.84 L, P < 0.001) and lower tissue density (1.27 ± 0.40 vs 0.73 ± 0.29 Kg, P < 0.001) than their corresponding original COVID-19 images, and they were not significantly different from those of the healthy images (P < 0.001). Using the validation dataset, lesion segmentations scored an average Dice score of 55.9, comparable to other weakly supervised networks that do require manual segmentations. CONCLUSION Our CycleGAN model successfully segmented pulmonary lesions in mild and severe COVID-19 cases. 
Our model's performance was comparable to other published models; however, our model is unique in its ability to segment lesions without the need for manual segmentations.
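The subtraction step described in METHODS is simple enough to illustrate directly; a minimal sketch on a toy 2×2 slice of HU values (the 100 HU threshold is an assumed value, not one reported by the paper):

```python
def lesion_mask(original_hu, generated_healthy_hu, threshold=100):
    """Subtract the generated 'healthy' slice from the original COVID-19
    slice; voxels whose HU value drops by more than `threshold` are
    marked as pathological tissue (1), everything else as background (0)."""
    return [[1 if (orig - healthy) > threshold else 0
             for orig, healthy in zip(row_o, row_h)]
            for row_o, row_h in zip(original_hu, generated_healthy_hu)]
```

Because vessels, airways, and fissures are reproduced largely unchanged by the generator, they cancel in the subtraction and only lesion voxels survive the threshold.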
Affiliation(s)
- Marc Connell
- Department of Anesthesiology and Critical Care, University of Pennsylvania, Philadelphia, PA, USA
- Yi Xin
- Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- Sarah E Gerard
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- Jacob Herrmann
- Department of Biomedical Engineering, Boston University, Boston, MA, USA
- Parth K Shah
- Department of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Kevin T Martin
- Department of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Emanuele Rezoagli
- Department of Emergency and Intensive Care, San Gerardo Hospital, Monza, Italy; Department of Medicine and Surgery, University of Milano-Bicocca, Monza, Italy
- Davide Ippolito
- Department of Diagnostic and Interventional Radiology, San Gerardo Hospital, Monza, Italy
- Jennia Rajaei
- Department of Medicine, Stanford University, Stanford, CA, USA
- Ryan Baron
- Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- Paolo Delvecchio
- Department of Anesthesiology and Critical Care, University of Pennsylvania, Philadelphia, PA, USA
- Shiraz Humayun
- Department of Anesthesiology and Critical Care, University of Pennsylvania, Philadelphia, PA, USA
- Rahim R Rizi
- Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- Giacomo Bellani
- Department of Medicine and Surgery, University of Milano-Bicocca, Monza, Italy
- Maurizio Cereda
- Department of Anesthesiology and Critical Care, University of Pennsylvania, Philadelphia, PA, USA.

16
Jiang S, Li J, Hua Z. Transformer with progressive sampling for medical cellular image segmentation. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2022; 19:12104-12126. [PMID: 36653988 DOI: 10.3934/mbe.2022563] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/17/2023]
Abstract
The convolutional neural network, as the backbone network for medical image segmentation, has shown good performance in past years. However, its drawbacks cannot be ignored: convolutional neural networks focus on local regions and struggle to model global contextual information. For this reason, the transformer, originally used for text processing, was introduced into the field of medical segmentation, and thanks to its strength in modelling global relationships, segmentation accuracy was further improved. However, transformer-based network structures require a certain training set size to achieve satisfactory segmentation results, and most medical segmentation datasets are small. Therefore, in this paper we introduce a gated position-sensitive axial attention mechanism into the self-attention module, so that the transformer-based network structure can also be adapted to small datasets. When handling segmentation tasks, the common practice of vision transformers is to divide the input image into equal-sized patches and process each patch separately. This simple division may destroy the structure of the original image, and the resulting grid may contain large unimportant regions that cause attention to dwell on uninteresting areas, degrading segmentation performance. We therefore add iterative sampling to update the sampling positions, so that attention stays on the region to be segmented, reducing the interference of irrelevant regions and further improving segmentation performance. In addition, we introduce a strip convolution module (SCM) and a pyramid pooling module (PPM) to capture global contextual information. The proposed network is evaluated on several datasets and shows improved segmentation accuracy compared to networks of recent years.
Affiliation(s)
- Shen Jiang
- School of Computer Science and Technology, Shandong Technology and Business University, Yantai 264005, China
- Jinjiang Li
- School of Computer Science and Technology, Shandong Technology and Business University, Yantai 264005, China
- Zhen Hua
- School of Information and Electronic Engineering, Shandong Technology and Business University, Yantai 264005, China

17
Li H, Liu X, Jia D, Chen Y, Hou P, Li H. Research on chest radiography recognition model based on deep learning. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2022; 19:11768-11781. [PMID: 36124613 DOI: 10.3934/mbe.2022548] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
With the development of medical informatization and against the background of a global epidemic, demand from medical personnel and patients for automated chest X-ray interpretation continues to increase. Although the rapid development of deep learning technology has made it possible to automatically generate a single conclusive sentence, the results produced by existing methods are not reliable enough due to the complexity of medical images. To solve this problem, this paper proposes an improved RCLN (Recurrent Learning Network) model as a solution. The model can generate high-level conclusive impressions and detailed descriptive findings sentence by sentence, imitating the doctor's standard tone, by combining a convolutional neural network (CNN) with a long short-term memory (LSTM) network through a recurrent structure and adding a multi-head attention mechanism. The proposed algorithm has been experimentally verified on publicly available chest X-ray images from the Open-i image set. The results show that it can effectively solve the problem of automatic generation of colloquial medical reports.
Affiliation(s)
- Hui Li
- School of Computer Engineering, Jiangsu Ocean University, China
- Xintang Liu
- School of Computer Engineering, Jiangsu Ocean University, China
- Dongbao Jia
- School of Computer Engineering, Jiangsu Ocean University, China
- Yanyan Chen
- School of Computer Engineering, Jiangsu Ocean University, China
- Pengfei Hou
- School of Computer Engineering, Jiangsu Ocean University, China
- Haining Li
- Department of Neurology, General Hospital of Ningxia Medical University, China

18
Zou K, Tao T, Yuan X, Shen X, Lai W, Long H. An interactive dual-branch network for hard palate segmentation of the oral cavity from CBCT images. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.109549] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
19
Sun J, Pi P, Tang C, Wang SH, Zhang YD. TSRNet: Diagnosis of COVID-19 based on self-supervised learning and hybrid ensemble model. Comput Biol Med 2022; 146:105531. [PMID: 35489140 PMCID: PMC9013277 DOI: 10.1016/j.compbiomed.2022.105531] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2021] [Revised: 03/12/2022] [Accepted: 04/13/2022] [Indexed: 12/16/2022]
Abstract
BACKGROUND As of Feb 27, 2022, coronavirus (COVID-19) has caused 434,888,591 infections and 5,958,849 deaths worldwide, dealing a severe blow to the economies and cultures of most countries around the world. As the virus has mutated, its infectious capacity has further increased. Effective diagnosis of suspected cases is an important tool to stop the spread of the pandemic. Therefore, we intended to develop a computer-aided diagnosis system for the diagnosis of suspected cases. METHODS To address the shortcomings of commonly used pre-training methods and exploit the information in unlabeled images, we proposed a new pre-training method based on transfer learning with self-supervised learning (TS). After that, a new convolutional neural network based on attention mechanism and deep residual network (RANet) was proposed to extract features. Based on this, a hybrid ensemble model (TSRNet) was proposed for classifying lung CT images of suspected patients as COVID-19 and normal. RESULTS Compared with the existing five models in terms of accuracy (DarkCOVIDNet: 98.08%; Deep-COVID: 97.58%; NAGNN: 97.86%; COVID-ResNet: 97.78%; Patch-based CNN: 88.90%), TSRNet has the highest accuracy of 99.80%. In addition, the recall, f1-score, and AUC of the model reached 99.59%, 99.78%, and 1, respectively. CONCLUSION TSRNet can effectively diagnose suspected COVID-19 cases with the help of the information in unlabeled and labeled images, thus helping physicians to adopt early treatment plans for confirmed cases.
Affiliation(s)
- Junding Sun
- School of Computer Science and Technology, Henan Polytechnic University, Jiaozuo, Henan, 454000, PR China.
- Pengpeng Pi
- School of Computer Science and Technology, Henan Polytechnic University, Jiaozuo, Henan, 454000, PR China.
- Chaosheng Tang
- School of Computer Science and Technology, Henan Polytechnic University, Jiaozuo, Henan, 454000, PR China.
- Shui-Hua Wang
- School of Computer Science and Technology, Henan Polytechnic University, Jiaozuo, Henan, 454000, PR China; School of Computing and Mathematical Sciences, University of Leicester, Leicester, LE1 7RH, UK.
- Yu-Dong Zhang
- School of Computer Science and Technology, Henan Polytechnic University, Jiaozuo, Henan, 454000, PR China; School of Computing and Mathematical Sciences, University of Leicester, Leicester, LE1 7RH, UK.

20
Zhang G, Gao X, Zhu Z, Zhou F, Yu D. Determination of the location of the needle entry point based on an improved pruning algorithm. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2022; 19:7952-7977. [PMID: 35801452 DOI: 10.3934/mbe.2022372] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Since the emergence of the novel coronavirus and its variants, a large amount of medical resources around the world has been devoted to treatment. Against this background, the purpose of this article is to develop an intelligent dorsal-hand intravenous injection robot, which reduces direct contact between medical staff and patients and thereby the risk of infection. The core technologies of such a robot are the detection and segmentation of dorsal hand veins and the decision on the needle entry point location. For the former, this paper proposes an image processing algorithm based on an improved U-Net with an attention mechanism (AT-U-Net). It is investigated using a self-built dorsal hand vein database, and the results show that it performs well, with an F1-score of 93.91%. After vein detection, this paper proposes a location decision method for the needle entry point based on an improved pruning algorithm (PT-Pruning), through which the trunk line of the dorsal hand vein is extracted. Considering the vascular cross-sectional area and the bending of each candidate injection area, the optimal injection point of the dorsal hand vein is obtained via a comprehensive decision-making process. Using a self-built dorsal hand vein injection point database, the accuracy of detection of the effective injection area reaches 96.73%. The accuracy for detection of the injection area at the optimal needle entry point is 96.50%, which lays a foundation for subsequent mechanical automatic injection.
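PT-Pruning itself is not described in the abstract; as a rough illustration of trunk-line extraction by pruning, here is generic morphological pruning on a skeleton graph (adjacency sets over node ids): k passes remove side branches shorter than k steps, at the cost of slightly eroding the trunk's open ends.

```python
def prune(adjacency, k):
    """Repeat k passes; each pass deletes every current branch tip
    (node with at most one neighbour), so short side branches vanish
    while the long trunk of the vein skeleton survives."""
    adj = {v: set(ns) for v, ns in adjacency.items()}
    for _ in range(k):
        tips = [v for v, ns in adj.items() if len(ns) <= 1]
        if not tips:
            break
        for v in tips:
            for n in adj.pop(v, set()):
                if n in adj:
                    adj[n].discard(v)
    return adj
```

The paper's PT-Pruning presumably adds vessel-specific criteria (cross-sectional area, curvature) on top of this kind of topological cleanup.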
Affiliation(s)
- Guangyuan Zhang
- School of Information Science and Electrical Engineering, Shan Dong Jiao Tong University, Jinan 250000, China
- Xiaonan Gao
- School of Information Science and Electrical Engineering, Shan Dong Jiao Tong University, Jinan 250000, China
- Zhenfang Zhu
- School of Information Science and Electrical Engineering, Shan Dong Jiao Tong University, Jinan 250000, China
- Fengyv Zhou
- School of Control Science and Engineering, Shandong University, Jinan 250000, China
- Dexin Yu
- Department of Radiology, Qilu Hospital of Shandong University, Jinan 250000, China

21
A Comprehensive Performance Analysis of Transfer Learning Optimization in Visual Field Defect Classification. Diagnostics (Basel) 2022; 12:diagnostics12051258. [PMID: 35626413 PMCID: PMC9140208 DOI: 10.3390/diagnostics12051258] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2022] [Revised: 05/16/2022] [Accepted: 05/17/2022] [Indexed: 02/05/2023] Open
Abstract
Numerous studies have demonstrated that Convolutional Neural Network (CNN) models are capable of classifying visual field (VF) defects with great accuracy. In this study, we evaluated the performance of different pre-trained models (VGG-Net, MobileNet, ResNet, and DenseNet) in classifying VF defects and produced a comprehensive comparative analysis of the models before and after hyperparameter tuning and fine-tuning. Using a batch size of 32, 50 epochs, and ADAM as the optimizer for weight, bias, and learning rate, VGG-16 obtained the highest accuracy of 97.63 percent, according to the experimental findings. Subsequently, Bayesian optimization was utilized to perform automated hyperparameter tuning and automated fine-tuning of the pre-trained models' layers, in order to determine the optimal hyperparameters and fine-tuning depth for classifying VF defects with the highest accuracy. We found that the combination of hyperparameters and the fine-tuning of the pre-trained models significantly impact the performance of deep learning models on this classification task. In addition, we discovered that the automated selection of optimal hyperparameters and fine-tuning by Bayesian optimization significantly enhanced the performance of the pre-trained models. The best performance was observed for the DenseNet-121 model, with a validation accuracy of 98.46% and a test accuracy of 99.57% on the tested datasets.
22
Guo P, Li L, Li C, Huang W, Zhao G, Wang S, Wang M, Lin Y. Multiparametric Magnetic Resonance Imaging Information Fusion Using Graph Convolutional Network for Glioma Grading. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2022:7315665. [PMID: 35591941 PMCID: PMC9113909 DOI: 10.1155/2022/7315665] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/12/2021] [Revised: 03/31/2022] [Accepted: 04/01/2022] [Indexed: 11/17/2022]
Abstract
Accurate preoperative glioma grading is essential for clinical decision-making and prognostic evaluation. Multiparametric magnetic resonance imaging (mpMRI) serves as an important diagnostic tool for glioma patients due to its superior performance in describing noninvasively the contextual information in tumor tissues. Previous studies achieved promising glioma grading results with mpMRI data utilizing a convolutional neural network (CNN)-based method. However, these studies have not fully exploited and effectively fused the rich tumor contextual information provided in the magnetic resonance (MR) images acquired with different imaging parameters. In this paper, a novel graph convolutional network (GCN)-based mpMRI information fusion module (named MMIF-GCN) is proposed to comprehensively fuse the tumor grading relevant information in mpMRI. Specifically, a graph is constructed according to the characteristics of mpMRI data. The vertices are defined as the glioma grading features of different slices extracted by the CNN, and the edges reflect the distances between the slices in a 3D volume. The proposed method updates the information in each vertex considering the interaction between adjacent vertices. The final glioma grading is conducted by combining the fused information in all vertices. The proposed MMIF-GCN module can introduce an additional nonlinear representation learning step in the process of mpMRI information fusion while maintaining the positional relationship between adjacent slices. Experiments were conducted on two datasets, that is, a public dataset (named BraTS2020) and a private one (named GliomaHPPH2018). The results indicate that the proposed method can effectively fuse the grading information provided in mpMRI data for better glioma grading performance.
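The abstract describes the graph concretely enough to sketch one propagation step. In the sketch below, the edge weights derived from inter-slice distance are simplified to an unweighted mean over each vertex and its neighbours, and the learned weight matrix and nonlinearity of a real GCN layer are omitted:

```python
def gcn_propagate(slice_features, edges):
    """One message-passing step: each slice's grading feature vector is
    replaced by the mean over itself and its adjacent slices, so
    information flows between neighbouring slices of the 3D volume."""
    n = len(slice_features)
    neighbors = [{i} for i in range(n)]  # self-loops keep each vertex's own info
    for i, j in edges:
        neighbors[i].add(j)
        neighbors[j].add(i)
    dim = len(slice_features[0])
    return [[sum(slice_features[j][d] for j in neighbors[i]) / len(neighbors[i])
             for d in range(dim)]
            for i in range(n)]
```

Stacking several such steps, with CNN-extracted features as the initial vertices, is the nonlinear fusion the MMIF-GCN module performs before the final grading decision.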
Affiliation(s)
- Peiying Guo
- School of Computer and Artificial Intelligence, Zhengzhou University, Zhengzhou 450001, China
- Collaborative Innovation Center for Internet Healthcare, Zhengzhou University, Zhengzhou 450052, China
- Longfei Li
- School of Computer and Artificial Intelligence, Zhengzhou University, Zhengzhou 450001, China
- Collaborative Innovation Center for Internet Healthcare, Zhengzhou University, Zhengzhou 450052, China
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518000, China
- Cheng Li
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518000, China
- Weijian Huang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518000, China
- Guohua Zhao
- Collaborative Innovation Center for Internet Healthcare, Zhengzhou University, Zhengzhou 450052, China
- Shanshan Wang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518000, China
- Meiyun Wang
- Collaborative Innovation Center for Internet Healthcare, Zhengzhou University, Zhengzhou 450052, China
- Department of Radiology, Henan Provincial People's Hospital, Zhengzhou 450003, China
- Yusong Lin
- Collaborative Innovation Center for Internet Healthcare, Zhengzhou University, Zhengzhou 450052, China
- School of Cyber Science and Engineering, Zhengzhou University, Zhengzhou 450002, China
- Hanwei IoT Institute, Zhengzhou University, Zhengzhou 450002, China

23
COV-DLS: Prediction of COVID-19 from X-Rays Using Enhanced Deep Transfer Learning Techniques. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2022:6216273. [PMID: 35422979 PMCID: PMC9002900 DOI: 10.1155/2022/6216273] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/22/2021] [Accepted: 02/11/2022] [Indexed: 12/12/2022]
Abstract
In this paper, modifications in neoteric architectures such as VGG16, VGG19, ResNet50, and InceptionV3 are proposed for the classification of COVID-19 using chest X-rays. The proposed architectures termed "COV-DLS" consist of two phases: heading model construction and classification. The heading model construction phase utilizes four modified deep learning architectures, namely Modified-VGG16, Modified-VGG19, Modified-ResNet50, and Modified-InceptionV3. An attempt is made to modify these neoteric architectures by incorporating the average pooling and dense layers. The dropout layer is also added to prevent the overfitting problem. Two dense layers with different activation functions are also added. Thereafter, the output of these modified models is applied during the classification phase, when COV-DLS are applied on a COVID-19 chest X-ray image data set. Classification accuracy of 98.61% is achieved by Modified-VGG16, 97.22% by Modified-VGG19, 95.13% by Modified-ResNet50, and 99.31% by Modified-InceptionV3. COV-DLS outperforms existing deep learning models in terms of accuracy and F1-score.
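The modification shared by the four COV-DLS heads, average pooling ahead of the new dense layers, is simple to illustrate; below is a plain-Python global average pooling over C×H×W feature maps (the dropout and dense layers that follow it in the actual models are omitted):

```python
def global_average_pool(feature_maps):
    """Collapse each H x W channel map to a single scalar (its mean),
    producing the compact vector the added dense layers consume."""
    return [sum(sum(row) for row in channel) /
            (len(channel) * len(channel[0]))
            for channel in feature_maps]
```

Pooling this way replaces the large flattened feature tensor with one value per channel, which is what lets the replacement heads stay small enough to train on a modest X-ray dataset.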
24
Punn NS, Agarwal S. CHS-Net: A Deep Learning Approach for Hierarchical Segmentation of COVID-19 via CT Images. Neural Process Lett 2022; 54:3771-3792. [PMID: 35310011 PMCID: PMC8924740 DOI: 10.1007/s11063-022-10785-x] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/24/2022] [Indexed: 01/19/2023]
Abstract
The pandemic of the novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), also known as COVID-19, has been spreading worldwide, causing rampant loss of lives. Medical imaging such as computed tomography (CT) and X-ray plays a significant role in diagnosing patients by presenting a visual representation of the functioning of the organs. However, for a radiologist, analyzing such scans is a tedious and time-consuming task. Emerging deep learning technologies have displayed their strength in analyzing such scans to aid in faster diagnosis of diseases and viruses such as COVID-19. In the present article, an automated deep learning based model, the COVID-19 hierarchical segmentation network (CHS-Net), is proposed that functions as a semantic hierarchical segmenter to identify the COVID-19 infected regions within the lung contour in CT imaging, using two cascaded residual attention inception U-Net (RAIU-Net) models. RAIU-Net comprises a residual inception U-Net model with a spectral spatial and depth attention network (SSD), developed with contraction and expansion phases of depthwise separable convolutions and hybrid pooling (max and spectral pooling) to efficiently encode and decode semantic and varying-resolution information. CHS-Net is trained with a segmentation loss function defined as the average of binary cross-entropy loss and Dice loss, to penalize false negative and false positive predictions. The approach is compared with recently proposed approaches and evaluated using standard metrics (accuracy, precision, specificity, recall, Dice coefficient and Jaccard similarity) along with visualized interpretation of the model predictions with GradCam++ and uncertainty maps. Extensive trials show that the proposed approach outperforms the recently proposed approaches and effectively segments the COVID-19 infected regions in the lungs.
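The stated supervision signal, the average of binary cross-entropy and Dice loss, can be written out directly; a minimal per-pixel sketch over flattened predictions (epsilon added for numerical stability, value assumed):

```python
import math

def bce_dice_loss(pred, target, eps=1e-7):
    """Average of pixel-wise binary cross-entropy and soft Dice loss;
    BCE penalizes each wrong pixel, Dice penalizes poor overlap, so
    together they discourage both false positives and false negatives."""
    n = len(pred)
    bce = -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
               for p, t in zip(pred, target)) / n
    intersection = sum(p * t for p, t in zip(pred, target))
    dice = 1 - (2 * intersection + eps) / (sum(pred) + sum(target) + eps)
    return 0.5 * (bce + dice)
```

In a real training loop this would be computed over tensors of predicted probabilities and binary masks rather than flat Python lists.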
25
Zhang X, Wang G, Zhao SG. CapsNet-COVID19: Lung CT image classification method based on CapsNet model. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2022; 19:5055-5074. [PMID: 35430853 DOI: 10.3934/mbe.2022236] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
The outbreak of the Corona Virus Disease 2019 (COVID-19) has posed a serious threat to human health and life around the world. As the number of COVID-19 cases continues to increase, many countries are facing problems such as errors in nucleic acid testing (RT-PCR), shortages of testing reagents, and a lack of testing personnel. To solve such problems, it is necessary to propose a more accurate and efficient method as a supplement to the detection and diagnosis of COVID-19. This research uses a deep network model to classify COVID-19, general pneumonia, and normal lung CT images from the 2019 Novel Coronavirus Information Database. The first level of the model uses convolutional neural networks to locate lung regions in lung CT images. The second level uses a capsule network to classify and predict the segmented images. The accuracy of our method is 84.291% on the test set and 100% on the training set. Experiments show that our classification method is suitable for medical image classification with complex backgrounds, low recognition rates, blurred boundaries and large image noise. We believe that this classification method is of great value for monitoring and controlling the growth of patient numbers in COVID-19 infected areas.
Affiliation(s)
- XiaoQing Zhang
- Nanjing University of Science and Technology, Taizhou Technology Institute, Taizhou 225300, China
- GuangYu Wang
- Donghua University, College of Information Science and Technology, Shanghai 201620, China
- Shu-Guang Zhao
- Donghua University, College of Information Science and Technology, Shanghai 201620, China