1
Cui Y, Ji S, Zha Y, Zhou X, Zhang Y, Zhou T. An Automatic Method for Elbow Joint Recognition, Segmentation and Reconstruction. Sensors (Basel) 2024; 24:4330. [PMID: 39001109] [PMCID: PMC11244199] [DOI: 10.3390/s24134330]
Abstract
Elbow computerized tomography (CT) scans have been widely applied for describing elbow morphology. To enhance the objectivity and efficiency of clinical diagnosis, an automatic method to recognize, segment, and reconstruct elbow joint bones is proposed in this study. The method involves the following steps: initially, the humerus, ulna, and radius are automatically recognized based on the anatomical features of the elbow joint, and prompt boxes are generated. Subsequently, an elbow MedSAM is obtained through transfer learning, which accurately segments the CT images by integrating the prompt boxes. After that, hole-filling and object-reclassification steps are executed to refine the mask. Finally, three-dimensional (3D) reconstruction is performed using the marching cubes algorithm. To validate the reliability and accuracy of the method, the segmentation results were compared with masks labeled by senior surgeons. Quantitative evaluation revealed median intersection over union (IoU) values of 0.963, 0.959, and 0.950 for the humerus, ulna, and radius, respectively, and reconstructed surface errors of 1.127, 1.523, and 2.062 mm. Consequently, the automatic elbow reconstruction method demonstrates promising capabilities for clinical diagnosis, preoperative planning, and intraoperative navigation in elbow joint diseases.
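The median IoU values reported above can be computed directly from binary masks; a minimal NumPy sketch (the toy masks are illustrative, not from the study):

```python
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over union of two boolean masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, gt).sum() / union

# toy 2D example: 4-pixel mask vs. 6-pixel mask with 4 pixels of overlap
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True
print(iou(a, b))  # 4 / 6 ≈ 0.667
```

In a volumetric study like the one above, the same function applies unchanged to 3D masks, and the per-bone median is taken over cases.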
Affiliation(s)
- Ying Cui: School of Mechanical Engineering, Beijing Institute of Technology, Beijing 100081, China; School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Shangwei Ji: Department of Orthopedic Trauma, Beijing Jishuitan Hospital, Beijing 100035, China
- Yejun Zha: Department of Orthopedic Trauma, Beijing Jishuitan Hospital, Beijing 100035, China
- Xinhua Zhou: Department of Orthopedics, Beijing Jishuitan Hospital, Beijing 100035, China
- Yichuan Zhang: School of Mechanical Engineering, Beijing Institute of Technology, Beijing 100081, China
- Tianfeng Zhou: School of Mechanical Engineering, Beijing Institute of Technology, Beijing 100081, China; School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China

2
Zhang Z, Han J, Ji W, Lou H, Li Z, Hu Y, Wang M, Qi B, Liu S. Improved deep learning for automatic localisation and segmentation of rectal cancer on T2-weighted MRI. J Med Radiat Sci 2024. [PMID: 38654675] [DOI: 10.1002/jmrs.794]
Abstract
INTRODUCTION: Automatic segmentation of rectal cancer from magnetic resonance imaging (MRI) is valuable for relieving physicians of heavy workloads and enhancing working efficiency. This study aimed to compare the segmentation accuracy of a proposed model with that of three other models and with inter-observer consistency. METHODS: A total of 65 patients with rectal cancer who underwent MRI examination were enrolled and randomly divided into a training cohort (n = 45) and a validation cohort (n = 20). Two experienced radiologists independently segmented rectal cancer lesions. A novel segmentation model (AttSEResUNet) was trained on T2WI based on ResUNet and attention mechanisms. The segmentation performance of AttSEResUNet, U-Net, ResUNet, and U-Net with Attention Gate (AttUNet) was compared using the Dice similarity coefficient (DSC), Hausdorff distance (HD), mean distance to agreement (MDA), and Jaccard index. The variability between automatic segmentation models and between observers was also evaluated. RESULTS: AttSEResUNet with post-processing showed a perfect lesion recognition rate (100%) and no false recognitions, and its evaluation metrics outperformed the other three models for both independent readers (observer 1: DSC = 0.839 ± 0.112, HD = 9.55 ± 6.68, MDA = 0.556 ± 0.722, Jaccard index = 0.736 ± 0.150; observer 2: DSC = 0.856 ± 0.099, HD = 11.0 ± 10.1, MDA = 0.789 ± 1.07, Jaccard index = 0.673 ± 0.130). Its segmentation performance was comparable to the inter-observer variability (DSC = 0.857 ± 0.115, HD = 10.0 ± 10.0, MDA = 0.704 ± 1.17, Jaccard index = 0.666 ± 0.139). CONCLUSION: Compared with the other three models, the proposed AttSEResUNet contoured rectal tumours in axial T2WI images more accurately, with variability similar to that between observers.
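The DSC and Jaccard index reported together above are related by the identity J = D / (2 - D) for any pair of binary masks; a small NumPy sketch (the arrays are illustrative):

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient of two boolean masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, gt).sum() / denom

def jaccard(pred, gt):
    """Jaccard index (IoU) of two boolean masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return 1.0 if union == 0 else np.logical_and(pred, gt).sum() / union

p = np.array([[1, 1, 0], [0, 1, 0]])
g = np.array([[1, 0, 0], [0, 1, 1]])
d, j = dice(p, g), jaccard(p, g)
# the identity J = D / (2 - D) holds for any pair of masks
assert abs(j - d / (2 - d)) < 1e-12
```

The identity means the two metrics rank segmentations identically; reporting both, as the study does, is a readability convention rather than independent evidence.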
Affiliation(s)
- Zaixian Zhang: Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
- Junqi Han: Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
- Weina Ji: Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
- Henan Lou: Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
- Zhiming Li: Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
- Yabin Hu: Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
- Mingjia Wang: College of Automation and Electronic Engineering, Qingdao University of Science and Technology, Qingdao, China
- Baozhu Qi: College of Automation and Electronic Engineering, Qingdao University of Science and Technology, Qingdao, China
- Shunli Liu: Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China

3
Jin X, Gao M, Li D, Zhao T. Damage detection of road domain waveform guardrail structure based on machine learning multi-module fusion. PLoS One 2024; 19:e0299116. [PMID: 38489307] [PMCID: PMC10942022] [DOI: 10.1371/journal.pone.0299116]
Abstract
Current highway waveform-guardrail recognition technology suffers from low segmentation accuracy and strong noise interference. Therefore, an improved U-net semantic segmentation model is proposed to increase the efficiency of road-maintenance detection. Model training is guided by mixed dilated convolution and a mixed loss function. Based on the segmentation results, guardrail shedding is detected from the mean gray value of part of the ROI, while the first-order detail coefficients of the wavelet transform are applied to detect guardrail defects and deformation. The mIoU and Dice of the improved model are 8.63% and 17.67% higher, respectively, than those of the traditional model, and the defect-detection accuracy exceeds 85%. Efficient detection of highway waveform guardrails shortens the detection process and improves the effectiveness of subsequent road maintenance.
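As a rough illustration of how first-order wavelet detail coefficients respond to abrupt gray-level changes such as guardrail damage, here is a level-1 Haar sketch on a synthetic 1-D intensity profile (the profile and the threshold are our illustration, not the paper's data):

```python
import numpy as np

def haar_detail(signal: np.ndarray) -> np.ndarray:
    """Level-1 Haar detail coefficients: scaled differences of adjacent sample pairs."""
    s = signal[: len(signal) // 2 * 2].reshape(-1, 2)
    return (s[:, 0] - s[:, 1]) / np.sqrt(2.0)

# flat guardrail profile with one sharp dent in the middle
profile = np.array([5.0, 5.0, 5.0, 0.0, 0.0, 5.0, 5.0, 5.0])
d = haar_detail(profile)
defect_idx = np.where(np.abs(d) > 1.0)[0]  # large |detail| flags the dent edges
```

Smooth regions produce near-zero coefficients, while the step edges of the dent produce large ones, which is the property the detection step above relies on.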
Affiliation(s)
- Xiaowei Jin: School of Energy and Transportation Engineering, Inner Mongolia Agricultural University, Hohhot, 010018, Inner Mongolia, China
- Mingxing Gao: College of Energy and Transportation Engineering, Inner Mongolia Agricultural University, Hohhot, 010018, Inner Mongolia, China
- Danlan Li: College of Energy and Transportation Engineering, Inner Mongolia Agricultural University, Hohhot, 010018, Inner Mongolia, China
- Ting Zhao: College of Energy and Transportation Engineering, Inner Mongolia Agricultural University, Hohhot, 010018, Inner Mongolia, China

4
Kang SH, Lee Y. Motion Artifact Reduction Using U-Net Model with Three-Dimensional Simulation-Based Datasets for Brain Magnetic Resonance Images. Bioengineering (Basel) 2024; 11:227. [PMID: 38534500] [DOI: 10.3390/bioengineering11030227]
Abstract
This study aimed to remove motion artifacts from brain magnetic resonance (MR) images using a U-Net model. In addition, a simulation method was proposed to increase the size of the dataset required to train the U-Net model while avoiding overfitting. The volume data were rotated and translated in three dimensions with random intensity and frequency, and the process was repeated for each slice in the volume. Then, for every slice, a portion of the motion-free k-space data was replaced with motion-corrupted k-space data. From the modified k-space data, MR images with motion artifacts and residual maps were acquired to construct the datasets. For quantitative evaluation, the root mean square error (RMSE), peak signal-to-noise ratio (PSNR), coefficient of correlation (CC), and universal image quality index (UQI) were measured. The U-Net model trained on the residual-map-based dataset showed the best performance across all evaluation factors; in particular, the RMSE, PSNR, CC, and UQI improved by approximately 5.35×, 1.51×, 1.12×, and 1.01×, respectively, compared with the direct images. In conclusion, our simulation-based dataset demonstrates that U-Net models can be effectively trained for motion artifact reduction.
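The k-space replacement step described above can be sketched with NumPy FFTs. This is a minimal illustration, not the authors' pipeline: the "motion" here is a simple circular shift standing in for the random rotations and translations, and the corrupted-row range is arbitrary:

```python
import numpy as np

def simulate_motion(img: np.ndarray, corrupted_rows: slice) -> np.ndarray:
    """Replace some phase-encode lines of k-space with lines from a moved image."""
    k_clean = np.fft.fft2(img)
    moved = np.roll(img, shift=3, axis=1)       # stand-in for rotation/translation
    k_moved = np.fft.fft2(moved)
    k_mix = k_clean.copy()
    k_mix[corrupted_rows, :] = k_moved[corrupted_rows, :]
    return np.abs(np.fft.ifft2(k_mix))          # image with simulated motion artifacts

rng = np.random.default_rng(0)
img = rng.random((32, 32))
artifact_img = simulate_motion(img, slice(8, 16))
residual = artifact_img - img                   # residual map: a candidate training target
```

Because the corruption is injected in k-space, the artifact spreads over the whole image, mimicking the ghosting pattern of real motion rather than a local distortion.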
Affiliation(s)
- Seong-Hyeon Kang: Department of Biomedical Engineering, Eulji University, Seongnam 13135, Republic of Korea
- Youngjin Lee: Department of Radiological Science, Gachon University, Incheon 21936, Republic of Korea

5
Guo X, Wang Z, Wu P, Li Y, Alsaadi FE, Zeng N. ELTS-Net: An enhanced liver tumor segmentation network with augmented receptive field and global contextual information. Comput Biol Med 2024; 169:107879. [PMID: 38142549] [DOI: 10.1016/j.compbiomed.2023.107879]
Abstract
The liver has one of the highest cancer incidence rates among human organs, and late-stage liver cancer is essentially incurable; therefore, early diagnosis and lesion localization of liver cancer are of great clinical value. This study proposes an enhanced network architecture, ELTS-Net, based on the 3D U-Net model, to address the limitations of conventional image segmentation methods and the underutilization of spatial image features by the 2D U-Net structure. ELTS-Net expands upon the original network by incorporating dilated convolutions to increase the receptive field of the convolutional kernels. Additionally, an attention residual module, comprising an attention mechanism and residual connections, replaces the original convolutional module, serving as the primary component of the encoder and decoder. This design enables the network to capture contextual information globally in both the channel and spatial dimensions. Furthermore, deep supervision modules are integrated between different levels of the decoder, providing additional feedback from deeper intermediate layers; this constrains the network weights to the target regions and optimizes the segmentation results. Evaluation on the LiTS2017 dataset shows improvements over the baseline 3D U-Net model, with 95.2% liver segmentation accuracy and 71.9% tumor segmentation accuracy, improvements of 0.9% and 3.1%, respectively. The experimental results validate the superior segmentation performance of ELTS-Net compared with other models, offering valuable guidance for clinical diagnosis and treatment.
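The receptive-field gain from stacking dilated convolutions follows the standard recurrence RF ← RF + (k − 1)·d for stride-1 layers; a quick sketch (the layer choices are illustrative, not ELTS-Net's configuration):

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field of a stack of stride-1 convolutions with dilation."""
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += (k - 1) * d  # each layer widens the field by (k-1)*d samples
    return rf

# three 3x3 layers: plain convolutions vs. dilation rates 1, 2, 4
print(receptive_field([3, 3, 3], [1, 1, 1]))  # 7
print(receptive_field([3, 3, 3], [1, 2, 4]))  # 15
```

This is the reason dilation augments the receptive field at no extra parameter cost: the same three 3×3 kernels cover more than twice the context.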
Affiliation(s)
- Xiaoyue Guo: College of Engineering, Peking University, Beijing 100871, China; Department of Instrumental and Electrical Engineering, Xiamen University, Fujian 361005, China
- Zidong Wang: Department of Computer Science, Brunel University London, Uxbridge UB8 3PH, UK
- Peishu Wu: Department of Instrumental and Electrical Engineering, Xiamen University, Fujian 361005, China
- Yurong Li: College of Electrical Engineering and Automation, Fuzhou University, Fujian 350116, China; Fujian Key Lab of Medical Instrumentation & Pharmaceutical Technology, Fujian 350116, China
- Fuad E Alsaadi: Communication Systems and Networks Research Group, Department of Electrical and Computer Engineering, Faculty of Engineering, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Nianyin Zeng: Department of Instrumental and Electrical Engineering, Xiamen University, Fujian 361005, China

6
Hu X, Cao Y, Hu W, Zhang W, Li J, Wang C, Mukhopadhyay SC, Li Y, Liu Z, Li S. Refined Feature-based Multi-frame and Multi-scale Fusing Gate network for accurate segmentation of plaques in ultrasound videos. Comput Biol Med 2023; 163:107091. [PMID: 37331099] [DOI: 10.1016/j.compbiomed.2023.107091]
Abstract
The accurate segmentation of carotid plaques in ultrasound videos provides evidence for clinicians to evaluate the properties of plaques and treat patients effectively. However, the confusing background, blurry boundaries, and plaque movement in ultrasound videos make accurate plaque segmentation challenging. To address these challenges, we propose the Refined Feature-based Multi-frame and Multi-scale Fusing Gate Network (RMFG_Net), which captures spatial and temporal features across consecutive video frames, producing high-quality segmentation results without manual annotation of the first frame. A spatial-temporal feature filter is proposed to suppress the noise of low-level CNN features and enhance detail in the target area. To obtain a more accurate plaque position, we propose a transformer-based cross-scale spatial location algorithm, which models the relationship between adjacent layers of consecutive video frames to achieve stable positioning. To make full use of both detailed and semantic information, multi-layer gated computing is applied to fuse features of different layers, ensuring sufficient aggregation of useful feature maps for segmentation. Experiments on two clinical datasets demonstrate that the proposed method outperforms other state-of-the-art methods under different evaluation metrics, and it processes images at 68 frames per second, which is suitable for real-time segmentation. Extensive ablation experiments demonstrate the effectiveness of each component and experimental setting, as well as the potential of the proposed method for ultrasound video plaque segmentation tasks. The code is publicly available at https://github.com/xifengHuu/RMFG_Net.git.
Affiliation(s)
- Xifeng Hu: School of Information Science and Engineering, Shandong University, Qingdao 266237, China
- Yankun Cao: School of Software, Shandong University, Jinan 250101, China
- Weifeng Hu: School of Information Science and Engineering, Shandong University, Qingdao 266237, China
- Wenzhen Zhang: School of Information Science and Engineering, Shandong University, Qingdao 266237, China
- Jing Li: Beijing Hospital National Geriatrics Center, No. 1 Dahua Road, Dongcheng District, Beijing 100730, China
- Chuanyu Wang: Beijing Hospital National Geriatrics Center, No. 1 Dahua Road, Dongcheng District, Beijing 100730, China
- Yujun Li: School of Information Science and Engineering, Shandong University, Qingdao 266237, China
- Zhi Liu: School of Information Science and Engineering, Shandong University, Qingdao 266237, China
- Shuo Li: Case Western Reserve University, Cleveland, OH, USA

7
Xu Z, Zhang X, Zhang H, Liu Y, Zhan Y, Lukasiewicz T. EFPN: Effective medical image detection using feature pyramid fusion enhancement. Comput Biol Med 2023; 163:107149. [PMID: 37348265] [DOI: 10.1016/j.compbiomed.2023.107149]
Abstract
Feature pyramid networks (FPNs) are widely used in existing deep detection models to help them exploit multi-scale features. However, FPN-based deep detection models face two multi-scale feature fusion problems in medical image detection tasks: insufficient fusion of multi-scale features, and equal importance assigned to features at every scale. Therefore, in this work, we propose a new enhanced backbone model, EFPN, to overcome these problems and help existing FPN-based detection models achieve much better medical image detection performance. We first introduce an additional top-down pyramid to help the detection network fuse deeper multi-scale information; then, a scale enhancement module is developed that uses kernels of different sizes to generate more diverse multi-scale features. Finally, we propose a feature fusion attention module to estimate and assign importance weights to features of different depths and scales. Extensive experiments were conducted on two public lesion detection datasets of different medical image modalities (X-ray and MRI). On the mAP and mR evaluation metrics, EFPN-based Faster R-CNNs improved by 1.55% and 4.3% on the PenD (X-ray) dataset, and by 2.74% and 3.1% on the BraTs (MRI) dataset, respectively, achieving much better performance than the state-of-the-art baselines. All three proposed improvements are essential and effective for EFPN's superior performance; moreover, beyond Faster R-CNN, EFPN can easily be applied to other deep models to significantly enhance their performance in medical image detection tasks.
Affiliation(s)
- Zhenghua Xu: State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, China
- Xudong Zhang: State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, China
- Hexiang Zhang: State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, China
- Yunxin Liu: State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, China
- Yuefu Zhan: Department of Radiology, Hainan Women and Children's Medical Center, Haikou, China
- Thomas Lukasiewicz: Institute of Logic and Computation, TU Wien, Vienna, Austria; Department of Computer Science, University of Oxford, Oxford, United Kingdom

8
Zhao Y, Wang S, Zhang Y, Qiao S, Zhang M. WRANet: wavelet integrated residual attention U-Net network for medical image segmentation. Complex Intell Syst 2023:1-13. [PMID: 37361970] [PMCID: PMC10248349] [DOI: 10.1007/s40747-023-01119-y]
Abstract
Medical image segmentation is crucial for the diagnosis and analysis of disease. Deep convolutional neural network methods have achieved great success in medical image segmentation, but they are highly susceptible to noise interference during network propagation, where even weak noise can dramatically alter the output; as networks deepen, they can also face gradient explosion and vanishing. To improve the robustness and segmentation performance of the network, we propose a wavelet residual attention network (WRANet) for medical image segmentation. We replace the standard downsampling modules in CNNs (e.g., max pooling and average pooling) with the discrete wavelet transform, decompose the features into low- and high-frequency components, and discard the high-frequency components to eliminate noise; the accompanying feature loss is effectively addressed by introducing an attention mechanism. Experimental results show that our method performs aneurysm segmentation effectively, achieving a Dice score of 78.99%, an IoU score of 68.96%, a precision of 85.21%, and a sensitivity of 80.98%. In polyp segmentation, a Dice score of 88.89%, an IoU score of 81.74%, a precision of 91.32%, and a sensitivity of 91.07% were achieved. Furthermore, comparison with state-of-the-art techniques demonstrates the competitiveness of WRANet.
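The pooling replacement described above keeps only the low-frequency (LL) band of a level-1 Haar DWT, halving each spatial dimension while averaging out high-frequency noise. A NumPy sketch of just the LL band (an assumption-level illustration, not the authors' implementation):

```python
import numpy as np

def haar_ll(x: np.ndarray) -> np.ndarray:
    """Low-low band of a level-1 2D Haar DWT: each 2x2 block reduced to one value."""
    h, w = x.shape
    x = x[: h // 2 * 2, : w // 2 * 2]          # crop to even dimensions
    blocks = x.reshape(h // 2, 2, w // 2, 2)
    return blocks.sum(axis=(1, 3)) / 2.0       # orthonormal Haar LL = 2 * block mean

img = np.arange(16, dtype=float).reshape(4, 4)
ll = haar_ll(img)   # shape (2, 2); the LH/HL/HH detail bands are simply discarded
```

Like 2×2 average pooling, this halves the resolution, but in the DWT view the discarded detail bands are exactly where pixel-level noise concentrates, which is the denoising argument made above.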
Affiliation(s)
- Yawu Zhao: School of Computer Science and Technology, China University of Petroleum, Qingdao, Shandong, China
- Shudong Wang: School of Computer Science and Technology, China University of Petroleum, Qingdao, Shandong, China
- Yulin Zhang: College of Mathematics and System Science, Shandong University of Science and Technology, Qingdao, Shandong, China
- Sibo Qiao: School of Computer Science and Technology, China University of Petroleum, Qingdao, Shandong, China
- Mufei Zhang: Inspur Cloud Information Technology Co., Inspur, Jinan, Shandong, China

9
Arul King J, Helen Sulochana C. An efficient deep neural network to segment lung nodule using optimized HDCCARUNet model. J Intell Fuzzy Syst 2023. [DOI: 10.3233/jifs-222215]
Abstract
Lung cancer is a severe disease that may lead to death if left undiagnosed and untreated. Lung cancer recognition and segmentation are difficult tasks in medical image processing, and computed tomography (CT) is an important modality for detecting abnormal tissue in the lung. The size of a nodule, as well as its fine details, varies from image to image, so radiologists face a difficult task in diagnosing nodules across multiple images. Deep learning approaches outperform traditional learning algorithms when the amount of data is large; one of the most common deep learning architectures is the convolutional neural network, which learns features using pre-trained models such as LeNet, AlexNet, GoogLeNet, VGG16, VGG19, ResNet50, and others. This study proposes an optimized HDCCARUNet (Hybrid Dilated Convolutional Channel Attention Res-UNet) architecture, which combines an improved U-Net with a modified channel attention (MCA) block and a hybrid dilated attention convolutional (HDAC) layer to perform medical image segmentation accurately and effectively. The attention mechanism aids in focusing on the desired outcome: the ability to dynamically assign input weights to neurons allows the network to focus only on the most important information. To gather key details about different object features and infer finer channel-wise attention, the proposed system uses the MCA block. Experiments were conducted on the LIDC-IDRI dataset; noise in the dataset images was removed by an enhanced DWT filter, and performance was analysed at various noise levels. The proposed method achieves an accuracy rate of 99.58%. Performance measures such as accuracy, sensitivity, specificity, and ROC curves were evaluated, and the system significantly outperforms other state-of-the-art systems.
Affiliation(s)
- J. Arul King: Department of ECE, St. Xavier’s Catholic College of Engineering, Tamilnadu, India
- C. Helen Sulochana: Department of ECE, St. Xavier’s Catholic College of Engineering, Tamilnadu, India

10
Triplet attention fusion module: A concise and efficient channel attention module for medical image segmentation. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104515]
11
Jiang L, Ou J, Liu R, Zou Y, Xie T, Xiao H, Bai T. RMAU-Net: Residual Multi-Scale Attention U-Net for liver and tumor segmentation in CT images. Comput Biol Med 2023; 158:106838. [PMID: 37030263] [DOI: 10.1016/j.compbiomed.2023.106838]
Abstract
Liver cancer is one of the leading causes of cancer-related deaths worldwide. Automatic liver and tumor segmentation is of great value in clinical practice, as it can reduce surgeons' workload and increase the probability of surgical success. Liver and tumor segmentation is challenging because of the varied sizes, shapes, and blurred boundaries of livers and lesions, and the low-intensity contrast between organs. To address the problems of fuzzy livers and small tumors, we propose a novel Residual Multi-scale Attention U-Net (RMAU-Net) for liver and tumor segmentation by introducing two modules: Res-SE-Block and MAB. The Res-SE-Block mitigates gradient vanishing through residual connections and enhances the quality of representations by explicitly modeling the interdependencies between feature channels and recalibrating them. The MAB exploits rich multi-scale feature information and simultaneously captures inter-channel and inter-spatial relationships of features. In addition, a hybrid loss function combining focal loss and Dice loss is designed to improve segmentation accuracy and speed up convergence. We evaluated the proposed method on two publicly available datasets, LiTS and 3D-IRCADb. Our method achieved better performance than the other state-of-the-art methods, with Dice scores of 0.9552 and 0.9697 for LiTS and 3D-IRCADb liver segmentation, and Dice scores of 0.7616 and 0.8307 for LiTS and 3D-IRCADb liver tumor segmentation.
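A focal-plus-Dice hybrid loss of the kind described above can be sketched for binary masks; the mixing weight and the focal γ below are illustrative choices, not the paper's values:

```python
import numpy as np

def dice_loss(p, y, eps=1e-6):
    """Soft Dice loss on predicted probabilities p and binary targets y."""
    inter = (p * y).sum()
    return 1.0 - (2.0 * inter + eps) / (p.sum() + y.sum() + eps)

def focal_loss(p, y, gamma=2.0, eps=1e-6):
    """Binary focal loss: down-weights easy, well-classified pixels."""
    p = np.clip(p, eps, 1.0 - eps)
    pt = np.where(y == 1, p, 1.0 - p)          # probability of the true class
    return float(np.mean(-((1.0 - pt) ** gamma) * np.log(pt)))

def hybrid_loss(p, y, w=0.5):
    """Weighted sum: focal handles pixel-wise imbalance, Dice handles overlap."""
    return w * focal_loss(p, y) + (1.0 - w) * dice_loss(p, y)

y = np.array([0.0, 0.0, 1.0, 1.0])
good = np.array([0.1, 0.1, 0.9, 0.9])
bad = np.array([0.9, 0.9, 0.1, 0.1])
assert hybrid_loss(good, y) < hybrid_loss(bad, y)
```

The design intuition is that Dice directly optimizes the reported overlap metric while focal loss keeps small tumors from being drowned out by the abundant background pixels.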
12
Pan L, Li Z, Shen Z, Liu Z, Huang L, Yang M, Zheng B, Zeng T, Zheng S. Learning multi-view and centerline topology connectivity information for pulmonary artery-vein separation. Comput Biol Med 2023; 155:106669. [PMID: 36803793] [DOI: 10.1016/j.compbiomed.2023.106669]
Abstract
BACKGROUND: Automatic pulmonary artery-vein separation has considerable importance in the diagnosis and treatment of lung diseases; however, insufficient connectivity and spatial inconsistency have long been problems for artery-vein separation. METHODS: A novel automatic method for artery-vein separation in CT images is presented. Specifically, a multi-scale information aggregated network (MSIA-Net), including multi-scale fusion blocks and deep supervision, is proposed to learn artery-vein features and aggregate additional semantic information. The method integrates nine MSIA-Net models for the artery-vein separation, vessel segmentation, and centerline separation tasks along axial, coronal, and sagittal multi-view slices. First, preliminary artery-vein separation results are obtained by the proposed multi-view fusion strategy (MVFS). Then, a centerline correction algorithm (CCA) corrects the preliminary results using the centerline separation results. Finally, the vessel segmentation results are utilized to reconstruct the artery-vein morphology. In addition, weighted cross-entropy and Dice losses are employed to address the class-imbalance problem. RESULTS: We constructed 50 manually labeled contrast-enhanced CT scans for five-fold cross-validation, and experimental results demonstrated that our method achieves superior separation performance of 97.7%, 85.1%, and 84.9% on ACC, Pre, and DSC, respectively. Additionally, a series of ablation studies demonstrated the effectiveness of the proposed components. CONCLUSION: The proposed method can effectively address insufficient vascular connectivity and correct the spatial inconsistency of artery-vein separation.
Affiliation(s)
- Lin Pan: College of Physics and Information Engineering, Fuzhou University, Fuzhou, China
- Zhaopei Li: College of Physics and Information Engineering, Fuzhou University, Fuzhou, China
- Zhiqiang Shen: College of Physics and Information Engineering, Fuzhou University, Fuzhou, China
- Zheng Liu: Faculty of Applied Science, School of Engineering, University of British Columbia, Kelowna, BC, Canada
- Liqin Huang: College of Physics and Information Engineering, Fuzhou University, Fuzhou, China
- Mingjing Yang: College of Physics and Information Engineering, Fuzhou University, Fuzhou, China
- Bin Zheng: Key Laboratory of Cardio-Thoracic Surgery, Fujian Medical University, Fuzhou, China
- Taidui Zeng: Key Laboratory of Cardio-Thoracic Surgery, Fujian Medical University, Fuzhou, China
- Shaohua Zheng: College of Physics and Information Engineering, Fuzhou University, Fuzhou, China

13
Cao Y, Zhou W, Zang M, An D, Feng Y, Yu B. MBANet: A 3D convolutional neural network with multi-branch attention for brain tumor segmentation from MRI images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104296]
14
Liu X, Du K, Lin S, Wang Y. Deep learning on lateral flow immunoassay for the analysis of detection data. Front Comput Neurosci 2023; 17:1091180. [PMID: 36777694] [PMCID: PMC9909280] [DOI: 10.3389/fncom.2023.1091180]
Abstract
Lateral flow immunoassay (LFIA) is an important detection method in in vitro diagnostics that has been widely used in the medical industry. Because of the complexity of LFIA, it is difficult to analyze all peak shapes with classical methods, which are generally peak-finding methods: they cannot distinguish normal peaks from interference or noise peaks, and they struggle to find weak peaks. Here, a novel method based on deep learning is proposed that effectively solves these problems. The method has two steps: the first classifies the data with a classification model and screens out double-peak data, and the second segments the integral regions with an improved U-Net segmentation model. After training, the accuracy of the classification model on the validation set was 99.59%, and, using a combined loss function (WBCE + DSC), the intersection over union (IoU) of the segmentation model on the validation set was 0.9680. This method was deployed in a hand-held fluorescence immunochromatography analyzer designed independently by our team. A ferritin standard curve was created, and the T/C value correlated well with standard concentrations in the range of 0-500 ng/ml (R² = 0.9986). The coefficients of variation (CVs) were ≤1.37%, and the recovery rate ranged from 96.37% to 105.07%. Interference and noise peaks are the biggest obstacle in the use of hand-held instruments and often lead to peak-finding errors; because hand-held devices are used in varied and flexible environments, it is not convenient to provide technical support. This method greatly reduced the peak-finding failure rate, which can reduce customers' need for instrument technical support. This study provides a new direction for the data processing of point-of-care testing (POCT) instruments based on LFIA.
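The T/C value mentioned above is the ratio of the signal in the test-line region to that in the control-line region; once the segmentation model marks the two integral regions, the computation reduces to two sums. A synthetic sketch (the strip profile and region boundaries are invented for illustration):

```python
import numpy as np

def tc_ratio(signal, test_region, control_region):
    """Ratio of integrated signal under the test and control peaks of an LFIA strip."""
    t = signal[test_region].sum()      # area under the test line
    c = signal[control_region].sum()   # area under the control line
    return t / c

x = np.arange(100)
# synthetic strip profile: two Gaussian peaks, control twice the test amplitude
signal = (np.exp(-((x - 30) ** 2) / 20) +        # test line
          2.0 * np.exp(-((x - 70) ** 2) / 20))   # control line
ratio = tc_ratio(signal, slice(20, 40), slice(60, 80))  # ≈ 0.5
```

In the pipeline above, the slices would come from the U-Net's segmented integral regions rather than fixed positions, which is what makes the approach robust to interference peaks.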
Affiliation(s)
- Xinquan Liu
- School of Precision Instrument and Optoelectronics Engineering, Tianjin University, Tianjin, China
| | - Kang Du
- Tianjin Boomscience Technology Co., Ltd., Tianjin, China
| | - Si Lin
- School of Precision Instrument and Optoelectronics Engineering, Tianjin University, Tianjin, China; Beijing Savant Biotechnology Co., Ltd., Beijing, China
| | - Yan Wang
- School of Precision Instrument and Optoelectronics Engineering, Tianjin University, Tianjin, China
15
Peng Y, Yu D, Guo Y. MShNet: Multi-scale feature combined with h-network for medical image segmentation. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104167] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]
16
Shao J, Zhou K, Cai YH, Geng DY. Application of an Improved U2-Net Model in Ultrasound Median Neural Image Segmentation. ULTRASOUND IN MEDICINE & BIOLOGY 2022; 48:2512-2520. [PMID: 36167742 DOI: 10.1016/j.ultrasmedbio.2022.08.003] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/17/2022] [Revised: 08/02/2022] [Accepted: 08/03/2022] [Indexed: 06/16/2023]
Abstract
To investigate whether an improved U2-Net model could be used to segment the median nerve and improve segmentation performance, we performed a retrospective study with 402 nerve images from patients who visited Huashan Hospital from October 2018 to July 2020; 249 images were from patients with carpal tunnel syndrome, and 153 were from healthy volunteers. Of these, 320 cases were selected as the training set and 82 as the test set. The improved U2-Net model was used to segment each image. The Dice coefficient (Dice), pixel accuracy (PA), mean intersection over union (MIoU), and average Hausdorff distance (AVD) were used to evaluate segmentation performance. The Dice, MIoU, PA, and AVD values of our improved U2-Net were 72.85%, 79.66%, 95.92%, and 51.37 mm, respectively, close to the ground truth provided by clinicians' labeling. By comparison, the corresponding values were 43.19%, 65.57%, 86.22%, and 74.82 mm for U-Net and 58.65%, 72.53%, 88.98%, and 57.30 mm for Res-U-Net. Overall, our data suggest that our improved U2-Net model might be used for the segmentation of ultrasound median nerve images.
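The evaluation metrics used here (Dice, pixel accuracy, mean IoU) are standard and can be computed from confusion-matrix counts; the following is a minimal binary-mask sketch, not the authors' evaluation code.

```python
def seg_metrics(pred, target):
    """Dice, pixel accuracy, and mean IoU for flat 0/1 prediction and label lists."""
    tp = sum(1 for p, t in zip(pred, target) if p == 1 and t == 1)
    tn = sum(1 for p, t in zip(pred, target) if p == 0 and t == 0)
    fp = sum(1 for p, t in zip(pred, target) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, target) if p == 0 and t == 1)
    dice = 2 * tp / (2 * tp + fp + fn)        # overlap of foreground regions
    pa = (tp + tn) / len(pred)                # fraction of correctly labeled pixels
    iou_fg = tp / (tp + fp + fn)              # foreground IoU
    iou_bg = tn / (tn + fp + fn)              # background IoU
    miou = (iou_fg + iou_bg) / 2              # mean over the two classes
    return dice, pa, miou
```

Note that PA can look high even when Dice is modest, because the background class dominates pixel counts; this is consistent with the spread of values reported above.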
Affiliation(s)
- Jie Shao
- Department of Ultrasound, Huashan Hospital, Fudan University, Shanghai, China
| | - Kun Zhou
- Academy for Engineering and Technology, Fudan University, Shanghai, China
| | - Ye-Hua Cai
- Department of Ultrasound, Huashan Hospital, Fudan University, Shanghai, China
| | - Dao-Ying Geng
- Department of Radiology, Huashan Hospital, Fudan University, Shanghai, China; Greater Bay Area Institute of Precision Medicine (Guangzhou), Fudan University, Guangzhou, China.
17
Zhang Z, Jiang Y, Qiao H, Wang M, Yan W, Chen J. SIL-Net: A Semi-Isotropic L-shaped network for dermoscopic image segmentation. Comput Biol Med 2022; 150:106146. [PMID: 36228460 DOI: 10.1016/j.compbiomed.2022.106146] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2022] [Revised: 09/13/2022] [Accepted: 09/24/2022] [Indexed: 11/28/2022]
Abstract
BACKGROUND Dermoscopic image segmentation using deep learning algorithms is a critical technology for skin cancer detection and therapy. The task is spatially equivariant and relies heavily on Convolutional Neural Networks (CNNs), which lose effective features during cascaded down-sampling and up-sampling. Recently, vision isotropic architectures have emerged that eliminate the cascade procedures of CNNs and demonstrate superior performance; nevertheless, they cannot be used for segmentation directly. Based on these observations, this research explores an efficient architecture that preserves the advantages of the isotropic design while remaining suitable for clinical dermoscopic diagnosis. METHODS In this work, we introduce a novel Semi-Isotropic L-shaped network (SIL-Net) for dermoscopic image segmentation. First, we propose a Patch Embedding Weak Correlation (PEWC) module to address the lack of interaction between adjacent patches during standard patch embedding. Second, a plug-and-play, zero-parameter Residual Spatial Mirror Information (RSMI) path is proposed to supplement effective features during up-sampling and optimize lesion boundaries. Third, to further reconstruct deep features and refine lesion regions, a Depthwise Separable Transpose Convolution (DSTC)-based up-sampling module is designed. RESULTS The proposed architecture obtains state-of-the-art performance on the dermoscopy benchmark datasets ISIC-2017, ISIC-2018, and PH2, with Dice coefficients (DICE) of 89.63%, 93.47%, and 95.11% and Mean Intersection over Union (MIoU) values of 82.02%, 88.21%, and 90.81%, respectively. Furthermore, the robustness and generalizability of our method have been demonstrated through additional experiments on standard intestinal polyp datasets (CVC-ClinicDB and Kvasir-SEG).
CONCLUSION Our findings demonstrate that SIL-Net not only has great potential for precise segmentation of the lesion region but also exhibits stronger generalizability and robustness, indicating that it meets the requirements for clinical diagnosis. Notably, our method shows state-of-the-art performance on all five datasets, which highlights the effectiveness of the semi-isotropic design mechanism.
Affiliation(s)
- Zequn Zhang
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China.
| | - Yun Jiang
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China.
| | - Hao Qiao
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China.
| | - Meiqi Wang
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China.
| | - Wei Yan
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China.
| | - Jie Chen
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China.
18
Using AAEHS-Net as an Attention-Based Auxiliary Extraction and Hybrid Subsampled Network for Semantic Segmentation. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:1536976. [PMID: 36275973 PMCID: PMC9586756 DOI: 10.1155/2022/1536976] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/02/2022] [Accepted: 10/03/2022] [Indexed: 11/17/2022]
Abstract
Semantic segmentation based on deep learning has undergone remarkable advancements in recent years. However, because shallow features are often neglected, inaccurate segmentation persists. To address this issue, an attention-based auxiliary extraction and hybrid subsampled network (AAEHS-Net) for semantic segmentation is proposed in this study. The network uses a complementary and enhanced extraction module (CEEM) to extract both deeper information and shallow features, improving the model's edge segmentation. Moreover, a hybrid subsampled module (HSM) is introduced to reduce the loss of features. A global max pooling and global average pooling module (GAGM) is designed as an attention module to enhance features with global and important information and maintain feature continuity. The proposed AAEHS-Net is evaluated on three datasets: an aerial drone image dataset, the Massachusetts roads dataset, and the Massachusetts buildings dataset. On these datasets, AAEHS-Net achieves 1.15%, 0.88%, and 2.1% higher accuracy than U-Net, reaching 90.12%, 96.23%, and 95.15%, respectively. At the same time, the proposed network obtains the best values for all evaluation metrics on the three datasets compared with currently popular algorithms.
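An attention gate built from global max and global average pooling, as the GAGM description suggests, can be illustrated minimally as follows. The real module presumably includes learned weights; this parameter-free sigmoid gate is only a hedged sketch of the pooling-then-gating idea.

```python
import math

def channel_attention(feat):
    """feat: list of channels, each a flat list of activations.
    Each channel is rescaled by a gate derived from its global average and
    max pooling (sigmoid of their sum) -- a rough, parameter-free sketch."""
    out = []
    for ch in feat:
        avg = sum(ch) / len(ch)            # global average pooling
        mx = max(ch)                       # global max pooling
        gate = 1 / (1 + math.exp(-(avg + mx)))  # sigmoid gate in (0, 1)
        out.append([v * gate for v in ch])
    return out
```

Channels with strong global responses get gates near 1 and pass through largely unchanged, while weakly activated channels are attenuated.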
19
Research on CT Lung Segmentation Method of Preschool Children based on Traditional Image Processing and ResUnet. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2022; 2022:7321330. [PMID: 36262868 PMCID: PMC9576440 DOI: 10.1155/2022/7321330] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/11/2022] [Revised: 09/13/2022] [Accepted: 09/21/2022] [Indexed: 11/22/2022]
Abstract
Lung segmentation from computed tomography (CT) images is important for diagnosing various lung diseases. Currently, no lung segmentation method has been developed for the CT images of preschool children, which may differ from those of adults because of (1) artifacts caused by the children's movement, (2) loss of a localized lung area due to a failure to hold their breath, and (3) a smaller chest area on CT compared with adults. To solve these problems, this study developed an automatic lung segmentation method combining traditional image processing with ResUnet, using the CT images of 60 children aged 0-6 years. First, the CT images were cropped and zoomed through morphological operations to concentrate the segmentation task on the chest area. Then, a ResUnet model with an improved loss was used for lung segmentation, and case-based connected-domain operations were performed to filter the segmentation results and improve segmentation accuracy. The proposed method demonstrated promising results on a test set of 12 cases, with average accuracy, Dice, precision, and recall of 0.9479, 0.9678, 0.9711, and 0.9715, respectively, the best performance among the seven models compared. This study shows that the proposed method can achieve good segmentation results on the CT images of preschool children, laying a solid foundation for the diagnosis of children's lung diseases.
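The connected-domain filtering step described here can be approximated by keeping only the largest connected foreground component of the predicted mask. The paper's case-based rules are not specified, so this pure-Python, 4-connectivity version is an assumption about the general technique.

```python
from collections import deque

def largest_component(mask):
    """Keep only the largest 4-connected foreground component of a binary 2D mask,
    a common post-processing step to suppress spurious segmentation islands."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    best = []
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                comp, q = [], deque([(i, j)])   # BFS over one component
                seen[i][j] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    out = [[0] * w for _ in range(h)]
    for y, x in best:
        out[y][x] = 1
    return out
```

For lungs one would typically keep the two largest components (left and right lung), but the single-component version shows the mechanism.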
20
A segmentation-based sequence residual attention model for KRAS gene mutation status prediction in colorectal cancer. APPL INTELL 2022. [DOI: 10.1007/s10489-022-04011-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
21
Ghaznavi A, Rychtáriková R, Saberioon M, Štys D. Cell segmentation from telecentric bright-field transmitted light microscopy images using a Residual Attention U-Net: A case study on HeLa line. Comput Biol Med 2022; 147:105805. [PMID: 35809410 DOI: 10.1016/j.compbiomed.2022.105805] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2022] [Revised: 06/03/2022] [Accepted: 06/26/2022] [Indexed: 11/20/2022]
Abstract
Living cell segmentation from bright-field light microscopy images is challenging due to image complexity and temporal changes in the living cells. Recently developed deep learning (DL)-based methods have become popular in medical and microscopy image segmentation tasks owing to their success and promising outcomes. The main objective of this paper is to develop a deep learning, U-Net-based method to segment living cells of the HeLa line in bright-field transmitted light microscopy. To find the most suitable architecture for our datasets, a residual attention U-Net was proposed and compared with an attention U-Net and a simple U-Net architecture. The attention mechanism highlights salient features and suppresses activations in irrelevant image regions; the residual mechanism overcomes the vanishing-gradient problem. The Mean-IoU score for our datasets reaches 0.9505, 0.9524, and 0.9530 for the simple, attention, and residual attention U-Net, respectively. The most accurate semantic segmentation results in the Mean-IoU and Dice metrics were achieved by applying the residual and attention mechanisms together. Applying the watershed method to this best (residual attention) semantic segmentation result yielded an instance segmentation with specific information for each cell.
Affiliation(s)
- Ali Ghaznavi
- Faculty of Fisheries and Protection of Waters, South Bohemian Research Center of Aquaculture and Biodiversity of Hydrocenoses, Institute of Complex Systems, University of South Bohemia in České Budějovice, Zámek 136, 373 33, Nové Hrady, Czech Republic.
| | - Renata Rychtáriková
- Faculty of Fisheries and Protection of Waters, South Bohemian Research Center of Aquaculture and Biodiversity of Hydrocenoses, Institute of Complex Systems, University of South Bohemia in České Budějovice, Zámek 136, 373 33, Nové Hrady, Czech Republic.
| | - Mohammadmehdi Saberioon
- Helmholtz Centre Potsdam GFZ German Research Centre for Geosciences, Section 1.4 Remote Sensing and Geoinformatics, Telegrafenberg, Potsdam 14473, Germany.
| | - Dalibor Štys
- Faculty of Fisheries and Protection of Waters, South Bohemian Research Center of Aquaculture and Biodiversity of Hydrocenoses, Institute of Complex Systems, University of South Bohemia in České Budějovice, Zámek 136, 373 33, Nové Hrady, Czech Republic.
22
FGAM: A pluggable light-weight attention module for medical image segmentation. Comput Biol Med 2022; 146:105628. [DOI: 10.1016/j.compbiomed.2022.105628] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2021] [Revised: 04/08/2022] [Accepted: 04/15/2022] [Indexed: 11/22/2022]
23
Shu X, Gu Y, Zhang X, Hu C, Cheng K. FCRB U-Net: A novel fully connected residual block U-Net for fetal cerebellum ultrasound image segmentation. Comput Biol Med 2022; 148:105693. [PMID: 35717404 DOI: 10.1016/j.compbiomed.2022.105693] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2022] [Revised: 05/15/2022] [Accepted: 05/31/2022] [Indexed: 11/29/2022]
Abstract
In this paper, we propose a novel U-Net with fully connected residual blocks (FCRB U-Net) for the fetal cerebellum ultrasound image segmentation task. FCRB U-Net, an improved convolutional neural network (CNN) based on U-Net, replaces the double convolution operation of the original model with fully connected residual blocks and embeds an effective channel attention module to enhance the extraction of valid features. Moreover, in the decoding stage, a feature reuse module is employed to form a fully connected decoder that makes full use of deep features. FCRB U-Net effectively alleviates the loss of feature information during the convolution process and improves segmentation accuracy. Experimental results demonstrate that the proposed approach is effective and promising for fetal cerebellar segmentation in actual ultrasound images. The average IoU value and mean Dice index reach 86.72% and 90.45%, respectively, which are 3.07% and 5.25% higher than those of the basic U-Net.
Affiliation(s)
- Xin Shu
- School of Computer Science, Jiangsu University of Science and Technology, Zhenjiang, 212100, China.
| | - Yingyan Gu
- School of Computer Science, Jiangsu University of Science and Technology, Zhenjiang, 212100, China
| | - Xin Zhang
- Department of Medical Ultrasound, Affiliated Hospital of Jiangsu University, Zhenjiang, 212003, China.
| | - Chunlong Hu
- School of Computer Science, Jiangsu University of Science and Technology, Zhenjiang, 212100, China
| | - Ke Cheng
- School of Computer Science, Jiangsu University of Science and Technology, Zhenjiang, 212100, China
24
Zhou T, Dong Y, Lu H, Zheng X, Qiu S, Hou S. APU-Net: An Attention Mechanism Parallel U-Net for Lung Tumor Segmentation. BIOMED RESEARCH INTERNATIONAL 2022; 2022:5303651. [PMID: 35586818 PMCID: PMC9110197 DOI: 10.1155/2022/5303651] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/17/2022] [Accepted: 04/09/2022] [Indexed: 11/30/2022]
Abstract
Lung cancer is one of the malignant tumors with high morbidity and mortality, and lung nodules are an early stage of lung cancer. Pulmonary nodules often present no obvious clinical symptoms, so missed diagnoses can cause the optimal treatment window to be lost. A parallel U-Net network called APU-Net is proposed. First, two parallel U-Net networks are used to extract the features of different modalities: the subnetwork UNet_B extracts CT image features, and the subnetwork UNet_A consists of two encoders that extract PET/CT and PET image features. Second, multimodal feature extraction blocks are used to extract features from the PET/CT and PET images in the UNet_A network. Third, a hybrid attention mechanism is added to the encoding paths of UNet_A and UNet_B. Finally, a multiscale feature aggregation block is used to extract feature maps of different scales in the decoding path. On a lung tumor 18F-FDG PET/CT multimodal medical image dataset, experimental results show that the DSC, Recall, VOE, and RVD coefficients of APU-Net are 96.86%, 97.53%, 3.18%, and 3.29%, respectively. APU-Net improves segmentation accuracy where complexly shaped lesions adhere to normal tissue, which has positive significance for computer-aided diagnosis.
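The VOE and RVD coefficients reported above are standard volumetric segmentation metrics (volumetric overlap error, i.e. one minus IoU, and relative volume difference). A minimal sketch for binary masks, illustrative rather than the authors' evaluation code:

```python
def voe_rvd(pred, target):
    """Volumetric Overlap Error and Relative Volume Difference for flat 0/1 masks.
    VOE = 1 - |P ∩ T| / |P ∪ T|;  RVD = (|P| - |T|) / |T|."""
    inter = sum(p and t for p, t in zip(pred, target))
    union = sum(p or t for p, t in zip(pred, target))
    voe = 1 - inter / union
    rvd = (sum(pred) - sum(target)) / sum(target)
    return voe, rvd
```

VOE is an error (lower is better), and RVD is signed: positive when the prediction over-segments the target volume, negative when it under-segments.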
Affiliation(s)
- Tao Zhou
- School of Computer Science and Engineering, North Minzu University, Yinchuan, Ningxia 750021, China
- The Key Laboratory of Images and Graphics Intelligent Processing of State Ethnic Affairs Commission, North Minzu University, Yinchuan 750021, China
| | - YaLi Dong
- School of Computer Science and Engineering, North Minzu University, Yinchuan, Ningxia 750021, China
- The Key Laboratory of Images and Graphics Intelligent Processing of State Ethnic Affairs Commission, North Minzu University, Yinchuan 750021, China
| | - HuiLing Lu
- School of Science, Ningxia Medical University, Yinchuan, Ningxia 750004, China
| | - XiaoMin Zheng
- Research Institute for Reproductive Medicine and Genetic Diseases, Wuxi Maternity and Child Health Hospital, Jiangsu Wuxi, 214002, China
| | - Shi Qiu
- Key Laboratory of Spectral Imaging Technology CAS, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an, Shanxi 710119, China
| | - SenBao Hou
- School of Computer Science and Engineering, North Minzu University, Yinchuan, Ningxia 750021, China
- The Key Laboratory of Images and Graphics Intelligent Processing of State Ethnic Affairs Commission, North Minzu University, Yinchuan 750021, China
25
Yun Z, Xu Q, Wang G, Jin S, Lin G, Feng Q, Yuan J. EVA: Fully automatic hemodynamics assessment system for the bulbar conjunctival microvascular network. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 216:106631. [PMID: 35123347 DOI: 10.1016/j.cmpb.2022.106631] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/25/2021] [Revised: 01/07/2022] [Accepted: 01/09/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND AND OBJECTIVE Conjunctival microcirculation has been used to quantitatively assess microvascular changes due to systemic disorders. The space between red blood cell clusters in conjunctival microvessels is essential for assessing hemodynamics, but it causes discontinuities in vessel segmentation and makes automatic blood velocity measurement difficult. In this study, we developed an EVA system based on deep learning to maintain vessel segmentation continuity and automatically measure blood velocity. METHODS The EVA system sequentially performs image registration, vessel segmentation, diameter measurement, and blood velocity measurement on conjunctival images. A U-Net model optimized with a connectivity-preserving loss function was used to address discontinuities in vessel segmentation, and an automatic measurement algorithm based on line segment detection was proposed to obtain accurate blood velocity. Finally, the EVA system assessed hemodynamic parameters based on the measured blood velocity in each vessel segment. RESULTS The EVA system was validated on 23 videos of conjunctival microcirculation captured with functional slit-lamp microscopy. The U-Net model produced the longest average vessel segment length, 158.03 ± 181.87 µm, compared with 120.05 ± 151.47 µm for the adaptive threshold method and 99.94 ± 138.12 µm for Frangi filtering. The proposed method and a cross-correlation-based method were validated for blood velocity measurement on a dataset of 30 vessel segments. Bland-Altman analysis showed that the results of the proposed method agreed more closely with a manual measurement-based gold standard (bias: -0.04, SD: 0.14) than did the cross-correlation method (bias: 0.36, SD: 0.32).
CONCLUSIONS The proposed EVA system provides an automatic and reliable solution for quantitative assessment of hemodynamics in conjunctival microvascular images, and potentially can be applied to hypoglossal microcirculation images.
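The Bland-Altman bias and SD quoted in the results summarize agreement between two measurement methods as the mean and spread of their paired differences. A minimal computation over paired velocity measurements (the input values below are purely illustrative):

```python
import math

def bland_altman(method_a, method_b):
    """Bias (mean difference), SD of differences, and 95% limits of agreement
    between two measurement methods applied to the same samples."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    n = len(diffs)
    bias = sum(diffs) / n                                  # systematic offset
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))  # sample SD
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)             # 95% limits of agreement
    return bias, sd, loa
```

A bias near zero with a small SD, as reported for the proposed method against the manual gold standard, indicates close agreement.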
Affiliation(s)
- Zhaoqiang Yun
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, China
| | - Qing Xu
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China
| | - Gengyuan Wang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Shuang Jin
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China
| | - Guoye Lin
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, China
| | - Qianjin Feng
- School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, China.
| | - Jin Yuan
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China.
26
Silva F, Pereira T, Neves I, Morgado J, Freitas C, Malafaia M, Sousa J, Fonseca J, Negrão E, Flor de Lima B, Correia da Silva M, Madureira AJ, Ramos I, Costa JL, Hespanhol V, Cunha A, Oliveira HP. Towards Machine Learning-Aided Lung Cancer Clinical Routines: Approaches and Open Challenges. J Pers Med 2022; 12:480. [PMID: 35330479 PMCID: PMC8950137 DOI: 10.3390/jpm12030480] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2022] [Revised: 02/28/2022] [Accepted: 03/10/2022] [Indexed: 12/15/2022] Open
Abstract
Advancements in the development of computer-aided decision (CAD) systems for clinical routines provide unquestionable benefits in connecting human medical expertise with machine intelligence to achieve better-quality healthcare. Considering the high incidence and mortality associated with lung cancer, the most accurate clinical procedures are needed; thus, the possibility of using artificial intelligence (AI) tools for decision support is becoming a closer reality. At every stage of the lung cancer clinical pathway, specific obstacles are identified that motivate the application of innovative AI solutions. This work provides a comprehensive review of the most recent research dedicated to the development of CAD tools using computed tomography images for lung cancer-related tasks. We discuss the major challenges and provide critical perspectives on future directions. Although we focus on lung cancer in this review, we also give a clearer definition of the path used to integrate AI into healthcare, emphasizing fundamental research points that are crucial for overcoming current barriers.
Affiliation(s)
- Francisco Silva
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal; (I.N.); (J.M.); (M.M.); (J.S.); (J.F.); (A.C.); (H.P.O.)
- FCUP—Faculty of Science, University of Porto, 4169-007 Porto, Portugal
| | - Tania Pereira
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal; (I.N.); (J.M.); (M.M.); (J.S.); (J.F.); (A.C.); (H.P.O.)
| | - Inês Neves
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal; (I.N.); (J.M.); (M.M.); (J.S.); (J.F.); (A.C.); (H.P.O.)
- ICBAS—Abel Salazar Biomedical Sciences Institute, University of Porto, 4050-313 Porto, Portugal
| | - Joana Morgado
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal; (I.N.); (J.M.); (M.M.); (J.S.); (J.F.); (A.C.); (H.P.O.)
| | - Cláudia Freitas
- CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal; (C.F.); (E.N.); (B.F.d.L.); (M.C.d.S.); (A.J.M.); (I.R.); (V.H.)
- FMUP—Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal;
| | - Mafalda Malafaia
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal; (I.N.); (J.M.); (M.M.); (J.S.); (J.F.); (A.C.); (H.P.O.)
- FEUP—Faculty of Engineering, University of Porto, 4200-465 Porto, Portugal
| | - Joana Sousa
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal; (I.N.); (J.M.); (M.M.); (J.S.); (J.F.); (A.C.); (H.P.O.)
| | - João Fonseca
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal; (I.N.); (J.M.); (M.M.); (J.S.); (J.F.); (A.C.); (H.P.O.)
- FEUP—Faculty of Engineering, University of Porto, 4200-465 Porto, Portugal
| | - Eduardo Negrão
- CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal; (C.F.); (E.N.); (B.F.d.L.); (M.C.d.S.); (A.J.M.); (I.R.); (V.H.)
| | - Beatriz Flor de Lima
- CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal; (C.F.); (E.N.); (B.F.d.L.); (M.C.d.S.); (A.J.M.); (I.R.); (V.H.)
| | - Miguel Correia da Silva
- CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal; (C.F.); (E.N.); (B.F.d.L.); (M.C.d.S.); (A.J.M.); (I.R.); (V.H.)
| | - António J. Madureira
- CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal; (C.F.); (E.N.); (B.F.d.L.); (M.C.d.S.); (A.J.M.); (I.R.); (V.H.)
- FMUP—Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal;
| | - Isabel Ramos
- CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal; (C.F.); (E.N.); (B.F.d.L.); (M.C.d.S.); (A.J.M.); (I.R.); (V.H.)
- FMUP—Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal;
| | - José Luis Costa
- FMUP—Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal;
- i3S—Instituto de Investigação e Inovação em Saúde, Universidade do Porto, 4200-135 Porto, Portugal
- IPATIMUP—Institute of Molecular Pathology and Immunology of the University of Porto, 4200-135 Porto, Portugal
| | - Venceslau Hespanhol
- CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal; (C.F.); (E.N.); (B.F.d.L.); (M.C.d.S.); (A.J.M.); (I.R.); (V.H.)
- FMUP—Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal;
| | - António Cunha
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal; (I.N.); (J.M.); (M.M.); (J.S.); (J.F.); (A.C.); (H.P.O.)
- UTAD—University of Trás-os-Montes and Alto Douro, 5001-801 Vila Real, Portugal
| | - Hélder P. Oliveira
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal; (I.N.); (J.M.); (M.M.); (J.S.); (J.F.); (A.C.); (H.P.O.)
- FCUP—Faculty of Science, University of Porto, 4169-007 Porto, Portugal
27
Zhu L, He Q, Huang Y, Zhang Z, Zeng J, Lu L, Kong W, Zhou F. DualMMP-GAN: Dual-scale multi-modality perceptual generative adversarial network for medical image segmentation. Comput Biol Med 2022; 144:105387. [PMID: 35305502 DOI: 10.1016/j.compbiomed.2022.105387] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2021] [Revised: 03/04/2022] [Accepted: 03/04/2022] [Indexed: 01/22/2023]
Abstract
Multi-modality magnetic resonance imaging (MRI) can reveal distinct tissue patterns in the human body and is crucial to clinical diagnosis, but obtaining diverse and plausible multi-modality MR images remains a challenge because of expense, noise, and artifacts. For the same lesion, different MRI modalities differ greatly in context information, coarse location, and fine structure. To achieve better generation and segmentation performance, a dual-scale multi-modality perceptual generative adversarial network (DualMMP-GAN) is proposed based on cycle-consistent generative adversarial networks (CycleGAN). Dilated residual blocks are introduced to increase the receptive field while preserving the structure and context information of images. A dual-scale discriminator is constructed, and the generator is optimized by discriminating patches to represent lesions of different sizes. A perceptual consistency loss is introduced to learn the mapping between the generated and target modalities at different semantic levels. Moreover, generative multi-modality segmentation (GMMS), which combines given modalities with generated modalities, is proposed for brain tumor segmentation. Experimental results show that DualMMP-GAN outperforms CycleGAN and some state-of-the-art methods in terms of PSNR, SSIM, and RMSE in most tasks. In addition, the Dice, sensitivity, specificity, and Hausdorff95 values obtained from segmentation by GMMS are all better than those from a single modality. The objective indices obtained by the proposed methods are close to the upper bounds obtained from real multiple modalities, indicating that GMMS can achieve effects similar to true multi-modality input. Overall, the proposed methods can serve as an effective method in clinical brain tumor diagnosis with promising application potential.
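PSNR and RMSE, two of the image-synthesis metrics used above, can be computed directly from paired intensity values. A minimal sketch assuming 8-bit intensities (peak = 255), not the authors' evaluation code:

```python
import math

def rmse(a, b):
    """Root-mean-square error between two same-length intensity lists."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means the images are closer.
    Identical images give infinite PSNR (zero error)."""
    e = rmse(a, b)
    return float('inf') if e == 0 else 20 * math.log10(peak / e)
```

SSIM, the third metric, additionally compares local luminance, contrast, and structure windows, so it is not reducible to a single pixelwise error term like these two.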
Affiliation(s)
- Li Zhu
- School of Information Engineering, Nanchang University, Nanchang, 330031, China.
- Qiong He
- School of Information Engineering, Nanchang University, Nanchang, 330031, China.
- Yue Huang
- School of Informatics, Xiamen University, Xiamen, 361005, China.
- Zihe Zhang
- School of Information Engineering, Nanchang University, Nanchang, 330031, China.
- Jiaming Zeng
- School of Information Engineering, Nanchang University, Nanchang, 330031, China.
- Ling Lu
- School of Information Engineering, Nanchang University, Nanchang, 330031, China.
- Weiming Kong
- Hospital of the Joint Logistics Support Force of the Chinese People's Liberation Army, No.908, Nanchang, 330002, China.
- Fuqing Zhou
- Department of Radiology, The First Affiliated Hospital, Nanchang University, Nanchang, 330006, China.
28
Lu F, Fu C, Zhang G, Shi J. Adaptive multi-scale feature fusion based U-net for fracture segmentation in coal rock images. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2022. [DOI: 10.3233/jifs-211968] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Accurate segmentation of fractures in coal rock CT images is important for the development of coalbed methane. However, the task remains challenging because fracture scale varies widely and the gray values of weak fractures are similar to those of the surrounding matrix; the absence of a published coal rock dataset makes it harder still. In this paper, a novel adaptive multi-scale feature fusion method based on U-net (AMSFF-U-net) is proposed for fracture segmentation in coal rock CT images. Specifically, both the encoder and decoder paths consist of residual blocks (ReBlock). An attention skip concatenation (ASC) module is proposed to capture more representative and discriminative features by combining the high-level and low-level features of adjacent layers. An adaptive multi-scale feature fusion (AMSFF) module is presented to adaptively fuse feature maps of different scales from the encoder path, effectively capturing rich multi-scale features. To compensate for the lack of coal rock fracture training data, a set of comprehensive data augmentation operations was applied to increase the diversity of training samples. Extensive experiments compare seven state-of-the-art methods (FCEM, U-net, Res-Unet, Unet++, MSN-Net, WRAU-Net, and ours). The results demonstrate that the proposed AMSFF-U-net achieves better segmentation performance, particularly for weak and tiny-scale fractures.
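The core idea of adaptive multi-scale fusion is to combine feature maps from different encoder scales with learned weights rather than fixed ones. A minimal sketch of one common form of this idea, assuming scalar per-scale weights normalized by a softmax (the abstract does not specify the exact fusion rule, so this is illustrative only):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def adaptive_fusion(feature_maps, logits):
    """Fuse same-shaped feature maps with softmax-normalized scalar weights.

    feature_maps: arrays of identical shape (already resized to a common scale).
    logits: one learnable scalar per scale; in a real network these would be
    trained end to end with the rest of the model.
    """
    w = softmax(np.asarray(logits, dtype=np.float64))
    return sum(wi * f for wi, f in zip(w, feature_maps))

# Two scales with equal logits contribute equally: mean of 1s and 3s is 2s.
f1 = np.ones((2, 2))
f2 = 3 * np.ones((2, 2))
fused = adaptive_fusion([f1, f2], [0.0, 0.0])
print(fused)  # every entry is 2.0
```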
Affiliation(s)
- Fengli Lu
- School of Mechanical Electronic and Information Engineering, China University of Mining and Technology, Beijing, China
- Chengcai Fu
- School of Mechanical Electronic and Information Engineering, China University of Mining and Technology, Beijing, China
- Guoying Zhang
- School of Mechanical Electronic and Information Engineering, China University of Mining and Technology, Beijing, China
- Jie Shi
- School of Mechanical Electronic and Information Engineering, China University of Mining and Technology, Beijing, China
29
Li F, Zhou Y, Zhang Y, Yin J, Qiu Y, Gao J, Zhu F. POSREG: proteomic signature discovered by simultaneously optimizing its reproducibility and generalizability. Brief Bioinform 2022; 23:6532538. [PMID: 35183059 DOI: 10.1093/bib/bbac040] [Citation(s) in RCA: 69] [Impact Index Per Article: 34.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2021] [Revised: 01/21/2022] [Accepted: 01/27/2022] [Indexed: 12/17/2022] Open
Abstract
Mass spectrometry-based proteomics has become indispensable in the current exploration of complex and dynamic biological processes. Instrument development has largely ensured the effective production of proteomic data, which necessitates commensurate advances in statistical frameworks to discover the optimal proteomic signature. Current frameworks mainly emphasize the generalizability of the identified signature in predicting independent data but neglect the reproducibility among signatures identified from independently repeated trials on different sub-datasets. These problems seriously restrict the wide application of proteomic techniques in molecular biology and related fields. It is therefore crucial to enable generalizable and reproducible discovery of proteomic signatures, with subsequent indication of phenotype association; however, no such tool has been available. Herein, an online tool, POSREG, was constructed to identify the optimal signature for a set of proteomic data. It works by (i) identifying proteomic signatures of good reproducibility and aggregating them into an ensemble feature ranking by ensemble learning, (ii) assessing the generalizability of the ensemble feature ranking to acquire the optimal signature and (iii) indicating the phenotype association of the discovered signature. POSREG is unique in its capacity to discover the proteomic signature by simultaneously optimizing its reproducibility and generalizability. It is accessible free of charge, without registration or login, at https://idrblab.org/posreg/.
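Step (i) aggregates per-trial feature rankings into one ensemble ranking. The abstract does not specify the aggregation rule, so the sketch below assumes one of the simplest common schemes, mean-rank (Borda-style) aggregation:

```python
import numpy as np

def ensemble_rank(rank_lists):
    """Aggregate per-trial feature rankings into one ensemble ranking.

    rank_lists: each inner list orders feature indices from best to worst
    for one repeated trial on a sub-dataset. Features are scored by their
    mean list position; a lower mean position yields a better ensemble rank.
    """
    n = len(rank_lists[0])
    positions = np.zeros(n)
    for ranking in rank_lists:
        for pos, feat in enumerate(ranking):
            positions[feat] += pos
    positions /= len(rank_lists)
    return list(np.argsort(positions))  # best feature first

# Feature 0 wins two of three trials, so it tops the ensemble ranking.
trials = [[0, 1, 2], [1, 0, 2], [0, 2, 1]]
print(ensemble_rank(trials))  # [0, 1, 2]
```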
Affiliation(s)
- Fengcheng Li
- College of Pharmaceutical Sciences, Zhejiang University, Hangzhou 310058, China
- Ying Zhou
- State Key Laboratory for Diagnosis and Treatment of Infectious Disease, Collaborative Innovation Center for Diagnosis and Treatment of Infectious Diseases, Zhejiang Provincial Key Laboratory for Drug Clinical Research and Evaluation, The First Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang 310000, China
- Ying Zhang
- College of Pharmaceutical Sciences, Zhejiang University, Hangzhou 310058, China
- Jiayi Yin
- College of Pharmaceutical Sciences, Zhejiang University, Hangzhou 310058, China
- Yunqing Qiu
- State Key Laboratory for Diagnosis and Treatment of Infectious Disease, Collaborative Innovation Center for Diagnosis and Treatment of Infectious Diseases, Zhejiang Provincial Key Laboratory for Drug Clinical Research and Evaluation, The First Affiliated Hospital, Zhejiang University, Hangzhou, Zhejiang 310000, China
- Jianqing Gao
- Westlake Laboratory of Life Sciences and Biomedicine, Hangzhou, Zhejiang, China
- Feng Zhu
- College of Pharmaceutical Sciences, Zhejiang University, Hangzhou 310058, China
30
All You Need Is a Few Dots to Label CT Images for Organ Segmentation. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12031328] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
Abstract
Image segmentation is used to analyze medical images quantitatively for diagnosis and treatment planning. Since manual segmentation requires considerable time and effort from experts, research into automatic segmentation is ongoing. Recent studies using deep learning have improved performance but require many labeled data. Although public datasets exist for research, manual labeling is still required wherever labels are missing for training a model. We propose a deep-learning-based tool that can easily create training data to alleviate this inconvenience. The proposed tool receives a CT image and the pixels of the organs the user wants to segment as inputs, and extracts features of the CT image using a deep learning network. Pixels with similar features are then classified into the same organ. The advantage of the proposed tool is that it can be trained with a small number of labeled data. After training with 25 labeled CT images, our tool shows competitive results when compared to state-of-the-art segmentation algorithms such as UNet and DeepNetV3.
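The propagation step — assigning unlabeled pixels to the organ of the most similar annotated dot — can be sketched as a nearest-seed rule in feature space. In the paper the features come from a trained network; here they are tiny toy vectors, and the nearest-seed rule itself is an illustrative assumption about the general approach:

```python
import numpy as np

def classify_pixels(features, seed_idx, seed_labels):
    """Assign each pixel the label of the nearest labeled seed in feature space.

    features: (N, D) per-pixel feature vectors (from a deep network in practice).
    seed_idx: indices of the user's dot annotations.
    seed_labels: the organ label of each dot.
    """
    seeds = features[seed_idx]                                     # (S, D)
    d = np.linalg.norm(features[:, None, :] - seeds[None, :, :], axis=2)  # (N, S)
    return seed_labels[np.argmin(d, axis=1)]

# Four pixels, two dots: pixels cluster around whichever dot they resemble.
feats = np.array([[0.0], [0.1], [0.9], [1.0]])
labels = classify_pixels(feats, np.array([0, 3]), np.array([1, 2]))
print(labels)  # [1 1 2 2]
```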
31
Li Z, Feng N, Pu H, Dong Q, Liu Y, Liu Y, Xu X. Pixel-Level Segmentation of Bladder Tumors on MR Images Using a Random Forest Classifier. Technol Cancer Res Treat 2022; 21:15330338221086395. [PMID: 35296195 PMCID: PMC9123929 DOI: 10.1177/15330338221086395] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
Objectives: Regional bladder wall thickening on noninvasive magnetic resonance (MR) images is an important sign of developing urinary bladder cancer (BCa), and precise segmentation of the tumor mass is an essential step toward noninvasive identification of the pathological stage and grade, which is of critical importance for the clinical management of patients with BCa. Methods: In this paper, we propose a new method based on high-throughput pixel-level features and a random forest (RF) classifier for BCa segmentation. First, regions of interest (ROIs), including tumor and wall ROIs, were used in the training set for feature extraction and segmentation model development. Then, candidate regions containing both the bladder tumor and its neighboring wall tissue in the testing set were segmented. Results: Experimental results were evaluated on a retrospective database containing 56 patients postoperatively confirmed with BCa from the affiliated hospital. The Dice similarity coefficient (DSC) and average symmetric surface distance (ASSD) of the tumor regions were adopted to quantitatively assess overall performance. The mean DSC was 0.906 (95% confidence interval [CI]: 0.852-0.959) and the mean ASSD was 1.190 mm (95% CI: 1.727-2.449), which were better than those of state-of-the-art methods for tumor region separation. Conclusion: The proposed pixel-level BCa segmentation method can achieve good performance for the accurate segmentation of BCa lesions on MR images.
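The Dice similarity coefficient used as the primary overlap metric here is straightforward to compute from two binary masks; a minimal NumPy sketch (generic, not the authors' code):

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks (1.0 = perfect overlap)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

# Two 6-pixel masks sharing 2 positive pixels out of 3 each: DSC = 2*2/(3+3).
p = np.array([[1, 1, 0], [0, 1, 0]])
g = np.array([[1, 0, 0], [0, 1, 1]])
print(dice(p, g))  # 0.666...
```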
Affiliation(s)
- Ziqi Li
- School of Biomedical Engineering, Air Force Medical University, Xi'an, PR China
- Na Feng
- Basic Medical Science Academy, Air Force Medical University, Xi'an, PR China
- Huangsheng Pu
- College of Advanced Interdisciplinary Studies, National University of Defense Technology, Changsha, PR China
- Qi Dong
- School of Biomedical Engineering, Air Force Medical University, Xi'an, PR China
- Yan Liu
- School of Biomedical Engineering, Air Force Medical University, Xi'an, PR China
- Yang Liu
- School of Biomedical Engineering, Air Force Medical University, Xi'an, PR China
- Xiaopan Xu
- School of Biomedical Engineering, Air Force Medical University, Xi'an, PR China
32
Herrmann P, Busana M, Cressoni M, Lotz J, Moerer O, Saager L, Meissner K, Quintel M, Gattinoni L. Using Artificial Intelligence for Automatic Segmentation of CT Lung Images in Acute Respiratory Distress Syndrome. Front Physiol 2021; 12:676118. [PMID: 34594233 PMCID: PMC8476971 DOI: 10.3389/fphys.2021.676118] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2021] [Accepted: 08/17/2021] [Indexed: 01/17/2023] Open
Abstract
Knowledge of gas volume, tissue mass, and recruitability measured by quantitative CT scan analysis (CT-qa) is important when setting mechanical ventilation in acute respiratory distress syndrome (ARDS). Yet manual segmentation of the lung requires a considerable workload. Our goal was to provide an automatic, clinically applicable, and reliable lung segmentation procedure. A convolutional neural network (CNN) was used to train an artificial intelligence (AI) algorithm on 15 healthy subjects (1,302 slices), 100 ARDS patients (12,279 slices), and 20 COVID-19 patients (1,817 slices). Eighty percent of this population was used for training and 20% for testing. AI and manual segmentation were compared at slice level by intersection over union (IoU), and the CT-qa variables were compared by regression and Bland-Altman analysis. AI segmentation of a single patient required 5-10 s vs. 1-2 h for manual segmentation. On the test set, the algorithm showed an IoU across all CT slices of 91.3 ± 10.0, 85.2 ± 13.9, and 84.7 ± 14.0%, and across all lung volumes of 96.3 ± 0.6, 88.9 ± 3.1, and 86.3 ± 6.5% for normal lungs, ARDS, and COVID-19, respectively, with a U-shaped performance profile: better in the middle region of the lung, worse at the apex and base. At patient level, on the test set, the total lung volume measured by AI and manual segmentation had an R2 of 0.99 and a bias of -9.8 ml [CI: +56.0/-75.7 ml]. The recruitability measured with manual and AI segmentation, as change in non-aerated tissue fraction, had a bias of +0.3% [CI: +6.2/-5.5%] and -0.5% [CI: +2.3/-3.3%] expressed as change in well-aerated tissue fraction. The AI-powered lung segmentation provided fast and clinically reliable results and is able to segment the lungs of seriously ill ARDS patients fully automatically.
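The Bland-Altman comparison reported above (bias plus confidence limits) follows the standard method-agreement recipe; a minimal sketch with toy volumes (illustrative data, not the study's):

```python
import numpy as np

def bland_altman(a: np.ndarray, b: np.ndarray):
    """Bland-Altman bias and 95% limits of agreement between two measurement methods."""
    diff = a - b
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)      # half-width of the limits of agreement
    return bias, bias - loa, bias + loa

# Toy paired lung-volume measurements (AI vs. manual), arbitrary units.
ai = np.array([100.0, 98.0, 103.0, 97.0])
manual = np.array([101.0, 99.0, 101.0, 99.0])
bias, lo, hi = bland_altman(ai, manual)
print(round(bias, 2))  # -0.5
```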
Affiliation(s)
- Peter Herrmann
- Department of Anesthesiology, University Medical Center Göttingen, Göttingen, Germany
- Mattia Busana
- Department of Anesthesiology, University Medical Center Göttingen, Göttingen, Germany
- Michele Cressoni
- Joachim Lotz
- Institute for Diagnostic and Interventional Radiology, University Medical Center Göttingen, Göttingen, Germany
- Onnen Moerer
- Department of Anesthesiology, University Medical Center Göttingen, Göttingen, Germany
- Leif Saager
- Department of Anesthesiology, University Medical Center Göttingen, Göttingen, Germany
- Konrad Meissner
- Department of Anesthesiology, University Medical Center Göttingen, Göttingen, Germany
- Michael Quintel
- Department of Anesthesiology, University Medical Center Göttingen, Göttingen, Germany; Department of Anesthesiology, DONAUISAR Klinikum Deggendorf, Deggendorf, Germany
- Luciano Gattinoni
- Department of Anesthesiology, University Medical Center Göttingen, Göttingen, Germany
33
Yeung M, Sala E, Schönlieb CB, Rundo L. Focus U-Net: A novel dual attention-gated CNN for polyp segmentation during colonoscopy. Comput Biol Med 2021; 137:104815. [PMID: 34507156 PMCID: PMC8505797 DOI: 10.1016/j.compbiomed.2021.104815] [Citation(s) in RCA: 42] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2021] [Revised: 08/26/2021] [Accepted: 08/26/2021] [Indexed: 02/07/2023]
Abstract
BACKGROUND Colonoscopy remains the gold-standard screening for colorectal cancer. However, significant miss rates for polyps have been reported, particularly when there are multiple small adenomas. This presents an opportunity to leverage computer-aided systems to support clinicians and reduce the number of polyps missed. METHOD In this work, we introduce the Focus U-Net, a novel dual attention-gated deep neural network, which combines efficient spatial and channel-based attention into a single Focus Gate module to encourage selective learning of polyp features. The Focus U-Net incorporates several further architectural modifications, including the addition of short-range skip connections and deep supervision. Furthermore, we introduce the Hybrid Focal loss, a new compound loss function based on the Focal loss and Focal Tversky loss, designed to handle class-imbalanced image segmentation. For our experiments, we selected five public datasets containing images of polyps obtained during optical colonoscopy: CVC-ClinicDB, Kvasir-SEG, CVC-ColonDB, ETIS-Larib PolypDB and the EndoScene test set. We first perform a series of ablation studies and then evaluate the Focus U-Net on the CVC-ClinicDB and Kvasir-SEG datasets separately, and on a combined dataset of all five public datasets. To evaluate model performance, we use the Dice similarity coefficient (DSC) and Intersection over Union (IoU) metrics. RESULTS Our model achieves state-of-the-art results for both CVC-ClinicDB and Kvasir-SEG, with a mean DSC of 0.941 and 0.910, respectively. When evaluated on a combination of five public polyp datasets, our model similarly achieves state-of-the-art results with a mean DSC of 0.878 and mean IoU of 0.809, a 14% and 15% improvement over the previous state-of-the-art results of 0.768 and 0.702, respectively. CONCLUSIONS This study shows the potential for deep learning to provide fast and accurate polyp segmentation results for use during colonoscopy. The Focus U-Net may be adapted for future use in newer non-invasive colorectal cancer screening approaches and, more broadly, to other biomedical image segmentation tasks similarly involving class imbalance and requiring efficiency.
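The abstract names the two components of the Hybrid Focal loss (Focal loss and Focal Tversky loss) but not how they are combined. The sketch below assumes a simple weighted sum with illustrative hyperparameters; w, alpha, beta, and gamma here are placeholders, not the paper's values:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, eps=1e-7):
    """Binary focal loss: down-weights easy, well-classified pixels."""
    p = np.clip(p, eps, 1 - eps)
    pt = np.where(y == 1, p, 1 - p)
    return float(np.mean(-((1 - pt) ** gamma) * np.log(pt)))

def focal_tversky_loss(p, y, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Focal Tversky loss: region-based, with FN weighted by alpha and FP by beta."""
    tp = np.sum(p * y)
    fn = np.sum((1 - p) * y)
    fp = np.sum(p * (1 - y))
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return float((1 - tversky) ** gamma)

def hybrid_focal_loss(p, y, w=0.5):
    """Weighted sum of the two components (the weighting scheme is an assumption)."""
    return w * focal_loss(p, y) + (1 - w) * focal_tversky_loss(p, y)

# Confident, mostly-correct predictions give a small loss; a perfect one is near zero.
y = np.array([1.0, 1.0, 0.0, 0.0])
p = np.array([0.9, 0.8, 0.2, 0.1])
print(hybrid_focal_loss(p, y))
```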
Affiliation(s)
- Michael Yeung
- Department of Radiology, University of Cambridge, Cambridge, CB2 0QQ, United Kingdom; School of Clinical Medicine, University of Cambridge, Cambridge, CB2 0SP, United Kingdom.
- Evis Sala
- Department of Radiology, University of Cambridge, Cambridge, CB2 0QQ, United Kingdom; Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge, CB2 0RE, United Kingdom.
- Carola-Bibiane Schönlieb
- Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, CB3 0WA, United Kingdom.
- Leonardo Rundo
- Department of Radiology, University of Cambridge, Cambridge, CB2 0QQ, United Kingdom; Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge, CB2 0RE, United Kingdom.
34
Sun Y, Ji Y. AAWS-Net: Anatomy-aware weakly-supervised learning network for breast mass segmentation. PLoS One 2021; 16:e0256830. [PMID: 34460852 PMCID: PMC8405027 DOI: 10.1371/journal.pone.0256830] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2021] [Accepted: 08/16/2021] [Indexed: 11/18/2022] Open
Abstract
Accurate segmentation of breast masses is an essential step in computer-aided diagnosis of breast cancer. The scarcity of annotated training data greatly hinders a model's generalization ability, especially for deep-learning-based methods, yet high-quality image-level annotations are time-consuming and cumbersome to produce in medical image analysis scenarios. In addition, a large amount of weakly annotated data, which contains common anatomical features, is under-utilized. To this end, inspired by teacher-student networks, we propose an Anatomy-Aware Weakly-Supervised learning Network (AAWS-Net) for extracting useful information from mammograms with weak annotations for efficient and accurate breast mass segmentation. Specifically, we adopt a weakly-supervised learning strategy in the Teacher to extract anatomical structure from weakly annotated mammograms by reconstructing the original image. Besides, knowledge distillation is used to suggest morphological differences between benign and malignant masses. The prior knowledge learned by the Teacher is then introduced to the Student in an end-to-end way, which improves the Student network's ability to locate and segment masses. Experiments on CBIS-DDSM show that our method yields promising performance compared with state-of-the-art alternative models for breast mass segmentation in terms of segmentation accuracy and IoU.
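The knowledge-distillation step transfers the Teacher's learned representation to the Student. The abstract does not give the distillation objective, so the sketch below shows the classic Hinton-style temperature-softened KL term as a generic stand-in:

```python
import numpy as np

def softmax(z, t=1.0):
    """Temperature-softened softmax over a logit vector."""
    z = np.asarray(z, dtype=np.float64) / t
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, t=2.0):
    """KL divergence between the softened teacher and student distributions
    (a generic distillation term; not the paper's exact objective)."""
    p = softmax(teacher_logits, t)
    q = softmax(student_logits, t)
    return float(np.sum(p * (np.log(p) - np.log(q))))

# The loss vanishes when the student reproduces the teacher's logits.
print(distillation_loss([2.0, 0.5], [2.0, 0.5]))  # 0.0
```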
Affiliation(s)
- Yeheng Sun
- School of Business, University of Shanghai for Science and Technology, Shanghai, China
- Yule Ji
- School of Business, University of Shanghai for Science and Technology, Shanghai, China
35
Yan W, Meng X, Sun J, Yu H, Wang Z. Intelligent localization and quantitative evaluation of anterior talofibular ligament injury using magnetic resonance imaging of ankle. BMC Med Imaging 2021; 21:130. [PMID: 34454471 PMCID: PMC8403355 DOI: 10.1186/s12880-021-00660-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2021] [Accepted: 08/24/2021] [Indexed: 12/23/2022] Open
Abstract
Background: There is a high incidence of injury to the lateral ligament of the ankle in daily living and sports activities, and injury to the anterior talofibular ligament (ATFL) is the most frequent type of ankle injury. Because of its vulnerability, intelligent localization and injury evaluation of the ATFL are of great clinical significance. Methods: According to the specific characteristics of the bones in different slices, the key slice was extracted by image segmentation and characteristic analysis. The talus and fibula in the key slice were then segmented by distance regularized level set evolution (DRLSE), and the curvature of their contour pixels was calculated to find useful feature points, including the neck of the talus and the inner and outer edges of the fibula. The ATFL area can be located from these feature points, allowing its first-order gray features and second-order texture features to be quantified. A support vector machine (SVM) was used to evaluate ATFL injury. Results: Data were collected retrospectively from 158 patients who underwent MRI and were divided into normal (68) and tear (90) groups. Positioning accuracy and the Dice coefficient were used to measure ATFL localization performance, with mean values of 87.7% and 77.1%, respectively, which supports the subsequent feature extraction. The SVM showed good predictive ability, with accuracy of 93.8%, sensitivity of 88.9%, specificity of 100%, precision of 100%, and an F1 score of 94.2% on the test set. Conclusion: Experimental results indicate that the proposed method is reliable in diagnosing ATFL injury. This study may provide a viable method for aided clinical diagnosis of ligament injuries.
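Computing curvature along a pixel contour to flag landmark candidates can be done with any discrete curvature estimator; the abstract does not say which one was used, so the sketch below assumes the common Menger (three-point circumcircle) estimator:

```python
import numpy as np

def menger_curvature(a, b, c):
    """Curvature at contour point b given neighbors a and c
    (Menger curvature: 4 * triangle area / product of the three side lengths)."""
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    ab, ac = b - a, c - a
    area2 = abs(ab[0] * ac[1] - ab[1] * ac[0])   # twice the triangle area
    denom = np.linalg.norm(b - a) * np.linalg.norm(c - b) * np.linalg.norm(c - a)
    return 0.0 if denom == 0 else 2.0 * area2 / denom

def contour_curvatures(points):
    """Curvature at every point of a closed contour (sequence of (x, y) pixels);
    high-curvature points are candidate anatomical landmarks."""
    n = len(points)
    return [menger_curvature(points[i - 1], points[i], points[(i + 1) % n])
            for i in range(n)]

# Sanity check: points sampled on a unit circle have curvature ~1 everywhere.
theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
curv = contour_curvatures(list(circle))
print(round(float(np.mean(curv)), 3))  # 1.0
```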
Affiliation(s)
- Wen Yan
- School of Precision Instrument and Opto-Electronics Engineering, Tianjin University, Nankai District, 92 Weijin Road, Tianjin, 300072, China
- Xianghong Meng
- Radiology Department, Tianjin Hospital, 406 Jiefangnan Road, Hexi District, Tianjin, 300210, China
- Jinglai Sun
- School of Precision Instrument and Opto-Electronics Engineering, Tianjin University, Nankai District, 92 Weijin Road, Tianjin, 300072, China
- Hui Yu
- School of Precision Instrument and Opto-Electronics Engineering, Tianjin University, Nankai District, 92 Weijin Road, Tianjin, 300072, China.
- Zhi Wang
- Radiology Department, Tianjin Hospital, 406 Jiefangnan Road, Hexi District, Tianjin, 300210, China.