1
Zhang B, Qiu S, Liang T. Dual Attention-Based 3D U-Net Liver Segmentation Algorithm on CT Images. Bioengineering (Basel) 2024; 11:737. [PMID: 39061819] [PMCID: PMC11273630] [DOI: 10.3390/bioengineering11070737] [Received: 06/11/2024] [Revised: 07/11/2024] [Accepted: 07/17/2024]
Abstract
The liver is a vital organ in the human body, and CT images can intuitively display its morphology. Physicians rely on liver CT images to observe its anatomical structure and areas of pathology, providing evidence for clinical diagnosis and treatment planning. To assist physicians in making accurate judgments, artificial intelligence techniques are adopted. Addressing the limitations of existing methods in liver CT image segmentation, such as weak contextual analysis and semantic information loss, we propose a novel Dual Attention-Based 3D U-Net liver segmentation algorithm on CT images. The innovations of our approach are summarized as follows: (1) We improve the 3D U-Net network by introducing residual connections to better capture multi-scale information and alleviate semantic information loss. (2) We propose the DA-Block encoder structure to enhance feature extraction capability. (3) We introduce the CBAM module into skip connections to optimize feature transmission in the encoder, reducing semantic gaps and achieving accurate liver segmentation. To validate the effectiveness of the algorithm, experiments were conducted on the LiTS dataset. The results showed that the Dice coefficient and HD95 index for liver images were 92.56% and 28.09 mm, respectively, representing an improvement of 0.84% and a reduction of 2.45 mm compared to 3D Res-UNet.
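The Dice coefficient reported above measures volumetric overlap between the predicted and reference liver masks. As a rough illustration of the metric (our own `dice_coefficient` helper on toy volumes, not the authors' code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Toy 3D volumes standing in for liver masks
a = np.zeros((4, 4, 4), dtype=np.uint8)
b = np.zeros((4, 4, 4), dtype=np.uint8)
a[1:3, 1:3, 1:3] = 1          # 8 voxels
b[1:3, 1:3, 1:4] = 1          # 12 voxels, 8 of them shared with a
d = dice_coefficient(a, b)    # 2*8 / (8 + 12) = 0.8
```

The HD95 figure is the complementary boundary-distance metric: it penalizes outlying surface points that a pure overlap measure like Dice barely notices.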
Affiliation(s)
- Benyue Zhang
- Key Laboratory of Spectral Imaging Technology CAS, Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an 710119, China
- School of Optoelectronics, University of Chinese Academy of Sciences, Beijing 100408, China
- Shi Qiu
- Key Laboratory of Spectral Imaging Technology CAS, Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an 710119, China
- Ting Liang
- Department of Radiology, The First Affiliated Hospital of Xi’an Jiaotong University, Xi’an 710119, China
2
Lee JM, Park JY, Kim YJ, Kim KG. Deep-learning-based pelvic automatic segmentation in pelvic fractures. Sci Rep 2024; 14:12258. [PMID: 38806582] [PMCID: PMC11133416] [DOI: 10.1038/s41598-024-63093-w] [Received: 12/12/2023] [Accepted: 05/24/2024]
Abstract
With the recent increase in traffic accidents, pelvic fractures are increasing, second only to skull fractures in terms of mortality and risk of complications. Research is actively being conducted on the treatment of intra-abdominal bleeding, the primary cause of death related to pelvic fractures. Considerable preliminary research has also been performed on segmenting tumors and organs. However, studies on clinically useful algorithms for bone and pelvic segmentation, based on developed models, are limited. In this study, we explored the potential of deep-learning models presented in previous studies to accurately segment pelvic regions in X-ray images. Data were collected from X-ray images of 940 patients aged 18 or older at Gachon University Gil Hospital from January 2015 to December 2022. To segment the pelvis, Attention U-Net, Swin U-Net, and U-Net were trained, and the results were compared and analyzed using five-fold cross-validation. The Swin U-Net model outperformed the Attention U-Net and U-Net models, achieving an average sensitivity, specificity, accuracy, and Dice similarity coefficient of 96.77%, 98.50%, 98.03%, and 96.32%, respectively.
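The sensitivity, specificity, and accuracy reported here follow directly from the per-pixel confusion matrix. A minimal illustrative sketch (our own `pixel_metrics` helper on a toy mask, not the study's evaluation code):

```python
import numpy as np

def pixel_metrics(pred, target):
    """Sensitivity, specificity, and accuracy from two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.sum(pred & target)    # true positives
    tn = np.sum(~pred & ~target)  # true negatives
    fp = np.sum(pred & ~target)   # false positives
    fn = np.sum(~pred & target)   # false negatives
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy

pred = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 0]])
target = np.array([[1, 0, 0, 0],
                   [1, 0, 0, 1]])
sens, spec, acc = pixel_metrics(pred, target)  # 2/3, 4/5, 6/8
```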
Affiliation(s)
- Jung Min Lee
- Department of Computer Engineering, College of IT Convergence, Gachon University, Seongnam, Republic of Korea
- Jun Young Park
- Department of Health Sciences and Technology, Gachon Advanced Institute for Health Sciences and Technology, Gachon University, Incheon, Republic of Korea
- Young Jae Kim
- Department of Computer Engineering, College of IT Convergence, Gachon University, Seongnam, Republic of Korea
- Department of Biomedical Engineering, College of Medicine, Gachon University, Incheon, Republic of Korea
- Medical Device R&D Center, Gachon University Gil Hospital, Incheon, Republic of Korea
- Kwang Gi Kim
- Department of Computer Engineering, College of IT Convergence, Gachon University, Seongnam, Republic of Korea
- Department of Biomedical Engineering, College of Medicine, Gachon University, Incheon, Republic of Korea
- Medical Device R&D Center, Gachon University Gil Hospital, Incheon, Republic of Korea
- Department of Health Sciences and Technology, Gachon Advanced Institute for Health Sciences and Technology, Gachon University, Incheon, Republic of Korea
3
Zhang Z, Han J, Ji W, Lou H, Li Z, Hu Y, Wang M, Qi B, Liu S. Improved deep learning for automatic localisation and segmentation of rectal cancer on T2-weighted MRI. J Med Radiat Sci 2024. [PMID: 38654675] [DOI: 10.1002/jmrs.794] [Received: 12/26/2023] [Accepted: 04/09/2024]
Abstract
INTRODUCTION Automatic segmentation of rectal cancer from magnetic resonance imaging (MRI) is valuable for relieving physicians of heavy workloads and enhancing working efficiency. This study aimed to compare the segmentation accuracy of a proposed model with that of three other models and with inter-observer consistency. METHODS A total of 65 patients with rectal cancer who underwent MRI examination were enrolled and randomly divided into a training cohort (n = 45) and a validation cohort (n = 20). Two experienced radiologists independently segmented rectal cancer lesions. A novel segmentation model (AttSEResUNet) was trained on T2WI based on ResUNet and attention mechanisms. The segmentation performance of AttSEResUNet, U-Net, ResUNet and U-Net with Attention Gate (AttUNet) was compared using the Dice similarity coefficient (DSC), Hausdorff distance (HD), mean distance to agreement (MDA) and Jaccard index. The variability of the automatic segmentation models and the inter-observer variability were also evaluated. RESULTS AttSEResUNet with post-processing achieved a perfect lesion recognition rate (100%) with no false recognitions (0), and its evaluation metrics outperformed the other three models for both independent readers (observer 1: DSC = 0.839 ± 0.112, HD = 9.55 ± 6.68, MDA = 0.556 ± 0.722, Jaccard index = 0.736 ± 0.150; observer 2: DSC = 0.856 ± 0.099, HD = 11.0 ± 10.1, MDA = 0.789 ± 1.07, Jaccard index = 0.673 ± 0.130). Its segmentation variability was comparable to the manual inter-observer variability (DSC = 0.857 ± 0.115, HD = 10.0 ± 10.0, MDA = 0.704 ± 1.17, Jaccard index = 0.666 ± 0.139). CONCLUSION Compared with the other three models, the proposed AttSEResUNet contoured rectal tumours in axial T2WI images more accurately, with variability similar to that between observers.
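The study reports both DSC and the Jaccard index; the two overlap measures are monotonically related by J = D / (2 − D), so reporting both is mostly a convention. A quick sanity check of the identity (illustrative sketch with our own `dice_and_jaccard` helper):

```python
import numpy as np

def dice_and_jaccard(pred, target):
    """Dice = 2|A∩B| / (|A| + |B|); Jaccard = |A∩B| / |A∪B|."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.sum(pred & target)
    union = np.sum(pred | target)
    dice = 2.0 * inter / (pred.sum() + target.sum())
    jaccard = inter / union
    return float(dice), float(jaccard)

a = np.array([1, 1, 1, 0, 0])
b = np.array([1, 1, 0, 1, 0])
d, j = dice_and_jaccard(a, b)     # d = 2/3, j = 1/2
# The identity J = D / (2 - D) holds for any pair of masks
assert abs(j - d / (2 - d)) < 1e-12
```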
Affiliation(s)
- Zaixian Zhang
- Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
- Junqi Han
- Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
- Weina Ji
- Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
- Henan Lou
- Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
- Zhiming Li
- Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
- Yabin Hu
- Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
- Mingjia Wang
- College of Automation and Electronic Engineering, Qingdao University of Science and Technology, Qingdao, China
- Baozhu Qi
- College of Automation and Electronic Engineering, Qingdao University of Science and Technology, Qingdao, China
- Shunli Liu
- Department of Radiology, The Affiliated Hospital of Qingdao University, Qingdao, China
4
Barón JR, Bernabé G, González-Férez P, García JM, Casas G, González-Carrillo J. Improving a Deep Learning Model to Accurately Diagnose LVNC. J Clin Med 2023; 12:7633. [PMID: 38137702] [PMCID: PMC10743747] [DOI: 10.3390/jcm12247633] [Received: 10/30/2023] [Revised: 11/23/2023] [Accepted: 12/06/2023]
Abstract
Accurate diagnosis of Left Ventricular Noncompaction Cardiomyopathy (LVNC) is critical for proper patient treatment but remains challenging. This work improves LVNC detection by improving left ventricle segmentation in cardiac MR images. A trabeculated left ventricle indicates LVNC, but automatic segmentation is difficult. We present techniques to improve segmentation and evaluate their impact on LVNC diagnosis. Three main methods are introduced: (1) using full 800 × 800 MR images rather than 512 × 512; (2) a clustering algorithm to eliminate neural network hallucinations; (3) advanced network architectures including Attention U-Net, MSA-UNet, and U-Net++. Experiments utilize cardiac MR datasets from three different hospitals. U-Net++ achieves the best segmentation performance using 800 × 800 images: the larger images improve the mean segmentation Dice score by 0.02 over the baseline U-Net, the clustering algorithm improves the mean Dice score by 0.06 on the images it affects, and U-Net++ provides a further 0.02 mean Dice score over the baseline U-Net. For LVNC diagnosis, U-Net++ achieves 0.896 accuracy, 0.907 precision, and a 0.912 F1-score, outperforming the baseline U-Net. The proposed techniques enhance LVNC detection, but differences between hospitals reveal remaining challenges for generalization. This work provides validated methods for precise LVNC diagnosis.
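The clustering step above removes spurious network outputs ("hallucinations"). One common realization of this idea, shown here as our own assumption rather than the authors' exact algorithm, is to keep only the largest connected component of the predicted mask:

```python
import numpy as np
from collections import deque

def largest_component(mask: np.ndarray) -> np.ndarray:
    """Keep only the largest 4-connected component of a binary 2D mask."""
    mask = mask.astype(bool)
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    sizes = {}
    for i, j in zip(*np.nonzero(mask)):
        if labels[i, j]:
            continue                      # already visited
        current += 1
        queue = deque([(i, j)])
        labels[i, j] = current
        size = 0
        while queue:                      # breadth-first flood fill
            y, x = queue.popleft()
            size += 1
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
        sizes[current] = size
    if not sizes:
        return np.zeros_like(mask)
    keep = max(sizes, key=sizes.get)      # label of the biggest region
    return labels == keep

pred = np.array([[1, 1, 0, 0, 0],
                 [1, 1, 0, 0, 1],   # lone pixel: a "hallucination"
                 [0, 0, 0, 0, 0]])
cleaned = largest_component(pred)   # drops the isolated pixel at (1, 4)
```

For anatomical structures like the left ventricle, which appear as a single region per slice, this kind of post-processing cannot hurt recall of the true structure but eliminates disconnected false positives.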
Affiliation(s)
- Jaime Rafael Barón
- Computer Engineering Department, University of Murcia, 30100 Murcia, Spain
- Gregorio Bernabé
- Computer Engineering Department, University of Murcia, 30100 Murcia, Spain
- Pilar González-Férez
- Computer Engineering Department, University of Murcia, 30100 Murcia, Spain
- José Manuel García
- Computer Engineering Department, University of Murcia, 30100 Murcia, Spain
- Guillem Casas
- Hospital Universitari Vall d’Hebron, 08035 Barcelona, Spain
5
Lakshmipriya B, Pottakkat B, Ramkumar G. Deep learning techniques in liver tumour diagnosis using CT and MR imaging - A systematic review. Artif Intell Med 2023; 141:102557. [PMID: 37295904] [DOI: 10.1016/j.artmed.2023.102557] [Received: 04/04/2022] [Revised: 04/15/2023] [Accepted: 04/18/2023]
Abstract
Deep learning has become a thriving force in the computer-aided diagnosis of liver cancer, as it solves extremely complicated challenges with high accuracy over time and facilitates medical experts in their diagnostic and treatment procedures. This paper presents a comprehensive systematic review of deep learning techniques applied to various applications pertaining to liver images, the challenges faced by clinicians in liver tumour diagnosis, and how deep learning bridges the gap between clinical practice and technological solutions, with an in-depth summary of 113 articles. Since deep learning is an emerging, revolutionary technology, recent state-of-the-art research on liver images is reviewed, with a focus on classification, segmentation and clinical applications in the management of liver diseases. Additionally, similar review articles in the literature are reviewed and compared. The review concludes by presenting contemporary trends and unaddressed research issues in the field of liver tumour diagnosis, offering directions for future research.
Affiliation(s)
- B Lakshmipriya
- Department of Surgical Gastroenterology, Jawaharlal Institute of Postgraduate Medical Education and Research, Puducherry, India
- Biju Pottakkat
- Department of Surgical Gastroenterology, Jawaharlal Institute of Postgraduate Medical Education and Research, Puducherry, India
- G Ramkumar
- Department of Radio Diagnosis, Jawaharlal Institute of Postgraduate Medical Education and Research, Puducherry, India
6
Mahmud S, Ibtehaz N, Khandakar A, Rahman MS, Gonzales AJR, Rahman T, Hossain MS, Hossain MSA, Faisal MAA, Abir FF, Musharavati F, Chowdhury MEH. NABNet: A Nested Attention-guided BiConvLSTM network for a robust prediction of Blood Pressure components from reconstructed Arterial Blood Pressure waveforms using PPG and ECG signals. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104247]
7
A Sustainable Deep Learning-Based Framework for Automated Segmentation of COVID-19 Infected Regions: Using U-Net with an Attention Mechanism and Boundary Loss Function. Electronics 2022. [DOI: 10.3390/electronics11152296]
Abstract
COVID-19 has been spreading rapidly, affecting billions of people globally, with significant public health impacts. Biomedical imaging, such as computed tomography (CT), has significant potential as a possible substitute for the screening process. Automatic segmentation of images is therefore highly desirable as clinical decision support for an extensive evaluation of disease control and monitoring. It performs a central role in the precise segmentation of infected regions in CT scans, thus helping in screening, diagnosing, and disease monitoring. For this purpose, we introduce a deep learning framework for automated segmentation of COVID-19 infected lesions/regions in lung CT scan images. Specifically, we adopted a segmentation model, i.e., U-Net, and utilized an attention mechanism to enhance the framework’s ability to segment virus-infected regions. Since not all of the features extracted by the encoder are valuable for segmentation, we applied the U-Net architecture with an attention mechanism for a better representation of the features. Moreover, we applied a boundary loss function to deal with small and unbalanced lesion segmentations. Using different public CT scan image datasets, we validated the framework’s effectiveness in contrast with other segmentation techniques. The experimental outcomes showed the improved performance of the presented framework for the automated segmentation of lungs and infected areas in CT scan images. We also considered both the boundary loss and a weighted binary cross-entropy Dice loss function. The overall Dice accuracies of the framework are 0.93 and 0.76 for the lungs and the COVID-19 infected regions, respectively.
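The exact loss weighting is in the paper; the general shape of a weighted binary cross-entropy plus soft Dice loss for small, unbalanced lesions can be sketched as follows (an assumed formulation with our own `bce_dice_loss` helper and `pos_weight` parameter, not the authors' code):

```python
import numpy as np

def bce_dice_loss(pred, target, pos_weight=2.0, eps=1e-7):
    """Weighted binary cross-entropy plus soft Dice loss on probability maps."""
    pred = np.clip(pred, eps, 1 - eps)          # avoid log(0)
    # Weighted BCE: up-weight the (rare) lesion pixels
    bce = -(pos_weight * target * np.log(pred)
            + (1 - target) * np.log(1 - pred)).mean()
    # Soft Dice on probabilities rather than hard labels
    inter = (pred * target).sum()
    dice = (2 * inter + eps) / (pred.sum() + target.sum() + eps)
    return bce + (1 - dice)

target = np.array([0.0, 0.0, 1.0, 1.0])
good = np.array([0.1, 0.1, 0.9, 0.9])   # close to the target
bad = np.array([0.9, 0.9, 0.1, 0.1])    # inverted prediction
assert bce_dice_loss(good, target) < bce_dice_loss(bad, target)
```

The Dice term counters class imbalance globally, while the `pos_weight` factor in the BCE term pushes per-pixel gradients toward the minority lesion class.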
8
Rethinking U-Net from an Attention Perspective with Transformers for Osteosarcoma MRI Image Segmentation. Comput Intell Neurosci 2022; 2022:7973404. [PMID: 35707196] [PMCID: PMC9192230] [DOI: 10.1155/2022/7973404] [Received: 02/27/2022] [Revised: 04/24/2022] [Accepted: 04/28/2022]
Abstract
Osteosarcoma is one of the most common primary malignancies of bone in the pediatric and adolescent populations. The morphology and size of osteosarcoma MRI images often show great variability and randomness across patients. In developing countries, with large populations and a lack of medical resources, it is difficult to effectively address the difficulties of early diagnosis of osteosarcoma with limited physician manpower alone. In addition, with the proposal of precision medicine, existing MRI image segmentation models for osteosarcoma face the challenges of insufficient segmentation accuracy and high resource consumption. Inspired by the transformer's self-attention mechanism, this paper proposes a lightweight osteosarcoma image segmentation architecture, UATransNet, by adding a multilevel guided self-aware attention module (MGAM) to the encoder-decoder architecture of U-Net. We successively perform dataset classification optimization and remove the irrelevant background from the MRI images. UATransNet is then designed with a transformer self-attention component (TSAC) and a global context aggregation component (GCAC) at the bottom of the encoder-decoder architecture to integrate local features with global dependencies and aggregate context into the learned features. In addition, we apply dense residual learning to the convolution module and combine it with multiscale skip connections to improve the feature extraction capability. We experimentally evaluate more than 80,000 osteosarcoma MRI images and show that UATransNet yields more accurate segmentation performance: the IoU and DSC values for osteosarcoma are 0.922 ± 0.03 and 0.921 ± 0.04, respectively, providing intuitive, accurate, and efficient decision support for physicians.
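At the core of transformer components like the TSAC described above is scaled dot-product self-attention over a sequence of feature tokens. A minimal numpy sketch of that primitive (our own illustration with assumed names `self_attention`, `wq`, `wk`, `wv`; not UATransNet itself):

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention over a sequence of feature tokens."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])          # pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ v, weights

rng = np.random.default_rng(0)
x = rng.normal(size=(6, 8))       # 6 tokens (e.g. flattened patches), 8 dims
wq, wk, wv = (rng.normal(size=(8, 8)) for _ in range(3))
out, attn = self_attention(x, wq, wk, wv)
```

Each output token is a convex combination of all value vectors, which is what lets such a module capture the global dependencies that plain convolutions, with their local receptive fields, miss.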