1. Ding X, Jiang X, Zheng H, Shi H, Wang B, Chan S. MARes-Net: multi-scale attention residual network for jaw cyst image segmentation. Front Bioeng Biotechnol 2024;12:1454728. PMID: 39161348; PMCID: PMC11330813; DOI: 10.3389/fbioe.2024.1454728. Received 25 Jun 2024; accepted 25 Jul 2024.
Abstract
Jaw cysts are fluid-containing cystic lesions that can occur in any part of the jaw and cause facial swelling, dental lesions, jaw fractures, and other associated issues. Due to the diversity and complexity of jaw images, existing deep-learning methods still face challenges in segmentation. To this end, we propose MARes-Net, an innovative multi-scale attention residual network architecture. Firstly, residual connections are used to optimize the encoder-decoder process, which effectively mitigates the vanishing-gradient problem and improves training efficiency and optimization ability. Secondly, the scale-aware feature extraction module (SFEM) significantly enhances the network's perceptual abilities by extending its receptive field across various scales, spaces, and channel dimensions. Thirdly, the multi-scale compression excitation module (MCEM) compresses and excites the feature map, and combines it with contextual information to obtain better model performance. Furthermore, the introduction of the attention gate module refines the feature map output. Finally, rigorous experimentation was conducted on the original jaw cyst dataset provided by Quzhou People's Hospital to verify the validity of the MARes-Net architecture. The experimental data showed that the precision, recall, IoU, and F1-score of MARes-Net reached 93.84%, 93.70%, 86.17%, and 93.21%, respectively. Compared with existing models, MARes-Net accurately delineates and localizes anatomical structures in jaw cyst image segmentation.
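For reference, the four metrics this abstract reports (precision, recall, IoU, F1-score) can all be derived from the overlap counts of a predicted and a ground-truth binary mask. A minimal NumPy sketch for illustration, not the authors' implementation (the function name `segmentation_metrics` is hypothetical):

```python
import numpy as np

def segmentation_metrics(pred, target):
    """Precision, recall, IoU and F1 for two binary masks of equal shape."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    tp = np.logical_and(pred, target).sum()    # true positives
    fp = np.logical_and(pred, ~target).sum()   # false positives
    fn = np.logical_and(~pred, target).sum()   # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, iou, f1
```

For example, a prediction with one true positive, one false positive and one false negative yields precision 0.5, recall 0.5, IoU 1/3 and F1 0.5.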
Affiliation(s)
- Xiaokang Ding: College of Mechanical Engineering, Quzhou University, Quzhou, China
- Xiaoliang Jiang: College of Mechanical Engineering, Quzhou University, Quzhou, China
- Huixia Zheng: Department of Stomatology, Quzhou People's Hospital, The Quzhou Affiliated Hospital of Wenzhou Medical University, Quzhou, China
- Hualuo Shi: College of Mechanical Engineering, Quzhou University, Quzhou, China; School of Mechanical Engineering, Hangzhou Dianzi University, Hangzhou, China
- Ban Wang: School of Mechanical Engineering, Hangzhou Dianzi University, Hangzhou, China
- Sixian Chan: College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, China
2. Kunkyab T, Bahrami Z, Zhang H, Liu Z, Hyde D. A deep learning-based framework (Co-ReTr) for auto-segmentation of non-small cell lung cancer in computed tomography images. J Appl Clin Med Phys 2024;25:e14297. PMID: 38373289; DOI: 10.1002/acm2.14297. Received 29 Jun 2023; revised 15 Jan 2024; accepted 23 Jan 2024.
Abstract
PURPOSE Deep learning-based auto-segmentation algorithms can improve clinical workflow by defining accurate regions of interest while reducing manual labor. Over the past decade, convolutional neural networks (CNNs) have become prominent in medical image segmentation applications. However, CNNs have limitations in learning long-range spatial dependencies due to the locality of the convolutional layers. Transformers were introduced to address this challenge: with a self-attention mechanism, even the first layer of information processing makes connections between distant image locations. Our paper presents a novel framework that bridges these two techniques, CNNs and transformers, to segment the gross tumor volume (GTV) accurately and efficiently in computed tomography (CT) images of non-small cell lung cancer (NSCLC) patients. METHODS Under this framework, inputs at multiple resolutions were used with multi-depth backbones to retain the benefits of both high-resolution and low-resolution images in the deep learning architecture. Furthermore, a deformable transformer was utilized to learn long-range dependencies on the extracted features. To reduce computational complexity and to efficiently process multi-scale, multi-depth, high-resolution 3D images, this transformer attends to a small set of key positions identified by a self-attention mechanism. We evaluated the performance of the proposed framework on an NSCLC dataset containing 563 training images and 113 test images. Our deep learning algorithm was benchmarked against five other similar deep learning models. RESULTS The experimental results indicate that our proposed framework outperforms other CNN-based, transformer-based, and hybrid methods in terms of Dice score (0.92) and Hausdorff distance (1.33). Therefore, our proposed model could potentially improve the efficiency of auto-segmentation of early-stage NSCLC during the clinical workflow. This type of framework may also facilitate online adaptive radiotherapy, where an efficient auto-segmentation workflow is required. CONCLUSIONS Our deep learning framework, based on CNNs and transformers, performs auto-segmentation efficiently and could potentially assist the clinical radiotherapy workflow.
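The two evaluation metrics used here, Dice score and Hausdorff distance, have simple definitions that are easy to state in code. A minimal NumPy sketch (illustrative only; the Hausdorff distance is computed here on explicit point sets, whereas segmentation toolkits usually extract boundary points from masks first):

```python
import numpy as np

def dice_score(pred, target):
    """Dice similarity coefficient between two binary masks of equal shape."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, target).sum() / denom

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between point sets a, b of shape (N, d)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))
```

For instance, masks sharing 2 foreground voxels with foreground sizes 3 and 2 give Dice = 2·2/5 = 0.8.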
Affiliation(s)
- Tenzin Kunkyab: Department of Computer Science, Mathematics, Physics and Statistics, University of British Columbia Okanagan, Kelowna, British Columbia, Canada
- Zhila Bahrami: School of Engineering, The University of British Columbia Okanagan Campus, Kelowna, British Columbia, Canada
- Heqing Zhang: School of Engineering, The University of British Columbia Okanagan Campus, Kelowna, British Columbia, Canada
- Zheng Liu: School of Engineering, The University of British Columbia Okanagan Campus, Kelowna, British Columbia, Canada
- Derek Hyde: Department of Medical Physics, BC Cancer - Kelowna, Kelowna, Canada
4. Ferrante M, Rinaldi L, Botta F, Hu X, Dolp A, Minotti M, De Piano F, Funicelli G, Volpe S, Bellerba F, De Marco P, Raimondi S, Rizzo S, Shi K, Cremonesi M, Jereczek-Fossa BA, Spaggiari L, De Marinis F, Orecchia R, Origgi D. Application of nnU-Net for Automatic Segmentation of Lung Lesions on CT Images and Its Implication for Radiomic Models. J Clin Med 2022;11:7334. PMID: 36555950; PMCID: PMC9784875; DOI: 10.3390/jcm11247334. Received 18 Oct 2022; revised 5 Dec 2022; accepted 7 Dec 2022.
Abstract
Radiomics investigates the predictive role of quantitative parameters calculated from radiological images. In oncology, tumour segmentation constitutes a crucial step of the radiomic workflow. Manual segmentation is time-consuming and prone to inter-observer variability. In this study, a state-of-the-art deep-learning network for automatic segmentation (nnU-Net) was applied to computed tomography images of lung tumour patients, and its impact on the performance of survival radiomic models was assessed. In total, 899 patients were included, from two proprietary datasets and one public dataset. Different network architectures (2D, 3D) were trained and tested on different combinations of the datasets. Automatic segmentations were compared to reference manual segmentations performed by physicians using the DICE similarity coefficient. Subsequently, the accuracy of radiomic models for survival classification based on either manual or automatic segmentations was compared, considering both hand-crafted and deep-learning features. The best agreement between automatic and manual contours (DICE = 0.78 ± 0.12) was achieved by averaging 2D and 3D predictions and applying customised post-processing. The accuracy of the survival classifier (ranging between 0.65 and 0.78) was not statistically different when using manual versus automatic contours, both with hand-crafted and deep features. These results support the promising role nnU-Net can play in automatic segmentation, accelerating the radiomic workflow without impairing the models' accuracy. Further investigations on different clinical endpoints and populations are encouraged to confirm and generalise these findings.
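The ensembling step described above, averaging the 2D and 3D model predictions before binarising, can be sketched as follows. This is an illustrative NumPy fragment with a hypothetical function name, and it omits the paper's customised post-processing:

```python
import numpy as np

def ensemble_masks(prob_2d, prob_3d, threshold=0.5):
    """Average the foreground-probability maps of a 2D and a 3D model,
    then threshold the mean to obtain the final binary segmentation."""
    mean_prob = (np.asarray(prob_2d, dtype=float)
                 + np.asarray(prob_3d, dtype=float)) / 2.0
    return (mean_prob >= threshold).astype(np.uint8)
```

A voxel is kept only when the averaged confidence of both models reaches the threshold, which tends to suppress disagreements between the two architectures.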
Affiliation(s)
- Matteo Ferrante: Medical Physics Unit, IEO European Institute of Oncology IRCCS, via Ripamonti 435, 20141 Milan, Italy
- Lisa Rinaldi: Radiation Research Unit, IEO European Institute of Oncology IRCCS, via Ripamonti 435, 20141 Milan, Italy
- Francesca Botta: Medical Physics Unit, IEO European Institute of Oncology IRCCS, via Ripamonti 435, 20141 Milan, Italy
- Xiaobin Hu: Department of Informatics, Technical University of Munich, Arcisstraße 21, 80333 Munich, Germany
- Andreas Dolp: Department of Informatics, Technical University of Munich, Arcisstraße 21, 80333 Munich, Germany
- Marta Minotti: Division of Radiology, IEO European Institute of Oncology IRCCS, via Ripamonti 435, 20141 Milan, Italy
- Francesca De Piano: Division of Radiology, IEO European Institute of Oncology IRCCS, via Ripamonti 435, 20141 Milan, Italy
- Gianluigi Funicelli: Division of Radiology, IEO European Institute of Oncology IRCCS, via Ripamonti 435, 20141 Milan, Italy
- Stefania Volpe: Division of Radiation Oncology, IEO European Institute of Oncology IRCCS, via Ripamonti 435, 20141 Milan, Italy; Department of Oncology and Hemato-Oncology, University of Milan, via Festa del Perdono 7, 20122 Milan, Italy
- Federica Bellerba: Department of Experimental Oncology, IEO European Institute of Oncology IRCCS, via Ripamonti 435, 20141 Milan, Italy
- Paolo De Marco: Medical Physics Unit, IEO European Institute of Oncology IRCCS, via Ripamonti 435, 20141 Milan, Italy
- Sara Raimondi: Department of Experimental Oncology, IEO European Institute of Oncology IRCCS, via Ripamonti 435, 20141 Milan, Italy
- Stefania Rizzo: Clinica di Radiologia EOC, Istituto Imaging della Svizzera Italiana (IIMSI), via Tesserete 46, 6900 Lugano, Switzerland; Faculty of Biomedical Sciences, Università della Svizzera Italiana (USI), via G. Buffi 13, 6900 Lugano, Switzerland
- Kuangyu Shi: Chair for Computer-Aided Medical Procedures, Department of Informatics, Technical University of Munich, Arcisstraße 21, 80333 Munich, Germany; Department of Nuclear Medicine, Bern University Hospital, University of Bern, Freiburgstrasse 18, 3010 Bern, Switzerland
- Marta Cremonesi: Radiation Research Unit, IEO European Institute of Oncology IRCCS, via Ripamonti 435, 20141 Milan, Italy
- Barbara A. Jereczek-Fossa: Division of Radiation Oncology, IEO European Institute of Oncology IRCCS, via Ripamonti 435, 20141 Milan, Italy; Department of Oncology and Hemato-Oncology, University of Milan, via Festa del Perdono 7, 20122 Milan, Italy
- Lorenzo Spaggiari: Department of Oncology and Hemato-Oncology, University of Milan, via Festa del Perdono 7, 20122 Milan, Italy; Division of Thoracic Surgery, IEO European Institute of Oncology IRCCS, via Ripamonti 435, 20141 Milan, Italy
- Filippo De Marinis: Division of Thoracic Oncology, IEO European Institute of Oncology IRCCS, via Ripamonti 435, 20141 Milan, Italy
- Roberto Orecchia: Division of Radiology, IEO European Institute of Oncology IRCCS, via Ripamonti 435, 20141 Milan, Italy; Scientific Direction, IEO European Institute of Oncology IRCCS, via Ripamonti 435, 20141 Milan, Italy
- Daniela Origgi: Medical Physics Unit, IEO European Institute of Oncology IRCCS, via Ripamonti 435, 20141 Milan, Italy
5. Gu J, Li B, Shu H, Zhu J, Qiu Q, Bai T. Development and verification of radiomics framework for computed tomography image segmentation. Med Phys 2022;49:6527-6537. PMID: 35917213; PMCID: PMC9805121; DOI: 10.1002/mp.15904. Received 31 Mar 2022; revised 19 Jul 2022; accepted 21 Jul 2022.
Abstract
BACKGROUND Radiomics has been considered an imaging marker for capturing quantitative image information (QII). The introduction of radiomics to image segmentation is desirable but challenging. PURPOSE This study aims to develop and validate a radiomics-based framework for image segmentation (RFIS). METHODS RFIS is designed using features (svfeatures) extracted from volumes (swvolumes) created by a sliding window. The 53 svfeatures are extracted from 11 phantom series. Outliers in the svfeature datasets are detected by isolation forest (iForest) and set to the mean value. The percentage coefficient of variation (%COV) is calculated to evaluate the reproducibility of svfeatures. RFIS is constructed and applied to gross target volume (GTV) segmentation from the peritumoral region (GTV with a 10 mm margin) to assess its feasibility. In total, 127 lung cancer images are enrolled. The test-retest method, correlation matrix, and Mann-Whitney U test (p < 0.05) are used to select non-redundant svfeatures of statistical significance from the reproducible svfeatures. The synthetic minority over-sampling technique is utilized to balance the minority group in the training sets. A support vector machine is employed for RFIS construction, tuned in the training set using 10-fold stratified cross-validation, and then evaluated in the test sets. The swvolumes with consistent classification results are grouped and merged. Mode filtering is performed to remove very small subvolumes and create relatively large regions of completely uniform character. In addition, RFIS performance is evaluated by the area under the receiver operating characteristic (ROC) curve (AUC), accuracy, sensitivity, specificity, and Dice similarity coefficient (DSC). RESULTS 30,249 phantom and 145,008 patient image swvolumes were analyzed. Forty-nine (92.45% of 53) svfeatures showed excellent reproducibility (%COV < 15). Forty-five svfeatures (91.84% of 49), covering five categories, passed the test-retest analysis. Thirteen svfeatures (28.89% of 45) were selected for RFIS construction. With cross-validation, RFIS showed a sensitivity of 0.848 (95% CI: 0.844-0.883), a specificity of 0.821 (95% CI: 0.818-0.825), an accuracy of 83.48% (95% CI: 83.27%-83.70%), and an AUC of 0.906 (95% CI: 0.904-0.908). In the test set, the sensitivity, specificity, accuracy, and AUC were 0.762 (95% CI: 0.754-0.770), 0.840 (95% CI: 0.837-0.844), 82.29% (95% CI: 81.90%-82.60%), and 0.877 (95% CI: 0.873-0.881), respectively. The GTV was segmented by grouping and merging swvolumes with identical classification results. The mean DSC after mode filtering was 0.707 ± 0.093 in the training sets and 0.688 ± 0.072 in the test sets. CONCLUSION Reproducible svfeatures can capture the differences in QII among swvolumes. RFIS can be applied to swvolume classification, which achieves image segmentation by grouping and merging the swvolumes with similar QII.
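The %COV reproducibility screen described above (retaining features whose variation across repeated measurements stays below 15%) can be sketched as follows. This is a minimal NumPy illustration under the common definition %COV = 100 · std / |mean|; the function name is hypothetical, not from the paper:

```python
import numpy as np

def reproducible_features(measurements, names, threshold=15.0):
    """Select features whose percentage coefficient of variation (%COV),
    100 * sample std / |mean| across repeated measurements, is below threshold.

    measurements: array-like of shape (n_repeats, n_features)
    names: feature names, one per column
    """
    m = np.asarray(measurements, dtype=float)
    cov = 100.0 * m.std(axis=0, ddof=1) / np.abs(m.mean(axis=0))
    return [name for name, c in zip(names, cov) if c < threshold]
```

For example, a feature measured as 10.0 and 10.2 on test and retest has %COV of about 1.4 and is kept, while one measured as 1.0 and 2.0 has %COV of about 47 and is rejected.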
Affiliation(s)
- Jiabing Gu: Laboratory of Image Science and Technology, Southeast University, Jiangsu Provincial Joint International Research Laboratory of Medical Information Processing, Centre de Recherche en Information Biomédicale Sino-français (CRIBs), Nanjing, P. R. China; Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Baosheng Li: Laboratory of Image Science and Technology, Southeast University, Jiangsu Provincial Joint International Research Laboratory of Medical Information Processing, Centre de Recherche en Information Biomédicale Sino-français (CRIBs), Nanjing, P. R. China; Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Huazhong Shu: Laboratory of Image Science and Technology, Southeast University, Jiangsu Provincial Joint International Research Laboratory of Medical Information Processing, Centre de Recherche en Information Biomédicale Sino-français (CRIBs), Nanjing, P. R. China
- Jian Zhu: Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China; Shandong Key Laboratory of Digital Medicine and Computer Assisted Surgery, The Affiliated Hospital of Qingdao University, Qingdao, P. R. China
- Qingtao Qiu: Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Tong Bai: Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China