1. Ren J, Hochreuter K, Kallehauge JF, Korreman SS. UMamba Adjustment: Advancing GTV Segmentation for Head and Neck Cancer in MRI-Guided RT with UMamba and NnU-Net ResEnc Planner. In: Head and Neck Tumor Segmentation for MR-Guided Applications: First MICCAI Challenge, HNTS-MRG 2024, Held in Conjunction with MICCAI 2024, Marrakesh, Morocco, October 17, 2024, Proceedings. 2025;15273:123-135. [PMID: 40236615; PMCID: PMC11997962; DOI: 10.1007/978-3-031-83274-1_9]
Abstract
Magnetic Resonance Imaging (MRI) plays a crucial role in MRI-guided adaptive radiotherapy for head and neck cancer (HNC) due to its superior soft-tissue contrast. However, accurately segmenting the gross tumor volume (GTV), which includes both the primary tumor (GTVp) and lymph nodes (GTVn), remains challenging. Recently, two deep learning segmentation innovations have shown great promise: UMamba, which effectively captures long-range dependencies, and the nnU-Net Residual Encoder (ResEnc), which enhances feature extraction through multistage residual blocks. In this study, we integrate these strengths into a novel approach, termed 'UMambaAdj'. Our proposed method was evaluated on the HNTS-MRG 2024 challenge test set using pre-RT T2-weighted MRI images, achieving an aggregated Dice Similarity Coefficient (DSC_agg) of 0.751 for GTVp and 0.842 for GTVn, with a mean DSC_agg of 0.796. This approach demonstrates potential for more precise tumor delineation in MRI-guided adaptive radiotherapy, ultimately improving treatment outcomes for HNC patients. Team: DCPT-Stine's group.
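For context, the aggregated Dice similarity coefficient (DSC_agg) used by the HNTS-MRG 2024 challenge pools intersection and volume counts across all patients before forming the ratio, rather than averaging per-case Dice scores. A minimal sketch of that metric (function and variable names are illustrative, not the authors' code):

```python
import numpy as np

def aggregated_dsc(preds, refs):
    """Aggregated Dice (DSC_agg): pool intersection and volume sums
    over all cases before taking the ratio, so each tumor contributes
    in proportion to its voxel count."""
    inter = 0.0
    total = 0.0
    for pred, ref in zip(preds, refs):
        pred = pred.astype(bool)
        ref = ref.astype(bool)
        inter += np.logical_and(pred, ref).sum()
        total += pred.sum() + ref.sum()
    return 2.0 * inter / total if total > 0 else np.nan

# Usage: one binary mask pair per patient, e.g. for the primary tumor:
# dsc_agg_gtvp = aggregated_dsc(gtvp_predictions, gtvp_references)
```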
Affiliation(s)
- Jintao Ren: Department of Clinical Medicine, Aarhus University, Nordre Palle Juul-Jensens Blvd. 11, 8200 Aarhus, Denmark; Aarhus University Hospital, Danish Centre for Particle Therapy, Palle Juul-Jensens Blvd. 25, 8200 Aarhus, Denmark
- Kim Hochreuter: Department of Clinical Medicine, Aarhus University, Nordre Palle Juul-Jensens Blvd. 11, 8200 Aarhus, Denmark; Aarhus University Hospital, Danish Centre for Particle Therapy, Palle Juul-Jensens Blvd. 25, 8200 Aarhus, Denmark
- Jesper Folsted Kallehauge: Department of Clinical Medicine, Aarhus University, Nordre Palle Juul-Jensens Blvd. 11, 8200 Aarhus, Denmark; Aarhus University Hospital, Danish Centre for Particle Therapy, Palle Juul-Jensens Blvd. 25, 8200 Aarhus, Denmark
- Stine Sofia Korreman: Department of Clinical Medicine, Aarhus University, Nordre Palle Juul-Jensens Blvd. 11, 8200 Aarhus, Denmark; Aarhus University Hospital, Danish Centre for Particle Therapy, Palle Juul-Jensens Blvd. 25, 8200 Aarhus, Denmark; Aarhus University, Department of Oncology, Palle Juul-Jensens Blvd. 35, 8200 Aarhus, Denmark
2. Toosi A, Shiri I, Zaidi H, Rahmim A. Segmentation-Free Outcome Prediction from Head and Neck Cancer PET/CT Images: Deep Learning-Based Feature Extraction from Multi-Angle Maximum Intensity Projections (MA-MIPs). Cancers (Basel). 2024;16:2538. [PMID: 39061178; PMCID: PMC11274485; DOI: 10.3390/cancers16142538]
Abstract
We introduce an innovative, simple, and effective segmentation-free approach for survival analysis of head and neck cancer (HNC) patients from PET/CT images. By harnessing deep learning-based feature extraction techniques and multi-angle maximum intensity projections (MA-MIPs) applied to Fluorodeoxyglucose Positron Emission Tomography (FDG-PET) images, our proposed method eliminates the need for manual segmentations of regions-of-interest (ROIs) such as primary tumors and involved lymph nodes. Instead, a state-of-the-art object detection model is trained on the CT images to automatically crop the head and neck anatomical area from the PET volumes, rather than only the lesions or involved lymph nodes. A pre-trained deep convolutional neural network backbone is then used to extract deep features from MA-MIPs obtained from 72 multi-angle axial rotations of the cropped PET volumes. These deep features extracted from multiple projection views of the PET volumes are then aggregated, fused, and employed to perform recurrence-free survival analysis on a cohort of 489 HNC patients. The proposed approach outperforms the best-performing method on the target dataset for the task of recurrence-free survival analysis. By circumventing manual delineation of the malignancies on the FDG PET/CT images, our approach eliminates the dependency on subjective interpretations and greatly enhances the reproducibility of the proposed survival analysis method. The code for this work is publicly released.
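The MA-MIP construction is straightforward to sketch: rotate the cropped PET volume about its axial axis in 72 steps of 5 degrees and take a maximum intensity projection of each rotated volume. The sketch below is a minimal illustration under that reading of the abstract; the rotation axis, interpolation order, and downstream backbone are assumptions, not the authors' exact pipeline:

```python
import numpy as np
from scipy.ndimage import rotate

def multi_angle_mips(pet_volume, n_angles=72):
    """Rotate a (z, y, x) PET volume about the z (axial) axis and take a
    maximum intensity projection along y for each rotation angle."""
    step = 360.0 / n_angles  # 5 degrees per step for 72 views
    mips = []
    for k in range(n_angles):
        # axes=(1, 2) rotates in the y-x plane, i.e. about the z-axis
        rotated = rotate(pet_volume, angle=k * step, axes=(1, 2),
                         reshape=False, order=1)
        mips.append(rotated.max(axis=1))  # project along y -> (z, x) image
    return np.stack(mips)  # shape: (n_angles, z, x)

# Each MIP can then be embedded with a pretrained 2D CNN backbone
# (e.g. a torchvision ResNet) and the per-view features aggregated.
```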
Affiliation(s)
- Amirhosein Toosi: Department of Radiology, University of British Columbia, Vancouver, BC V5Z 1M9, Canada; Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC V5Z 1L3, Canada
- Isaac Shiri: Department of Cardiology, University Hospital Bern, CH-3010 Bern, Switzerland; Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Habib Zaidi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Arman Rahmim: Department of Radiology, University of British Columbia, Vancouver, BC V5Z 1M9, Canada; Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC V5Z 1L3, Canada; Department of Physics & Astronomy, University of British Columbia, Vancouver, BC V6T 1Z1, Canada; Department of Biomedical Engineering, University of British Columbia, Vancouver, BC V6T 1Z3, Canada
3. Ma Q, Liu Z, Zhang J, Fu C, Li R, Sun Y, Tong T, Gu Y. Multi-task reconstruction network for synthetic diffusion kurtosis imaging: Predicting neoadjuvant chemoradiotherapy response in locally advanced rectal cancer. Eur J Radiol. 2024;174:111402. [PMID: 38461737; DOI: 10.1016/j.ejrad.2024.111402]
Abstract
PURPOSE: To assess the feasibility and clinical value of synthetic diffusion kurtosis imaging (DKI) generated from diffusion-weighted imaging (DWI) through a multi-task reconstruction network (MTR-Net) for tumor response prediction in patients with locally advanced rectal cancer (LARC).
METHODS: In this retrospective study, 120 eligible patients with LARC were enrolled and randomly divided into training and testing datasets at a 7:3 ratio. The MTR-Net was developed to reconstruct Dapp and Kapp images from apparent diffusion coefficient (ADC) images. Tumor regions were manually segmented on both true and synthetic DKI images. Synthetic image quality and manual segmentation agreement were quantitatively assessed. A support vector machine (SVM) classifier was used to construct radiomics models based on the true and synthetic DKI images for pathological complete response (pCR) prediction. Prediction performance was evaluated by receiver operating characteristic (ROC) curve analysis.
RESULTS: The mean squared error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM) for tumor regions were 0.212, 24.278, and 0.853 for the synthetic Dapp images and 0.516, 24.883, and 0.804 for the synthetic Kapp images, respectively. The Dice similarity coefficient (DSC), positive predictive value (PPV), sensitivity (SEN), and Hausdorff distance (HD) for the manually segmented tumor regions were 0.786, 0.844, 0.755, and 0.582, respectively. For predicting pCR, the true and synthetic DKI-based radiomics models achieved area under the curve (AUC) values of 0.825 and 0.807 in the testing datasets, respectively.
CONCLUSIONS: Generating synthetic DKI images from DWI images using MTR-Net is feasible, and the efficiency of synthetic DKI images in predicting pCR is comparable to that of true DKI images.
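For context, the Dapp and Kapp maps referenced here come from the standard diffusion kurtosis signal model, S(b) = S0 * exp(-b*Dapp + b^2*Dapp^2*Kapp/6). A conventional voxel-wise least-squares fit of that model is sketched below for reference; the paper instead learns the mapping with MTR-Net, and the b-values and bounds shown are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def dki_signal(b, s0, d_app, k_app):
    """Diffusion kurtosis model: S(b) = S0 * exp(-b*D + b^2*D^2*K/6),
    with b in s/mm^2 and D in mm^2/s."""
    return s0 * np.exp(-b * d_app + (b ** 2) * (d_app ** 2) * k_app / 6.0)

def fit_dki_voxel(b_values, signals):
    """Least-squares fit of (S0, Dapp, Kapp) for a single voxel."""
    p0 = [signals[0], 1e-3, 1.0]  # rough initial guess
    bounds = ([0.0, 0.0, 0.0], [np.inf, 5e-3, 3.0])  # physiological ranges
    params, _ = curve_fit(dki_signal, b_values, signals, p0=p0, bounds=bounds)
    return params  # (S0, Dapp, Kapp)

# Illustrative b-values (acquisition-dependent):
# b = np.array([0.0, 700.0, 1400.0, 2100.0])
# s0, d_app, k_app = fit_dki_voxel(b, voxel_signals)
```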
Affiliation(s)
- Qiong Ma: Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai 200032, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai 200032, China
- Zonglin Liu: Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai 200032, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai 200032, China
- Jiadong Zhang: Department of Biomedical Engineering, City University of Hong Kong, Hong Kong 999077, China; School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China
- Caixia Fu: MR Application Development, Siemens Shenzhen Magnetic Resonance Ltd., Shenzhen 518057, China
- Rong Li: Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai 200032, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai 200032, China
- Yiqun Sun: Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai 200032, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai 200032, China
- Tong Tong: Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai 200032, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai 200032, China
- Yajia Gu: Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai 200032, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai 200032, China
4. Xiong X, Smith BJ, Graves SA, Graham MM, Buatti JM, Beichel RR. Head and Neck Cancer Segmentation in FDG PET Images: Performance Comparison of Convolutional Neural Networks and Vision Transformers. Tomography. 2023;9:1933-1948. [PMID: 37888743; PMCID: PMC10611182; DOI: 10.3390/tomography9050151]
Abstract
Convolutional neural networks (CNNs) have a proven track record in medical image segmentation. Recently, Vision Transformers were introduced and are gaining popularity for many computer vision applications, including object detection, classification, and segmentation. Machine learning algorithms such as CNNs or Transformers are subject to an inductive bias, which can have a significant impact on the performance of machine learning models. This is especially relevant for medical image segmentation applications where limited training data are available and a model's inductive bias should help it generalize well. In this work, we quantitatively assess the performance of two CNN-based networks (U-Net and U-Net-CBAM) and three popular Transformer-based segmentation network architectures (UNETR, TransBTS, and VT-UNet) in the context of HNC lesion segmentation in volumetric [F-18] fluorodeoxyglucose (FDG) PET scans. For performance assessment, 272 FDG PET/CT scans from a clinical trial (ACRIN 6685) were utilized, comprising a total of 650 lesions (272 primary and 378 secondary). The image data used are highly diverse and representative of clinical use. Several error metrics were utilized for performance analysis. The achieved Dice coefficients ranged from 0.809 to 0.833, with the best performance achieved by the CNN-based approaches. U-Net-CBAM, which utilizes spatial and channel attention, showed several advantages for smaller lesions compared to the standard U-Net. Furthermore, our results provide some insight into the image features relevant for this specific segmentation application. In addition, the results highlight the need to utilize primary as well as secondary lesions to derive clinically relevant, unbiased estimates of segmentation performance.
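The channel and spatial attention in U-Net-CBAM follows the CBAM pattern: a shared MLP over globally pooled channel descriptors gates the channels, then a convolution over channel-pooled maps gates the spatial locations. A minimal 3D PyTorch sketch of such a block is given below; the reduction ratio and kernel size are illustrative assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class CBAM3D(nn.Module):
    """Convolutional Block Attention Module: channel attention followed
    by spatial attention, applied to a 3D feature map (N, C, D, H, W)."""
    def __init__(self, channels, reduction=8, kernel_size=7):
        super().__init__()
        # Channel attention: shared MLP over avg- and max-pooled descriptors
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # Spatial attention: conv over the two channel-pooled maps
        self.conv = nn.Conv3d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        n, c = x.shape[:2]
        # Channel gate: sigmoid of MLP(avg-pool) + MLP(max-pool)
        avg = self.mlp(x.mean(dim=(2, 3, 4)))
        mx = self.mlp(x.amax(dim=(2, 3, 4)))
        x = x * torch.sigmoid(avg + mx).view(n, c, 1, 1, 1)
        # Spatial gate: sigmoid of conv over [mean, max] along channels
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(pooled))
```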
Affiliation(s)
- Xiaofan Xiong: Department of Biomedical Engineering, The University of Iowa, Iowa City, IA 52242, USA
- Brian J. Smith: Department of Biostatistics, The University of Iowa, Iowa City, IA 52242, USA
- Stephen A. Graves: Department of Radiology, The University of Iowa, Iowa City, IA 52242, USA
- Michael M. Graham: Department of Radiology, The University of Iowa, Iowa City, IA 52242, USA
- John M. Buatti: Department of Radiation Oncology, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA
- Reinhard R. Beichel: Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA 52242, USA