1.
Kunkyab T, Bahrami Z, Zhang H, Liu Z, Hyde D. A deep learning-based framework (Co-ReTr) for auto-segmentation of non-small cell-lung cancer in computed tomography images. J Appl Clin Med Phys 2024; 25:e14297. [PMID: 38373289] [DOI: 10.1002/acm2.14297] [Received: 06/29/2023] [Revised: 01/15/2024] [Accepted: 01/23/2024]
Abstract
PURPOSE Deep learning-based auto-segmentation algorithms can improve clinical workflow by defining accurate regions of interest while reducing manual labor. Over the past decade, convolutional neural networks (CNNs) have become prominent in medical image segmentation applications. However, CNNs are limited in learning long-range spatial dependencies because of the locality of their convolutional layers. Transformers were introduced to address this challenge: with the self-attention mechanism, even the first layer of information processing makes connections between distant image locations. Our paper presents a novel framework that bridges these two techniques, CNNs and transformers, to segment the gross tumor volume (GTV) accurately and efficiently in computed tomography (CT) images of non-small cell lung cancer (NSCLC) patients. METHODS Under this framework, multi-resolution image inputs were combined with multi-depth backbones to retain the benefits of both high- and low-resolution images in the deep learning architecture. Furthermore, a deformable transformer was used to learn long-range dependencies on the extracted features. To reduce computational complexity and to efficiently process multi-scale, multi-depth, high-resolution 3D images, this transformer attends to a small set of key positions identified by a self-attention mechanism. We evaluated the performance of the proposed framework on an NSCLC dataset containing 563 training images and 113 test images, benchmarking our algorithm against five similar deep learning models. RESULTS The experimental results indicate that the proposed framework outperforms other CNN-based, transformer-based, and hybrid methods in terms of Dice score (0.92) and Hausdorff distance (1.33). Our model could therefore improve the efficiency of auto-segmentation of early-stage NSCLC in the clinical workflow. This type of framework may also facilitate online adaptive radiotherapy, where an efficient auto-segmentation workflow is required. CONCLUSIONS Our deep learning framework, based on CNNs and transformers, performs auto-segmentation efficiently and could assist the clinical radiotherapy workflow.
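The two reported metrics, Dice score and Hausdorff distance, can be computed directly from binary masks. Below is a minimal numpy sketch on toy 2D masks (brute-force Hausdorff over foreground pixels; the paper's exact evaluation pipeline is not specified here):

```python
import numpy as np

def dice_score(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between the foreground pixel sets
    of two binary masks (brute force; fine for small examples)."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Two overlapping 4x4 squares, shifted by one pixel in each direction
pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True
ref  = np.zeros((8, 8), dtype=bool); ref[3:7, 3:7] = True
print(dice_score(pred, ref))          # 9-pixel overlap of two 16-pixel squares
print(hausdorff_distance(pred, ref))
```

The same functions extend unchanged to 3D masks, since `np.argwhere` returns one coordinate row per foreground voxel regardless of dimensionality.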
Affiliation(s)
- Tenzin Kunkyab: Department of Computer Science, Mathematics, Physics and Statistics, University of British Columbia Okanagan, Kelowna, British Columbia, Canada
- Zhila Bahrami: School of Engineering, The University of British Columbia Okanagan Campus, Kelowna, British Columbia, Canada
- Heqing Zhang: School of Engineering, The University of British Columbia Okanagan Campus, Kelowna, British Columbia, Canada
- Zheng Liu: School of Engineering, The University of British Columbia Okanagan Campus, Kelowna, British Columbia, Canada
- Derek Hyde: Department of Medical Physics, BC Cancer - Kelowna, Kelowna, Canada
2.
Li X, Jia L, Lin F, Chai F, Liu T, Zhang W, Wei Z, Xiong W, Li H, Zhang M, Wang Y. Semi-supervised auto-segmentation method for pelvic organ-at-risk in magnetic resonance images based on deep-learning. J Appl Clin Med Phys 2024; 25:e14296. [PMID: 38386963] [DOI: 10.1002/acm2.14296] [Received: 05/20/2023] [Revised: 01/06/2024] [Accepted: 01/23/2024]
Abstract
BACKGROUND AND PURPOSE In radiotherapy, magnetic resonance (MR) imaging offers higher soft-tissue contrast than computed tomography (CT) and involves no ionizing radiation. However, manual annotation for deep learning-based automatic organ-at-risk (OAR) delineation algorithms is expensive, making the collection of large, high-quality annotated datasets a challenge. We therefore propose a low-cost semi-supervised OAR segmentation method that requires only a small number of annotated pelvic MR images. METHODS We trained a deep learning-based segmentation model using 116 sets of MR images from 116 patients. The bladder, femoral heads, rectum, and small intestine were selected as OAR regions. To generate the training set, we utilized a semi-supervised method and ensemble learning techniques, and employed a post-processing algorithm to correct the self-annotated data. Both 2D and 3D auto-segmentation networks were evaluated, and the semi-supervised method was assessed with 50 labeled cases and with only 10 labeled cases. RESULTS Using only self-annotation and post-processing, the 2D segmentation model achieved Dice similarity coefficients (DSC) of 0.954, 0.984, 0.908, and 0.852 for the bladder, femoral heads, rectum, and small intestine, respectively. The corresponding values were 0.871, 0.975, 0.975, 0.783, 0.724 for the 3D segmentation network and 0.896, 0.984, 0.890, and 0.828 for the 2D network trained with the common supervised method. CONCLUSION Our results demonstrate that a multi-OAR segmentation model can be trained from small annotated samples plus additional unlabeled data, with ensemble learning and post-processing used to annotate the dataset effectively. Moreover, with anisotropic images and limited sample sizes, the 2D model outperformed the 3D model.
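The ensemble step for generating pseudo-labels can be illustrated with a simple majority vote over candidate masks. This is a generic sketch, not the authors' exact ensembling rule, which is not detailed in the abstract:

```python
import numpy as np

def majority_vote(masks):
    """Combine candidate masks from an ensemble into one pseudo-label:
    a voxel is foreground if more than half of the models agree."""
    stack = np.stack([m.astype(bool) for m in masks])
    return stack.sum(axis=0) > (len(masks) / 2)

# Three hypothetical model outputs for the same 2x3 region
m1 = np.array([[1, 1, 0], [0, 1, 0]])
m2 = np.array([[1, 0, 0], [0, 1, 1]])
m3 = np.array([[1, 1, 0], [0, 0, 1]])
print(majority_vote([m1, m2, m3]).astype(int))
```

A post-processing pass (e.g. removing small disconnected components) would then be applied to the voted mask before it re-enters training.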
Affiliation(s)
- Xianan Li: Department of Radiation Oncology, Peking University People's Hospital, Beijing, China
- Lecheng Jia: Radiotherapy Laboratory, Shenzhen United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, China; Zhejiang Engineering Research Center for Innovation and Application of Intelligent Radiotherapy Technology, Wenzhou, China
- Fengyu Lin: Radiotherapy Laboratory, Shenzhen United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, China
- Fan Chai: Department of Radiology, Peking University People's Hospital, Beijing, China
- Tao Liu: Department of Radiology, Peking University People's Hospital, Beijing, China
- Wei Zhang: Radiotherapy Business Unit, Shanghai United Imaging Healthcare Co., Ltd., Shanghai, China
- Ziquan Wei: Radiotherapy Laboratory, Shenzhen United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, China
- Weiqi Xiong: Radiotherapy Business Unit, Shanghai United Imaging Healthcare Co., Ltd., Shanghai, China
- Hua Li: Radiotherapy Laboratory, Shenzhen United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, China
- Min Zhang: Department of Radiation Oncology, Peking University People's Hospital, Beijing, China
- Yi Wang: Department of Radiology, Peking University People's Hospital, Beijing, China
3.
Temple SWP, Rowbottom CG. Gross failure rates and failure modes for a commercial AI-based auto-segmentation algorithm in head and neck cancer patients. J Appl Clin Med Phys 2024:e14273. [PMID: 38263866] [DOI: 10.1002/acm2.14273] [Received: 10/04/2023] [Revised: 12/15/2023] [Accepted: 12/20/2023]
Abstract
PURPOSE Artificial intelligence (AI) based commercial software can automatically delineate organs at risk (OAR), with potential efficiency savings in the radiotherapy treatment planning pathway and reduced inter- and intra-observer variability. Little research has investigated gross failure rates and failure modes of such systems. METHOD Fifty head and neck (H&N) patient datasets with "gold standard" contours were compared to AI-generated contours to produce expected mean and standard deviation values of the Dice similarity coefficient (DSC) for four common H&N OARs (brainstem, mandible, left and right parotid). An AI-based commercial system was then applied to 500 H&N patients. AI-generated contours were compared to manual contours, outlined by an expert human, and a gross failure was defined as a DSC more than three standard deviations below the expected mean. Failures were inspected to assess the reason for failure, with failures attributable to suboptimal manual contouring censored. True failures were classified into four sub-types (setup position, anatomy, image artefacts, and unknown). RESULTS There were 24 true failures of the AI-based commercial software, a gross failure rate of 1.2%. Fifteen failures were due to patient anatomy, four to dental image artefacts, three to patient position, and two were unknown. True failure rates by OAR were 0.4% (brainstem), 2.2% (mandible), 1.4% (left parotid), and 0.8% (right parotid). CONCLUSION True failures of the AI-based system were predominantly associated with a non-standard element within the CT scan. These non-standard elements were the likely reason for the gross failures, suggesting that the patient datasets used to train the AI model did not contain sufficient heterogeneity. Regardless of the reasons for failure, the true failure rate of the AI-based system in the H&N region for the OARs investigated was low (∼1%).
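The gross-failure criterion described above (a DSC more than three standard deviations below the commissioning mean) is straightforward to operationalize. The sketch below uses invented DSC values; only the thresholding rule comes from the study:

```python
import numpy as np

# Hypothetical DSC values from a "gold standard" commissioning set
gold_dsc = np.array([0.91, 0.93, 0.90, 0.94, 0.92, 0.89, 0.95, 0.90])
mean, sd = gold_dsc.mean(), gold_dsc.std(ddof=1)
threshold = mean - 3 * sd        # gross-failure cut-off

# Hypothetical DSC of AI contours vs. expert contours on new patients
new_dsc = {"pt01": 0.92, "pt02": 0.58, "pt03": 0.90}
failures = [pid for pid, d in new_dsc.items() if d < threshold]
print(f"threshold = {threshold:.3f}, flagged: {failures}")
```

In practice each OAR would get its own commissioning distribution, since (as the failure rates above show) mandible and parotid behave differently from brainstem.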
Affiliation(s)
- Simon W P Temple: Medical Physics Department, The Clatterbridge Cancer Centre NHS Foundation Trust, Liverpool, UK
- Carl G Rowbottom: Medical Physics Department, The Clatterbridge Cancer Centre NHS Foundation Trust, Liverpool, UK; Department of Physics, University of Liverpool, Liverpool, UK
4.
Kawamura M, Kamomae T, Yanagawa M, Kamagata K, Fujita S, Ueda D, Matsui Y, Fushimi Y, Fujioka T, Nozaki T, Yamada A, Hirata K, Ito R, Fujima N, Tatsugami F, Nakaura T, Tsuboyama T, Naganawa S. Revolutionizing radiation therapy: the role of AI in clinical practice. J Radiat Res 2024; 65:1-9. [PMID: 37996085] [PMCID: PMC10803173] [DOI: 10.1093/jrr/rrad090] [Received: 08/24/2023] [Revised: 09/25/2023] [Accepted: 10/16/2023]
Abstract
This review provides an overview of the application of artificial intelligence (AI) in radiation therapy (RT) from a radiation oncologist's perspective. Over the years, advances in diagnostic imaging have significantly improved the efficiency and effectiveness of radiotherapy. The introduction of AI has further optimized the segmentation of tumors and organs at risk, thereby saving considerable time for radiation oncologists. AI has also been utilized in treatment planning and optimization, reducing the planning time from several days to minutes or even seconds. Knowledge-based treatment planning and deep learning techniques have been employed to produce treatment plans comparable to those generated by humans. Additionally, AI has potential applications in quality control and assurance of treatment plans, optimization of image-guided RT and monitoring of mobile tumors during treatment. Prognostic evaluation and prediction using AI have been increasingly explored, with radiomics being a prominent area of research. The future of AI in radiation oncology offers the potential to establish treatment standardization by minimizing inter-observer differences in segmentation and improving dose adequacy evaluation. RT standardization through AI may have global implications, providing world-standard treatment even in resource-limited settings. However, there are challenges in accumulating big data, including patient background information and correlating treatment plans with disease outcomes. Although challenges remain, ongoing research and the integration of AI technology hold promise for further advancements in radiation oncology.
Affiliation(s)
- Mariko Kawamura: Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumaicho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
- Takeshi Kamomae: Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumaicho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
- Masahiro Yanagawa: Department of Radiology, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, 565-0871, Japan
- Koji Kamagata: Department of Radiology, Juntendo University Graduate School of Medicine, 2-1-1 Hongo, Bunkyo-ku, Tokyo, 113-8421, Japan
- Shohei Fujita: Department of Radiology, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Daiju Ueda: Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3, Asahi-machi, Abeno-ku, Osaka, 545-8585, Japan
- Yusuke Matsui: Department of Radiology, Faculty of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, 2-5-1 Shikata-cho, Kitaku, Okayama, 700-8558, Japan
- Yasutaka Fushimi: Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Shogoin Kawaharacho, Sakyo-ku, Kyoto, 606-8507, Japan
- Tomoyuki Fujioka: Department of Diagnostic Radiology, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo, 113-8510, Japan
- Taiki Nozaki: Department of Radiology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
- Akira Yamada: Department of Radiology, Shinshu University School of Medicine, 3-1-1 Asahi, Matsumoto, Nagano, 390-8621, Japan
- Kenji Hirata: Department of Diagnostic Imaging, Faculty of Medicine, Hokkaido University, Kita15, Nishi7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan
- Rintaro Ito: Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumaicho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
- Noriyuki Fujima: Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, Kita15, Nishi7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan
- Fuminari Tatsugami: Department of Diagnostic Radiology, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima, 734-8551, Japan
- Takeshi Nakaura: Department of Diagnostic Radiology, Kumamoto University Graduate School of Medicine, 1-1-1 Honjo, Chuo-ku, Kumamoto, 860-8556, Japan
- Takahiro Tsuboyama: Department of Radiology, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, 565-0871, Japan
- Shinji Naganawa: Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumaicho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
5.
Paudyal R, Jiang J, Han J, Diplas BH, Riaz N, Hatzoglou V, Lee N, Deasy JO, Veeraraghavan H, Shukla-Dave A. Auto-segmentation of neck nodal metastases using self-distilled masked image transformer on longitudinal MR images. BJR Artif Intell 2024; 1:ubae004. [PMID: 38476956] [PMCID: PMC10928808] [DOI: 10.1093/bjrai/ubae004] [Received: 11/15/2023] [Revised: 01/23/2024] [Accepted: 01/24/2024]
Abstract
Objectives Auto-segmentation promises greater speed and lower inter-reader variability than manual segmentation in radiation oncology clinical practice. This study implements and evaluates the accuracy of the auto-segmentation algorithm "Masked Image modeling using the vision Transformers (SMIT)" for neck nodal metastases on longitudinal T2-weighted (T2w) MR images in oropharyngeal squamous cell carcinoma (OPSCC) patients. Methods This prospective clinical trial study included 123 human papillomavirus-positive (HPV+) OPSCC patients who received concurrent chemoradiotherapy. T2w MR images were acquired at 3 T at pre-treatment (Tx; week 0) and intra-Tx weeks 1-3. Manual delineations of metastatic neck nodes from the 123 OPSCC patients were used for the SMIT auto-segmentation, and total tumor volumes were calculated. Standard statistical analyses compared contour volumes from SMIT vs manual segmentation (Wilcoxon signed-rank test [WSRT]), and Spearman's rank correlation coefficients (ρ) were computed. Segmentation accuracy was evaluated on the test data set using the Dice similarity coefficient (DSC). P-values <0.05 were considered significant. Results There was no significant difference between manual and SMIT-delineated tumor volumes at pre-Tx (8.68 ± 7.15 vs 8.38 ± 7.01 cm3, P = 0.26 [WSRT]), and the Bland-Altman method established the limits of agreement as -1.71 to 2.31 cm3, with a mean difference of 0.30 cm3. SMIT and manually delineated tumor volume estimates were highly correlated (ρ = 0.84-0.96, P < 0.001). The mean DSC values were 0.86, 0.85, 0.77, and 0.79 at pre-Tx and intra-Tx weeks 1-3, respectively. Conclusions The SMIT algorithm provides sufficient segmentation accuracy for oncological applications in HPV+ OPSCC. Advances in knowledge First evaluation of auto-segmentation with SMIT using longitudinal T2w MRI in HPV+ OPSCC.
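The Bland-Altman agreement analysis used above reduces to a mean difference plus limits of agreement at ±1.96 standard deviations. A minimal numpy sketch, with invented paired volumes standing in for the study's data:

```python
import numpy as np

# Hypothetical paired tumor volumes (cm^3): manual vs. auto-segmented
manual = np.array([8.1, 5.4, 12.0, 3.3, 9.7, 6.8])
auto   = np.array([7.9, 5.9, 11.4, 3.6, 9.2, 7.1])

diff = auto - manual
bias = diff.mean()                          # mean difference (systematic offset)
loa_low  = bias - 1.96 * diff.std(ddof=1)   # lower limit of agreement
loa_high = bias + 1.96 * diff.std(ddof=1)   # upper limit of agreement
print(f"bias = {bias:.3f} cm^3, LoA = [{loa_low:.2f}, {loa_high:.2f}]")
```

Plotting `diff` against the pairwise means with these three horizontal lines gives the standard Bland-Altman plot; the study's reported values (bias 0.30 cm3, LoA -1.71 to 2.31 cm3) are exactly this triple.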
Affiliation(s)
- Ramesh Paudyal: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Jue Jiang: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- James Han: Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Bill H Diplas: Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Nadeem Riaz: Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Vaios Hatzoglou: Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Nancy Lee: Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Joseph O Deasy: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Harini Veeraraghavan: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Amita Shukla-Dave: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States; Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
6.
Breto AL, Cullison K, Zacharaki EI, Wallaengen V, Maziero D, Jones K, Valderrama A, de la Fuente MI, Meshman J, Azzam GA, Ford JC, Stoyanova R, Mellon EA. A Deep Learning Approach for Automatic Segmentation during Daily MRI-Linac Radiotherapy of Glioblastoma. Cancers (Basel) 2023; 15:5241. [PMID: 37958415] [PMCID: PMC10647471] [DOI: 10.3390/cancers15215241] [Received: 10/05/2023] [Revised: 10/25/2023] [Accepted: 10/30/2023]
Abstract
Glioblastoma changes during chemoradiotherapy are typically inferred from high-field MRI before and after treatment but are rarely investigated during radiotherapy. The purpose of this study was to develop a deep learning network to automatically segment glioblastoma tumors on the daily treatment set-up scans of the first glioblastoma patients treated on an MRI-linac. Patients were prospectively imaged daily during chemoradiotherapy on a 0.35 T MRI-linac. The tumor and edema (tumor lesion) and the resection cavity were manually segmented on each daily MRI to track their kinetics throughout treatment. An automatic segmentation deep learning network was built using a convolutional neural network and trained with a nine-fold cross-validation schema, using an 80:10:10 split for training, validation, and testing. Thirty-six glioblastoma patients were imaged pre-treatment and 30 times during radiotherapy (n = 31 volumes, total of 930 MRIs). The average tumor lesion and resection cavity volumes were 94.56 ± 64.68 cc and 72.44 ± 35.08 cc, respectively. The average Dice similarity coefficient between manual and auto-segmentation for the tumor lesion and resection cavity across all patients was 0.67 and 0.84, respectively. This is the first brain lesion segmentation network developed for MRI-linac, and it performed comparably to the only other published network for auto-segmentation of post-operative glioblastoma lesions. Segmented volumes can be utilized for adaptive radiotherapy and propagated across multiple MRI contrasts to create a prognostic model for glioblastoma based on multiparametric MRI.
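One way to realize a nine-fold schema with roughly 80:10:10 proportions is to rotate a held-out test chunk and an adjacent validation chunk around nine equal patient partitions. This is an illustrative sketch with hypothetical patient IDs; the authors' exact partitioning is not described in the abstract:

```python
import numpy as np

def nine_fold_splits(patient_ids, seed=0):
    """Rotating nine-fold schema: each fold uses one chunk for testing,
    the next chunk for validation, and the rest (~80%) for training.
    Splitting by patient, not by scan, avoids leakage of a patient's
    daily MRIs across the train/test boundary."""
    rng = np.random.default_rng(seed)
    ids = rng.permutation(patient_ids)
    chunks = np.array_split(ids, 9)
    for i in range(9):
        test = chunks[i]
        val = chunks[(i + 1) % 9]
        train = np.concatenate(
            [c for j, c in enumerate(chunks) if j not in (i, (i + 1) % 9)]
        )
        yield train, val, test

ids = [f"pt{i:02d}" for i in range(36)]
train, val, test = next(nine_fold_splits(ids))
print(len(train), len(val), len(test))   # 28 4 4
```

With 36 patients the proportions come out to roughly 78:11:11, i.e. approximately the 80:10:10 ratio quoted above.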
Affiliation(s)
- Adrian L. Breto: Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL 33136, USA
- Kaylie Cullison: Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL 33136, USA
- Evangelia I. Zacharaki: Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL 33136, USA
- Veronica Wallaengen: Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL 33136, USA
- Danilo Maziero: Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL 33136, USA; Department of Radiation Medicine & Applied Sciences, UC San Diego Health, La Jolla, CA 92093, USA
- Kolton Jones: Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL 33136, USA; West Physics, Atlanta, GA 30339, USA
- Alessandro Valderrama: Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL 33136, USA
- Macarena I. de la Fuente: Department of Neurology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL 33136, USA
- Jessica Meshman: Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL 33136, USA
- Gregory A. Azzam: Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL 33136, USA
- John C. Ford: Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL 33136, USA
- Radka Stoyanova: Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL 33136, USA
- Eric A. Mellon: Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL 33136, USA
7.
Heydarheydari S, Birgani MJT, Rezaeijo SM. Auto-segmentation of head and neck tumors in positron emission tomography images using non-local means and morphological frameworks. Pol J Radiol 2023; 88:e365-e370. [PMID: 37701174] [PMCID: PMC10493858] [DOI: 10.5114/pjr.2023.130815] [Received: 03/05/2023] [Accepted: 06/30/2023]
Abstract
Purpose Accurate segmentation of head and neck cancer (HNC) tumors in medical images is crucial for effective treatment planning, but current methods for HNC segmentation are limited in accuracy and efficiency. The present study aimed to design a model for segmenting HNC tumors in three-dimensional (3D) positron emission tomography (PET) images using non-local means (NLM) filtering and morphological operations. Material and Methods The proposed model was tested on the public HECKTOR challenge dataset, which includes 408 patient images with HNC tumors. NLM filtering was used for noise reduction while preserving critical image information. Following pre-processing, morphological operations were used to assess the similarity of intensity and edge information within the images. The Dice score, intersection over union (IoU), and accuracy were used to compare manual and predicted segmentations. Results The proposed model achieved an average Dice score of 81.47 ± 3.15, IoU of 80 ± 4.5, and accuracy of 94.03 ± 4.44, demonstrating its effectiveness in segmenting HNC tumors in PET images. Conclusions The proposed algorithm produces patient-specific tumor segmentations without manual interaction, addressing the limitations of current methods for HNC segmentation. The model has the potential to improve treatment planning and aid the development of personalized medicine, and it can be extended to segment other organs from limited annotated medical images.
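Morphological clean-up of a thresholded mask can be sketched with plain numpy: a 4-connected opening (erosion then dilation) removes isolated voxels while preserving solid regions. This is a generic illustration; the paper's actual filter parameters and structuring elements are not specified in the abstract:

```python
import numpy as np

def dilate(mask):
    """3x3 binary dilation with a 4-connected cross, via array shifts."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]; out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]; out[:, :-1] |= mask[:, 1:]
    return out

def erode(mask):
    """Erosion as the dual of dilation on the complement."""
    return ~dilate(~mask)

def opening(mask):
    """Morphological opening: erosion then dilation; removes speckle."""
    return dilate(erode(mask))

img = np.zeros((7, 7), dtype=bool)
img[2:5, 2:5] = True   # a solid "tumor" blob
img[0, 6] = True       # single-voxel noise
print(opening(img).astype(int))
```

The lone noise voxel disappears after the opening; for real volumes one would use a 3D structuring element and typically `scipy.ndimage`, but the shift trick above shows the mechanics.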
Affiliation(s)
- Sahel Heydarheydari: Department of Medical Imaging and Radiation Sciences, Faculty of Paramedicine, Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran
- Seyed Masoud Rezaeijo: Department of Medical Physics, Faculty of Medicine, Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran; Cancer Research Center, Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran
8.
Amjad A, Xu J, Thill D, Zhang Y, Ding J, Paulson E, Hall W, Erickson BA, Li XA. Deep learning auto-segmentation on multi-sequence magnetic resonance images for upper abdominal organs. Front Oncol 2023; 13:1209558. [PMID: 37483486] [PMCID: PMC10358771] [DOI: 10.3389/fonc.2023.1209558] [Received: 04/20/2023] [Accepted: 06/19/2023]
Abstract
Introduction Multi-sequence, multi-parameter MRIs are often used to define targets and/or organs at risk (OAR) in radiation therapy (RT) planning. Deep learning has so far focused on auto-segmentation models based on a single MRI sequence. The purpose of this work is to develop a multi-sequence deep learning-based auto-segmentation (mS-DLAS) model based on multi-sequence abdominal MRIs. Materials and methods Using a previously developed 3DResUnet network, an mS-DLAS model was trained and tested on four T1- and T2-weighted MRI sequences acquired during routine RT simulation for 71 cases with abdominal tumors. Strategies including data pre-processing, a Z-normalization approach, and data augmentation were employed. Two additional sequence-specific T1-weighted (T1-M) and T2-weighted (T2-M) models were trained to evaluate the performance of sequence-specific DLAS. All models were quantitatively evaluated using six surface and volumetric accuracy metrics. Results The developed DLAS models generated reasonable contours of 12 upper-abdomen organs within 21 s per testing case. For the mS-DLAS model, the 3D average values over all organs were: Dice similarity coefficient (DSC) 0.87, mean distance to agreement (MDA) 1.79 mm, 95th-percentile Hausdorff distance (HD95%) 7.43 mm, percent volume difference (PVD) -8.95, surface DSC (sDSC) 0.82, and relative added path length (rAPL) 12.25 mm/cc. Collectively, 71% of the contours auto-segmented by the three models were of relatively high quality. Additionally, the mS-DLAS model successfully segmented 9 of 16 MRI sequences that were not used in model training. Conclusion We have developed an MRI-based mS-DLAS model for auto-segmenting upper abdominal organs on MRI. Multi-sequence segmentation is desirable in routine clinical practice of RT for accurate organ and target delineation, particularly for abdominal tumors. Our work is a stepping stone toward fast and accurate segmentation on multi-contrast MRI and paves the way for MR-only guided radiation therapy.
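The Z-normalization pre-processing mentioned above rescales each volume to zero mean and unit variance, which matters for MR because intensities are in arbitrary, scanner-dependent units. A minimal sketch (the optional body mask is an assumption, not a detail from the paper):

```python
import numpy as np

def z_normalize(volume, mask=None):
    """Z-score normalization of MR intensities: zero mean, unit variance.
    If a body mask is given, statistics are computed inside it only, so
    background air does not dominate the mean and standard deviation."""
    voxels = volume[mask] if mask is not None else volume
    return (volume - voxels.mean()) / voxels.std()

# Toy 2x2 "volume" with arbitrary-unit intensities
vol = np.array([[100.0, 120.0], [140.0, 160.0]])
z = z_normalize(vol)
print(z.mean(), z.std())   # ~0.0 and 1.0
```

Applying this per sequence puts T1- and T2-weighted inputs on a comparable intensity scale before they enter a shared multi-sequence network.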
Affiliation(s)
- Asma Amjad: Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, United States
- Dan Thill: Elekta Inc., St. Charles, MO, United States
- Ying Zhang: Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, United States
- Jie Ding: Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, United States
- Eric Paulson: Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, United States
- William Hall: Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, United States
- Beth A. Erickson: Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, United States
- X. Allen Li: Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, United States
9.
Adams J, Luca K, Yang X, Patel P, Jani A, Roper J, Zhang J. Plan Quality Analysis of Automated Treatment Planning Workflow With Commercial Auto-Segmentation Tools and Clinical Knowledge-Based Planning Models for Prostate Cancer. Cureus 2023; 15:e41260. [PMID: 37529805] [PMCID: PMC10389787] [DOI: 10.7759/cureus.41260] [Accepted: 07/01/2023]
Abstract
This study evaluated the feasibility of using artificial intelligence (AI) segmentation software for volumetric-modulated arc therapy (VMAT) prostate planning in conjunction with knowledge-based planning to facilitate a fully automated workflow. Two commercially available AI software programs, Radformation AutoContour (Radformation, New York, NY) and Siemens AI-Rad Companion (Siemens Healthineers, Malvern, PA), were used to auto-segment the rectum, bladder, femoral heads, and bowel bag on 30 retrospective clinical cases (10 intact prostate, 10 prostate bed, and 10 prostate and lymph node). Physician-segmented target volumes were transferred to the AI structure sets. In-house RapidPlan models were used to generate plans using the original physician-segmented structure sets as well as the Radformation and Siemens AI-generated structure sets, giving three plans for each of the 30 cases (90 plans in total). Following RapidPlan optimization, planning target volume (PTV) coverage was set to 95%. The plans optimized using AI structures were then recalculated on the physician structure set with fixed monitor units; in this way, physician contours served as the gold standard for identifying any clinically relevant differences in dose distributions. One-way analysis of variance (ANOVA) was used for statistical analysis. No statistically significant differences were observed across the three sets of plans for intact prostate, prostate bed, or prostate and lymph nodes. The results indicate that an automated VMAT prostate planning workflow can consistently achieve high plan quality. However, small but consistent differences in contouring preferences may lead to subtle differences in planning results, so the clinical implementation of auto-contouring should be carefully validated.
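The one-way ANOVA comparison across the three structure-set arms can be computed from first principles: the F statistic is the between-group mean square over the within-group mean square. The dose values below are invented for illustration; only the statistical test itself comes from the study:

```python
import numpy as np

def one_way_anova_F(*groups):
    """F statistic for a one-way ANOVA across independent groups
    (between-group mean square / within-group mean square)."""
    all_vals = np.concatenate(groups)
    grand = all_vals.mean()
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(((np.asarray(g) - np.mean(g)) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical bladder mean doses (Gy) under the three structure sets
physician = [30.1, 29.8, 30.5, 30.2]
vendor_a  = [30.0, 29.9, 30.4, 30.3]
vendor_b  = [30.3, 29.7, 30.6, 30.1]
print(one_way_anova_F(physician, vendor_a, vendor_b))  # small F: arms agree
```

A small F (well below the critical value for the relevant degrees of freedom) corresponds to the study's finding of no statistically significant difference; in practice one would use `scipy.stats.f_oneway` to obtain the p-value directly.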
Affiliation(s)
- Jacob Adams, Kirk Luca, Xiaofeng Yang, Pretesh Patel, Ashesh Jani, Justin Roper, Jiahan Zhang: Department of Radiation Oncology, Emory University, Atlanta, USA
10
Gifford R, Jhawar SR, Krening S. Deep Learning Architecture to Improve Edge Accuracy of Auto-Contouring for Head and Neck Radiotherapy. Diagnostics (Basel) 2023; 13:2159. [PMID: 37443553] [DOI: 10.3390/diagnostics13132159]
Abstract
Deep learning (DL) methods have shown great promise in auto-segmentation problems. However, for head and neck cancer, we show that DL methods fail at the axial edges of the gross tumor volume (GTV) where the segmentation is dependent on information closer to the center of the tumor. These failures may decrease trust and usage of proposed auto-contouring systems. To increase performance at the axial edges, we propose the spatially adjusted recurrent convolution U-Net (SARC U-Net). Our method uses convolutional recurrent neural networks and spatial transformer networks to push information from salient regions out to the axial edges. On average, our model increased the Sørensen-Dice coefficient (DSC) at the axial edges of the GTV by 11% inferiorly and 19.3% superiorly over a baseline 2D U-Net, which has no inherent way to capture information between adjacent slices. Over all slices, our proposed architecture achieved a DSC of 0.613, whereas a 3D and 2D U-Net achieved a DSC of 0.586 and 0.540, respectively. SARC U-Net can increase accuracy at the axial edges of GTV contours while also increasing accuracy over baseline models, creating a more robust contour.
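The Sørensen-Dice coefficient reported throughout these studies is straightforward to compute from binary masks. A minimal sketch with toy masks (not the study's data), where the "prediction" is the reference shifted by one column to mimic an edge error:

```python
import numpy as np

def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """Sørensen-Dice coefficient between two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # two empty masks agree perfectly by convention
    return 2.0 * np.logical_and(a, b).sum() / total

ref = np.zeros((8, 8), dtype=bool)
ref[2:6, 2:6] = True   # 16 voxels
pred = np.zeros((8, 8), dtype=bool)
pred[2:6, 3:7] = True  # 16 voxels, overlapping ref in 12
print(dice_coefficient(ref, pred))  # 2 * 12 / (16 + 16) = 0.75
```

The same function applies unchanged to 3D volumes, since NumPy's logical operations and sums are dimension-agnostic.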
Affiliation(s)
- Ryan Gifford: Department of Integrated Systems Engineering, The Ohio State University, 1971 Neil Ave, Columbus, OH 43210, USA
- Sachin R Jhawar: Comprehensive Cancer Center, Department of Radiation Oncology, The Ohio State University, 410 W 10th Ave, Columbus, OH 43210, USA
- Samantha Krening: Department of Integrated Systems Engineering, The Ohio State University, 1971 Neil Ave, Columbus, OH 43210, USA
11
Li J, Anne R. Evaluation of Atlas-based auto-segmentation of liver in MR images for Yttrium-90 selective internal radiation therapy. J Appl Clin Med Phys 2023; 24:e13979. [PMID: 37070130] [PMCID: PMC10161143] [DOI: 10.1002/acm2.13979]
Abstract
PURPOSE The aim was to explore the feasibility of applying an atlas-based auto-segmentation tool, MIM Atlas Segment, for liver delineation in MR images for Y-90 selective internal radiation therapy (SIRT). MATERIALS AND METHODS MR images of 41 liver patients treated with resin Y-90 SIRT were included: 20 patients' images were used to create an atlas, and the other 21 patients' images were used for testing. Auto-segmentation of the liver in the MR images was performed with MIM Atlas Segment, and various settings (i.e., with and without normalized deformable registration, single-atlas and multi-atlas match, and multi-atlas match using different finalization methods) were tested. Auto-segmented liver contours were compared with physician manually-delineated contours using the Dice similarity coefficient (DSC) and mean distance to agreement (MDA). Ratio of volume (RV) and ratio of activity (RA) were calculated to further evaluate the auto-segmentation results. RESULTS Auto-segmentation with normalized deformable registration generated better contours than auto-segmentation without it. With normalized deformable registration, 3-atlas match using the Majority Vote (MV) method generated better results than single-atlas match and 3-atlas match using the STAPLE method, and results similar to 5-atlas match using either the MV or STAPLE method. The average DSC, MDA, and RV of the contours generated with normalized deformable registration were 0.80-0.83, 0.60-0.67 cm, and 0.91-1.00, respectively. The average RA was 1.00-1.01, indicating that the activities calculated using the auto-segmented liver contours are close to the true activities. CONCLUSION Atlas-based auto-segmentation can be applied to generate initial liver contours in MR images for resin Y-90 SIRT, which can be used for activity calculations after physician review.
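The mean distance to agreement (MDA) used above is a surface-distance metric; one common formulation (symmetric mean surface distance, a sketch rather than MIM's exact implementation) can be computed from binary masks with SciPy distance transforms. The toy masks stand in for real liver contours:

```python
import numpy as np
from scipy import ndimage

def surface(mask: np.ndarray) -> np.ndarray:
    """Boundary voxels: the mask minus its binary erosion."""
    return mask & ~ndimage.binary_erosion(mask)

def mean_distance_to_agreement(a: np.ndarray, b: np.ndarray, spacing: float = 1.0) -> float:
    """Symmetric mean surface distance between two boolean masks, in units of `spacing`."""
    sa, sb = surface(a.astype(bool)), surface(b.astype(bool))
    # distance_transform_edt(~s) gives, for every voxel, the Euclidean distance
    # to the nearest surface voxel of s.
    d_to_b = ndimage.distance_transform_edt(~sb) * spacing
    d_to_a = ndimage.distance_transform_edt(~sa) * spacing
    return float((d_to_b[sa].mean() + d_to_a[sb].mean()) / 2.0)

liver_ref = np.zeros((12, 12), dtype=bool)
liver_ref[3:9, 3:9] = True
liver_auto = np.zeros((12, 12), dtype=bool)
liver_auto[3:9, 4:10] = True  # auto contour shifted by one voxel
print(mean_distance_to_agreement(liver_ref, liver_ref))  # 0.0 for identical masks
```

Passing the voxel size as `spacing` converts the result to physical units (here, cm for the ranges reported above).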
Affiliation(s)
- Jun Li, Rani Anne: Department of Radiation Oncology, Thomas Jefferson University, Philadelphia, Pennsylvania, USA
12
Zhong Y, Guo Y, Fang Y, Wu Z, Wang J, Hu W. Geometric and dosimetric evaluation of deep learning based auto-segmentation for clinical target volume on breast cancer. J Appl Clin Med Phys 2023:e13951. [PMID: 36920901] [DOI: 10.1002/acm2.13951]
Abstract
BACKGROUND Recently, target auto-segmentation techniques based on deep learning (DL) have shown promising results. However, inaccurate target delineation will directly affect the treatment planning dose distribution and subsequent radiotherapy. Evaluation based on geometric metrics alone may not be sufficient to assess target delineation accuracy. The purpose of this paper is to validate the performance of automatic segmentation with dosimetric metrics and to construct new geometric evaluation metrics to comprehensively understand the dose-response relationship from the perspective of clinical application. MATERIALS AND METHODS A DL-based target segmentation model was developed using 186 manually delineated modified radical mastectomy breast cancer cases. The resulting DL model was used to generate alternative target contours in a new set of 48 patients. The Auto-plan was re-optimized to ensure the same optimization parameters as the reference Manual-plan. To assess the dosimetric impact of target auto-segmentation, not only common geometric metrics but also new spatial parameters with distance and relative volume (RV) to target were used. Correlations between segmentation evaluation metrics and dosimetric changes were assessed with Spearman's correlation. RESULTS Only a strong (|R2| > 0.6, p < 0.01) or moderate (|R2| > 0.4, p < 0.01) correlation was established between the traditional geometric metrics and three dosimetric evaluation indices of the target (conformity index, homogeneity index, and mean dose). For organs at risk (OARs), a weaker or no significant relationship was found between geometric parameters and dosimetric differences. Furthermore, we found that the OAR dose distribution was affected by boundary errors of the target segmentation rather than by distance and RV to target. CONCLUSIONS Current geometric metrics reflect the dose effect of target variation only to a certain degree. To find target contour variations that do lead to OAR dosimetry changes, clinically oriented metrics that more accurately reflect how segmentation quality affects dosimetry should be constructed.
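The rank correlation between a geometric metric and a dosimetric change, as used above, can be sketched with SciPy. The DSC and dose-change values below are invented for illustration:

```python
import numpy as np
from scipy import stats

# Hypothetical paired observations: DSC of an auto-segmented target versus the
# resulting change in a dose metric (%). Values invented for illustration.
dsc = np.array([0.95, 0.92, 0.90, 0.88, 0.85, 0.83, 0.80, 0.78])
dose_change = np.array([0.1, 0.3, 0.4, 0.6, 0.9, 1.1, 1.6, 2.0])

# Spearman's correlation ranks the data first, so it captures any monotonic
# relationship without assuming linearity.
rho, p = stats.spearmanr(dsc, dose_change)
print(round(rho, 3))  # -1.0: dose change rises strictly as DSC falls here
```

A negative rho matches the intuition that worse overlap (lower DSC) goes with larger dosimetric deviations.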
Affiliation(s)
- Yang Zhong, Ying Guo, Yingtao Fang, Zhiqiang Wu, Jiazhou Wang, Weigang Hu: Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China; Shanghai Clinical Research Center for Radiation Oncology, Shanghai, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
13
Podobnik G, Strojan P, Peterlin P, Ibragimov B, Vrtovec T. HaN-Seg: The head and neck organ-at-risk CT and MR segmentation dataset. Med Phys 2023; 50:1917-1927. [PMID: 36594372] [DOI: 10.1002/mp.16197]
Abstract
PURPOSE For cancer of the head and neck (HaN), radiotherapy (RT) represents an important treatment modality. Segmentation of organs-at-risk (OARs) is the starting point of RT planning; however, existing approaches focus on either computed tomography (CT) or magnetic resonance (MR) images, while multimodal segmentation has not been thoroughly explored yet. We present a dataset of CT and MR images of the same patients with curated reference HaN OAR segmentations for an objective evaluation of segmentation methods. ACQUISITION AND VALIDATION METHODS The cohort consists of HaN images of 56 patients that underwent both CT and T1-weighted MR imaging for image-guided RT. For each patient, reference segmentations of up to 30 OARs were obtained by experts performing manual pixel-wise image annotation. While maintaining the distribution of patient age, gender, and annotation type, the patients were randomly split into training Set 1 (42 cases, or 75%) and test Set 2 (14 cases, or 25%). Baseline auto-segmentation results are also provided by training the publicly available deep nnU-Net architecture on Set 1 and evaluating its performance on Set 2. DATA FORMAT AND USAGE NOTES The data are publicly available through an open-access repository under the name HaN-Seg: The Head and Neck Organ-at-Risk CT & MR Segmentation Dataset. Images and reference segmentations are stored in the NRRD file format, where the OAR filenames follow the nomenclature recommended by the American Association of Physicists in Medicine, and OAR and demographic information is stored in separate comma-separated value files. POTENTIAL APPLICATIONS The HaN-Seg: The Head and Neck Organ-at-Risk CT & MR Segmentation Challenge is launched in parallel with the dataset release to promote the development of automated techniques for OAR segmentation in the HaN. Other potential applications include out-of-challenge algorithm development and benchmarking, as well as external validation of the developed algorithms.
Affiliation(s)
- Gašper Podobnik: Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
- Bulat Ibragimov: Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia; Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Tomaž Vrtovec: Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
14
Wang J, Chen Y, Tu Y, Xie H, Chen Y, Luo L, Zhou P, Tang Q. Evaluation of auto-segmentation for brachytherapy of postoperative cervical cancer using deep learning-based workflow. Phys Med Biol 2023; 68. [PMID: 36753762] [DOI: 10.1088/1361-6560/acba76]
Abstract
Objective. The purpose of this study was to evaluate the accuracy of brachytherapy (BT) planning structures derived from deep learning (DL) based auto-segmentation compared with standard manual delineation for postoperative cervical cancer. Approach. We introduced a convolutional neural network (CNN) developed and presented for auto-segmentation in cervical cancer radiotherapy. A dataset of 60 patients who received BT for postoperative cervical cancer was used to train and test this model for delineation of the high-risk clinical target volume (HRCTV) and organs at risk (OARs). The Dice similarity coefficient (DSC), 95% Hausdorff distance (95%HD), Jaccard coefficient (JC), and dose-volume indices (DVI) were used to evaluate accuracy. The correlation between geometric metrics and dosimetric differences was assessed by Spearman's correlation analysis. Radiation oncologists scored the auto-segmented contours by rating the level of satisfaction (no edits, minor edits, major edits). Main results. The mean DSC values of the DL-based model were 0.87, 0.94, 0.86, 0.79, and 0.92 for the HRCTV, bladder, rectum, sigmoid, and small intestine, respectively. The Bland-Altman test showed dose agreement for HRCTV_D90%, HRCTV_Dmean, bladder_D2cc, sigmoid_D2cc, and small intestine_D2cc. Wilcoxon's signed-rank test indicated significant dosimetric differences in bladder_D0.1cc, rectum_D0.1cc, and rectum_D2cc (P < 0.05). A strong correlation of HRCTV_D90% with its DSC (R = -0.842, P = 0.002) and JC (R = -0.818, P = 0.004) was found in Spearman's correlation analysis. In the physician review, 80% of HRCTVs and 72.5% of OARs in the test dataset were rated satisfactory (no edits). Significance. The proposed DL-based model achieved satisfactory agreement between the auto-segmented and manually defined contours of the HRCTV and OARs, although the clinical acceptance of small-volume doses to OARs around the target remains a concern. DL-based auto-segmentation could be an essential component of the cervical cancer workflow, generating accurate contours.
Affiliation(s)
- Jiahao Wang, Yuanyuan Chen, Yeqiang Tu, Hongling Xie, Yukai Chen, Lumeng Luo, Pengfei Zhou, Qiu Tang: Department of Radiation Oncology, Women's Hospital, School of Medicine, Zhejiang University, Hangzhou, 310006, People's Republic of China
15
Avesta A, Hossain S, Lin M, Aboian M, Krumholz HM, Aneja S. Comparing 3D, 2.5D, and 2D Approaches to Brain Image Auto-Segmentation. Bioengineering (Basel) 2023; 10:181. [PMID: 36829675] [PMCID: PMC9952534] [DOI: 10.3390/bioengineering10020181]
Abstract
Deep-learning methods for auto-segmenting brain images either segment one slice of the image (2D), five consecutive slices of the image (2.5D), or an entire volume of the image (3D). Whether one approach is superior for auto-segmenting brain images is not known. We compared these three approaches (3D, 2.5D, and 2D) across three auto-segmentation models (capsule networks, UNets, and nnUNets) to segment brain structures. We used 3430 brain MRIs, acquired in a multi-institutional study, to train and test our models. We used the following performance metrics: segmentation accuracy, performance with limited training data, required computational memory, and computational speed during training and deployment. The 3D, 2.5D, and 2D approaches respectively gave the highest to lowest Dice scores across all models. 3D models maintained higher Dice scores when the training set size was decreased from 3199 MRIs down to 60 MRIs. 3D models converged 20% to 40% faster during training and were 30% to 50% faster during deployment. However, 3D models require 20 times more computational memory compared to 2.5D or 2D models. This study showed that 3D models are more accurate, maintain better performance with limited training data, and are faster to train and deploy. However, 3D models require more computational memory compared to 2.5D or 2D models.
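The three paradigms differ only in how the network input is sliced from the volume. A minimal sketch: the five-slice 2.5D window follows the abstract's description, while the `(depth, H, W)` axis order and edge handling by slice repetition are assumptions for illustration:

```python
import numpy as np

def extract_input(volume: np.ndarray, z: int, mode: str) -> np.ndarray:
    """Build the network input for slice index z under each paradigm.

    Assumes `volume` has shape (depth, H, W).
    """
    if mode == "2d":
        return volume[z][np.newaxis]  # (1, H, W): a single slice
    if mode == "2.5d":
        # Five consecutive slices centered on z; edges padded by repetition.
        idx = np.clip(np.arange(z - 2, z + 3), 0, volume.shape[0] - 1)
        return volume[idx]            # (5, H, W)
    if mode == "3d":
        return volume                 # (D, H, W): the entire volume
    raise ValueError(f"unknown mode: {mode}")

vol = np.random.rand(40, 64, 64).astype(np.float32)
print(extract_input(vol, 10, "2d").shape)    # (1, 64, 64)
print(extract_input(vol, 10, "2.5d").shape)  # (5, 64, 64)
```

The memory trade-off reported above follows directly: a 3D model must hold the whole volume (and its feature maps) in memory, while 2D and 2.5D models only hold one thin stack at a time.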
Affiliation(s)
- Arman Avesta: Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT 06510, USA; Department of Therapeutic Radiology, Yale School of Medicine, New Haven, CT 06510, USA; Center for Outcomes Research and Evaluation, Yale School of Medicine, New Haven, CT 06510, USA
- Sajid Hossain: Department of Therapeutic Radiology, Yale School of Medicine, New Haven, CT 06510, USA; Center for Outcomes Research and Evaluation, Yale School of Medicine, New Haven, CT 06510, USA
- MingDe Lin: Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT 06510, USA; Visage Imaging, Inc., San Diego, CA 92130, USA
- Mariam Aboian: Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT 06510, USA
- Harlan M. Krumholz: Center for Outcomes Research and Evaluation, Yale School of Medicine, New Haven, CT 06510, USA; Division of Cardiovascular Medicine, Yale School of Medicine, New Haven, CT 06510, USA
- Sanjay Aneja (correspondence; Tel.: +1-203-200-2100; Fax: +1-203-737-1467): Department of Therapeutic Radiology, Yale School of Medicine, New Haven, CT 06510, USA; Center for Outcomes Research and Evaluation, Yale School of Medicine, New Haven, CT 06510, USA; Department of Biomedical Engineering, Yale University, New Haven, CT 06510, USA
16
Hirashima H, Nakamura M, Imanishi K, Nakao M, Mizowaki T. Evaluation of generalization ability for deep learning-based auto-segmentation accuracy in limited field of view CBCT of male pelvic region. J Appl Clin Med Phys 2023; 24:e13912. [PMID: 36659871] [PMCID: PMC10161011] [DOI: 10.1002/acm2.13912]
Abstract
PURPOSE The aim of this study was to evaluate the generalization ability of deep learning-based auto-segmentation for limited field-of-view (FOV) CBCT in the male pelvic region using a full-image convolutional neural network (CNN). Auto-segmentation accuracy was evaluated using various datasets with different intensity distributions and FOV sizes. METHODS A total of 171 CBCT datasets from patients with prostate cancer were enrolled: 151, 10, and 10 datasets acquired from Vero4DRT, TrueBeam STx, and Clinac-iX, respectively. The FOV for Vero4DRT, TrueBeam STx, and Clinac-iX was 20, 26, and 25 cm, respectively. The regions of interest (ROIs), including the bladder, prostate, rectum, and seminal vesicles, were manually delineated. The U2-Net architecture was used to train the segmentation model. A total of 131 limited-FOV CBCT datasets from Vero4DRT were used for training (104 datasets) and validation (27 datasets); the remainder were used for testing. The training routine was set to save the best weight values when the Dice similarity coefficient (DSC) in the validation set was maximized. Segmentation accuracy was qualitatively and quantitatively evaluated between the ground truth and predicted ROIs in the different testing datasets. RESULTS The mean visual evaluation scores ± standard deviation for the bladder, prostate, rectum, and seminal vesicles across all treatment machines were 1.0 ± 0.7, 1.5 ± 0.6, 1.4 ± 0.6, and 2.1 ± 0.8 points, respectively. The median DSC values for all imaging devices were ≥0.94 for the bladder, 0.84-0.87 for the prostate and rectum, and 0.48-0.69 for the seminal vesicles. Although the DSC values for the bladder and seminal vesicles were significantly different among the three imaging devices, the DSC value of the bladder changed by less than 1 percentage point. The median mean surface distance (MSD) values for all imaging devices were ≤1.2 mm for the bladder and 1.4-2.2 mm for the prostate, rectum, and seminal vesicles. The MSD values for the seminal vesicles were significantly different among the three imaging devices. CONCLUSION The proposed method is effective for testing datasets with intensity distributions and FOVs that differ from those of the training datasets.
Affiliation(s)
- Hideaki Hirashima: Department of Radiation Oncology and Image-Applied Therapy, Graduate School of Medicine, Kyoto University, Sakyo-ku, Kyoto, Japan
- Mitsuhiro Nakamura: Department of Radiation Oncology and Image-Applied Therapy, Graduate School of Medicine, Kyoto University, Sakyo-ku, Kyoto, Japan; Department of Advanced Medical Physics, Graduate School of Medicine, Kyoto University, Sakyo-ku, Kyoto, Japan
- Megumi Nakao: Department of Advanced Medical Engineering and Intelligence, Graduate School of Medicine, Kyoto University, Sakyo-ku, Kyoto, Japan
- Takashi Mizowaki: Department of Radiation Oncology and Image-Applied Therapy, Graduate School of Medicine, Kyoto University, Sakyo-ku, Kyoto, Japan
17
Chung SY, Chang JS, Kim YB. Comprehensive clinical evaluation of deep learning-based auto-segmentation for radiotherapy in patients with cervical cancer. Front Oncol 2023; 13:1119008. [PMID: 37188180] [PMCID: PMC10175826] [DOI: 10.3389/fonc.2023.1119008]
Abstract
Background and purpose Deep learning-based models have been actively investigated for various aspects of radiotherapy. However, for cervical cancer, only a few studies dealing with the auto-segmentation of organs-at-risk (OARs) and clinical target volumes (CTVs) exist. This study aimed to train a deep learning-based auto-segmentation model for OAR/CTVs for patients with cervical cancer undergoing radiotherapy and to evaluate the model's feasibility and efficacy with not only geometric indices but also comprehensive clinical evaluation. Materials and methods A total of 180 abdominopelvic computed tomography images were included (training set, 165; validation set, 15). Geometric indices such as the Dice similarity coefficient (DSC) and the 95% Hausdorff distance (HD) were analyzed. A Turing test was performed and physicians from other institutions were asked to delineate contours with and without using auto-segmented contours to assess inter-physician heterogeneity and contouring time. Results The correlation between the manual and auto-segmented contours was acceptable for the anorectum, bladder, spinal cord, cauda equina, right and left femoral heads, bowel bag, uterocervix, liver, and left and right kidneys (DSC greater than 0.80). The stomach and duodenum showed DSCs of 0.67 and 0.73, respectively. CTVs showed DSCs between 0.75 and 0.80. Turing test results were favorable for most OARs and CTVs. No auto-segmented contours had large, obvious errors. The median overall satisfaction score of the participating physicians was 7 out of 10. Auto-segmentation reduced heterogeneity and shortened contouring time by 30 min among radiation oncologists from different institutions. Most participants favored the auto-contouring system. Conclusion The proposed deep learning-based auto-segmentation model may be an efficient tool for patients with cervical cancer undergoing radiotherapy. 
Although the current model may not completely replace humans, it can serve as a useful and efficient tool in real-world clinics.
Affiliation(s)
- Seung Yeun Chung: Department of Radiation Oncology, Yonsei University College of Medicine, Seoul, Republic of Korea; Department of Radiation Oncology, Ajou University School of Medicine, Suwon, Republic of Korea
- Jee Suk Chang: Department of Radiation Oncology, Yonsei University College of Medicine, Seoul, Republic of Korea
- Yong Bae Kim (correspondence): Department of Radiation Oncology, Yonsei University College of Medicine, Seoul, Republic of Korea
18
Xia X, Wang J, Liang S, Ye F, Tian MM, Hu W, Xu L. An attention base U-net for parotid tumor autosegmentation. Front Oncol 2022; 12:1028382. [PMID: 36505865] [PMCID: PMC9730401] [DOI: 10.3389/fonc.2022.1028382]
Abstract
Parotid neoplasms are uncommon, accounting for less than 3% of all head and neck cancers and less than 0.3% of all new cancers diagnosed annually. Due to their nonspecific imaging features and heterogeneous nature, accurate preoperative diagnosis remains a challenge. Automatic parotid tumor segmentation may help physicians evaluate these tumors. Two hundred eighty-five patients diagnosed with benign or malignant parotid tumors were enrolled in this study. Parotid and tumor tissues were segmented by 3 radiologists on T1-weighted (T1w), T2-weighted (T2w), and T1-weighted contrast-enhanced (T1wC) MR images. These images were randomly divided into a training dataset (90%) and a validation dataset (10%), and 10-fold cross-validation was performed to assess performance. An attention-based U-net for parotid tumor auto-segmentation was created on the MRI T1w, T2w, and T1wC images. The results were evaluated in a separate dataset; the mean Dice similarity coefficient (DSC) for both parotids was 0.88, and the mean DSC for left and right tumors was 0.85 and 0.86, respectively. These results indicate that the performance of this model is comparable to the radiologists' manual segmentation. In conclusion, an attention-based U-net for parotid tumor auto-segmentation may assist physicians in evaluating parotid gland tumors.
Affiliation(s)
- Xianwu Xia: The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China; Department of Oncology Intervention, The Affiliated Municipal Hospital of Taizhou University, Taizhou, China; Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Jiazhou Wang: Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Sheng Liang: Department of Oncology Intervention, The Affiliated Municipal Hospital of Taizhou University, Taizhou, China
- Fangfang Ye: Department of Oncology Intervention, The Affiliated Municipal Hospital of Taizhou University, Taizhou, China
- Min-Ming Tian: Department of Oncology Intervention, Jiangxi University of Traditional Chinese Medicine, Nanchang, Jiangxi, China
- Weigang Hu (correspondence): Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Leiming Xu (correspondence): The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
19
Yan C, Guo B, Tendulkar R, Xia P. Contour similarity and its implication on inverse prostate SBRT treatment planning. J Appl Clin Med Phys 2022; 24:e13809. [PMID: 36300837] [PMCID: PMC9924104] [DOI: 10.1002/acm2.13809]
Abstract
PURPOSE Success of auto-segmentation is measured by the similarity between auto and manual contours that is often quantified by Dice coefficient (DC). The dosimetric impact of contour variability on inverse planning has been rarely reported. The main aim of this study is to investigate whether automatically generated organs-at-risk (OARs) could be used in inverse prostate stereotactic body radiation therapy (SBRT) planning and whether the dosimetric parameters are still clinically acceptable after radiation oncologists modify the OARs. METHODS AND MATERIALS Planning computed tomography images from 10 patients treated with SBRT for prostate cancer were selected and automatically segmented by commercially available atlas-based software. The automatically generated OAR contours were compared with the manually drawn contours. Two volumetric modulated arc therapy (VMAT) plans, autoRec-VMAT (where only automatically generated rectums were used in optimization) and autoAll-VMAT (where automatically generated OARs were used in inverse optimization) were generated. Dosimetric parameters based on the manually drawn PTV and OARs were compared with the clinically approved plans. RESULTS The DCs for the rectum contours varied from 0.55 to 0.74 with a mean value of 0.665. Differences of D95 of the PTV between autoRec-VMAT and manu-VMAT plans varied from 0.03% to -2.85% with a mean value of -0.64%. Differences of D0.03cc of manual rectum between the two plans varied from -0.86% to 9.94% with a mean value of 2.71%. D95 of PTV between autoAll-VMAT and manu-VMAT plans varied from 0.28% to -2.9% with a mean value -0.83%. Differences of D0.03cc of manual rectum between the two plans varied from -0.76% to 6.72% with a mean value of 2.62%. CONCLUSION Our study implies that it is possible to use unedited automatically generated OARs to perform initial inverse prostate SBRT planning. 
After radiation oncologists modify/approve the OARs, the plan qualities based on the manually drawn OARs are still clinically acceptable, and a re-optimization may not be needed.
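Most studies in this list report contour agreement as a Dice coefficient. As an illustration only (the function and toy masks below are not taken from any of the cited papers), the DC between two binary masks can be sketched as:

```python
import numpy as np

def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two boolean masks."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy 2D "contours": two 4-voxel squares overlapping in 2 voxels,
# so DSC = 2*2 / (4+4) = 0.5.
auto = np.zeros((4, 4), dtype=bool)
auto[1:3, 1:3] = True
manual = np.zeros((4, 4), dtype=bool)
manual[1:3, 2:4] = True
```

A DSC of 1.0 means identical masks; values around 0.55-0.74, as for the rectum contours above, indicate only moderate overlap.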
Affiliation(s)
- Chenyu Yan: Department of Radiation Oncology, Cleveland Clinic Foundation, Cleveland, Ohio, USA
- Bingqi Guo: Department of Radiation Oncology, Cleveland Clinic Foundation, Cleveland, Ohio, USA
- Rahul Tendulkar: Department of Radiation Oncology, Cleveland Clinic Foundation, Cleveland, Ohio, USA
- Ping Xia: Department of Radiation Oncology, Cleveland Clinic Foundation, Cleveland, Ohio, USA

20
Gibbons E, Hoffmann M, Westhuyzen J, Hodgson A, Chick B, Last A. Clinical evaluation of deep learning and atlas-based auto-segmentation for critical organs at risk in radiation therapy. J Med Radiat Sci 2022; 70 Suppl 2:15-25. [PMID: 36148621] [PMCID: PMC10122925] [DOI: 10.1002/jmrs.618]
Abstract
INTRODUCTION Contouring organs at risk (OARs) is a time-intensive task that is a critical part of radiation therapy. Atlas-based automatic segmentation has shown some success at reducing this time burden on practitioners; however, this method often requires significant manual editing to reach a clinically accurate standard. Deep learning (DL) auto-segmentation has recently emerged as a promising solution. This study compares the accuracy of DL and atlas-based auto-segmentation in relation to clinical 'gold standard' reference contours. METHODS Ninety CT datasets (30 head and neck, 30 thoracic, 30 pelvic) were automatically contoured using both atlas and DL segmentation techniques. Sixteen critical OARs were then quantitatively measured for accuracy using the Dice similarity coefficient (DSC) and Hausdorff distance (HD). Qualitative analysis was performed to visually classify the accuracy of each structure into one of four explicitly defined categories. Additionally, the time to edit atlas and DL contours to a clinically acceptable level was recorded for a subset of 9 OARs. RESULTS Of the 16 OARs analysed, DL delivered statistically significant improvements over atlas segmentation in 13 OARs measured with DSC, 12 OARs measured with HD, and 12 OARs measured qualitatively. The mean editing time for the subset of DL contours was 50%, 23% and 61% faster (all P < 0.05) than that of atlas segmentation for the head and neck, thorax, and pelvis respectively. CONCLUSIONS Deep learning segmentation comprehensively outperformed atlas-based contouring for the majority of evaluated OARs. Improvements were observed in geometric accuracy and visual acceptability, while editing time was reduced leading to increased workflow efficiency.
Affiliation(s)
- Eddie Gibbons: Department of Radiation Oncology, Mid North Coast Cancer Institute, Port Macquarie, New South Wales, Australia
- Matthew Hoffmann: Department of Radiation Oncology, Mid North Coast Cancer Institute, Port Macquarie, New South Wales, Australia
- Justin Westhuyzen: Department of Radiation Oncology, Mid North Coast Cancer Institute, Coffs Harbour, New South Wales, Australia
- Andrew Hodgson: Department of Radiation Oncology, Mid North Coast Cancer Institute, Port Macquarie, New South Wales, Australia
- Brendan Chick: Department of Radiation Oncology, Mid North Coast Cancer Institute, Port Macquarie, New South Wales, Australia
- Andrew Last: Department of Radiation Oncology, Mid North Coast Cancer Institute, Port Macquarie, New South Wales, Australia

21
Abstract
Purpose To evaluate the accuracy and efficiency of artificial intelligence (AI) segmentation in total marrow irradiation (TMI), including contours throughout the head and neck (H&N), thorax, abdomen, and pelvis. Methods AI segmentation software was clinically introduced for total body contouring in TMI, covering 27 organs at risk (OARs) and 4 planning target volumes (PTVs). This work compares the clinically utilized contours to the AI-TMI contours for 21 patients. Structure and image DICOM data were used to generate comparisons, including volumetric, spatial, and dosimetric variations between the AI- and human-edited contour sets. Conventional volume and surface measures, including the Sørensen-Dice coefficient (Dice) and the 95th-percentile Hausdorff distance (HD95), were used, and novel efficiency metrics were introduced. The clinical efficiency gain was estimated by the percentage of the AI contour surface within 1 mm of the clinical contour surface: an unedited AI contour has an efficiency gain of 100%, while an AI contour with 70% of its surface within 1 mm of a clinical contour has an efficiency gain of 70%. The dosimetric deviations were estimated from the clinical dose distribution to compute the dose volume histogram (DVH) for all structures. Results A total of 467 contours were compared in the 21 patients. In PTVs, contour surfaces deviated by >1 mm in 38.6% ± 23.1% of structures, an average efficiency gain of 61.4%. Deviations >5 mm were detected in 12.0% ± 21.3% of the PTV contours. In OARs, deviations >1 mm were detected in 24.4% ± 27.1% of the structure surfaces and >5 mm in 7.2% ± 18.0%, an average clinical efficiency gain of 75.6%. In H&N OARs, efficiency gains ranged from 42% in the optic chiasm to 100% in the eyes (unedited in all cases). In the thorax, average efficiency gains were >80% in the spinal cord, heart, and both lungs. Efficiency gains ranged from 60-70% in the spleen, stomach, rectum, and bowel, and from 75-84% in the liver, kidney, and bladder.
DVH differences exceeded 0.05 at some dose level in 109/467 curves. The most common 5%-DVH variations were in the esophagus (86%), rectum (48%), and PTVs (22%). Conclusions AI auto-segmentation software offers a powerful solution for enhanced efficiency in TMI treatment planning. Whole-body segmentation including PTVs and normal organs was successful based on spatial and dosimetric comparison.
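The efficiency-gain metric described in this abstract (percentage of the AI contour surface within 1 mm of the clinical surface) can be sketched for point-sampled surfaces. The brute-force nearest-neighbour version and toy coordinates below are illustrative only, not the authors' implementation:

```python
import numpy as np

def efficiency_gain(ai_surface: np.ndarray, clinical_surface: np.ndarray,
                    tol_mm: float = 1.0) -> float:
    """Percentage of AI surface points lying within tol_mm of the
    clinical surface (brute-force nearest-neighbour distances).
    Inputs are (N, 3) arrays of point coordinates in mm."""
    # Pairwise distance matrix of shape (n_ai, n_clinical).
    d = np.linalg.norm(
        ai_surface[:, None, :] - clinical_surface[None, :, :], axis=-1)
    nearest = d.min(axis=1)  # distance from each AI point to clinical surface
    return 100.0 * (nearest <= tol_mm).mean()

# Toy surfaces in mm: 3 of 4 AI points sit within 1 mm of a clinical point.
clinical = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
ai = np.array([[0.5, 0.0, 0.0],
               [10.2, 0.0, 0.0],
               [9.8, 0.5, 0.0],
               [5.0, 5.0, 0.0]])  # this point is far from both
```

For dense surfaces a k-d tree would replace the brute-force distance matrix, but the metric itself is unchanged.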
Affiliation(s)
- William Tyler Watkins: Department of Radiation Oncology, City of Hope National Medical Center, Duarte, CA, United States

22
Li J, Anne R. Comparison of Eclipse Smart Segmentation and MIM Atlas Segment for liver delineation for yttrium-90 selective internal radiation therapy. J Appl Clin Med Phys 2022; 23:e13668. [PMID: 35702944] [PMCID: PMC9359022] [DOI: 10.1002/acm2.13668]
Abstract
Purpose The aim was to compare the Smart Segmentation tool of the Eclipse treatment planning system and the Atlas Segment tool of MIM software for liver delineation in resin yttrium-90 (Y-90) procedures. Materials and methods CT images of 20 patients treated with resin Y-90 selective internal radiation therapy (SIRT) were tested. Liver contours generated with Smart Segmentation and Atlas Segment were compared with contours manually delineated by physicians. The Dice similarity coefficient (DSC), mean distance to agreement (MDA), and ratio of volume (RV) were calculated. The contours were also evaluated with activity calculations, and the ratio of activity (RA) was calculated. Results Mean DSCs were 0.77 and 0.83, mean MDAs were 0.88 and 0.71 cm, mean RVs were 0.95 and 1.02, and mean RAs were 1.00 and 1.00 for the Eclipse and MIM results, respectively. Conclusion MIM outperformed Eclipse in both DSC and MDA, whereas the differences in liver volumes and calculated activities between the Eclipse and MIM results were statistically insignificant. Both auto-segmentation tools can be used to generate initial liver contours for resin Y-90 SIRT, which need to be reviewed and edited by physicians.
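Mean distance to agreement, reported here in centimetres, is the symmetric mean of nearest-surface distances between the two contours. The numpy sketch below is illustrative, not the Eclipse or MIM implementation:

```python
import numpy as np

def mean_distance_to_agreement(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric mean distance between two surface point clouds,
    in the same units as the input coordinates. a and b are (N, k)
    arrays of surface sample points."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # Average the two directed mean nearest-neighbour distances.
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# Toy example: two parallel "surfaces" 1 unit apart, so MDA = 1.0.
surf_a = np.array([[0.0, 0.0], [1.0, 0.0]])
surf_b = np.array([[0.0, 1.0], [1.0, 1.0]])
```

Unlike DSC, MDA is in physical units, so the 0.88 cm vs 0.71 cm values above compare directly to clinical tolerance margins.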
Affiliation(s)
- Jun Li: Department of Radiation Oncology, Thomas Jefferson University, Philadelphia, Pennsylvania, USA
- Rani Anne: Department of Radiation Oncology, Thomas Jefferson University, Philadelphia, Pennsylvania, USA

23
Claessens M, Vanreusel V, De Kerf G, Mollaert I, Löfman F, Gooding MJ, Brouwer C, Dirix P, Verellen D. Machine learning-based detection of aberrant deep learning segmentations of target and organs at risk for prostate radiotherapy using a secondary segmentation algorithm. Phys Med Biol 2022; 67. [PMID: 35561701] [DOI: 10.1088/1361-6560/ac6fad]
Abstract
Objective. The output of a deep learning (DL) auto-segmentation application should be reviewed, corrected if needed, and approved before being used clinically. This verification procedure is labour-intensive, time-consuming, and user-dependent, which can lead to significant errors that affect overall treatment quality. Additionally, when the time needed to correct auto-segmentations approaches the time needed to delineate the target and organs at risk from scratch, the usability of the DL model can be questioned. Therefore, an automated quality assurance framework was developed to detect aberrant auto-segmentations in advance. Approach. Five organs (prostate, bladder, anorectum, left and right femoral heads) were auto-delineated on CT acquisitions for 48 prostate patients by an in-house trained primary DL model. An experienced radiation oncologist assessed the correctness of the model output and categorised the auto-segmentations into two classes according to whether minor or major adaptations were needed. Subsequently, an independent, secondary DL model was implemented to delineate the same structures as the primary model. Quantitative comparison metrics were calculated from both models' segmentations and used as input features for a machine learning classification model to predict the output quality of the primary model. Main results. For every organ, independent validation by the secondary model detected primary auto-segmentations that needed major adaptation with high sensitivity (recall = 1) based on the calculated quantitative metrics. The surface DSC and added path length (APL) were found to be more indicative of the time needed to adapt auto-segmentations than standard quantitative metrics. Significance. This proposed method provides a proof of concept for using an independent DL segmentation model in combination with an ML classifier to save time during QA of auto-segmentations.
Integrating such a system into current automatic segmentation pipelines can increase the efficiency of the radiotherapy contouring workflow.
Affiliation(s)
- Michaël Claessens: Department of Radiation Oncology, Iridium Network, Wilrijk (Antwerp), Belgium; Centre for Oncological Research (CORE), Integrated Personalized and Precision Oncology Network (IPPON), University of Antwerp, Belgium
- Verdi Vanreusel: Department of Radiation Oncology, Iridium Network, Wilrijk (Antwerp), Belgium
- Geert De Kerf: Department of Radiation Oncology, Iridium Network, Wilrijk (Antwerp), Belgium
- Isabelle Mollaert: Department of Radiation Oncology, Iridium Network, Wilrijk (Antwerp), Belgium
- Fredrik Löfman: Department of Machine Learning, RaySearch Laboratories AB, Stockholm, Sweden
- Charlotte Brouwer: Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, The Netherlands
- Piet Dirix: Department of Radiation Oncology, Iridium Network, Wilrijk (Antwerp), Belgium; Centre for Oncological Research (CORE), Integrated Personalized and Precision Oncology Network (IPPON), University of Antwerp, Belgium
- Dirk Verellen: Department of Radiation Oncology, Iridium Network, Wilrijk (Antwerp), Belgium; Centre for Oncological Research (CORE), Integrated Personalized and Precision Oncology Network (IPPON), University of Antwerp, Belgium

24
Wang J, Chen Z, Yang C, Qu B, Ma L, Fan W, Zhou Q, Zheng Q, Xu S. Evaluation Exploration of Atlas-Based and Deep Learning-Based Automatic Contouring for Nasopharyngeal Carcinoma. Front Oncol 2022; 12:833816. [PMID: 35433460] [PMCID: PMC9008357] [DOI: 10.3389/fonc.2022.833816]
Abstract
Purpose The purpose of this study was to evaluate and explore the differences between atlas-based and deep learning (DL)-based auto-segmentation schemes for organs at risk (OARs) in nasopharyngeal carcinoma, to provide valuable guidance for clinical practice. Methods 120 nasopharyngeal carcinoma cases were used to build the atlas database in MIM Maestro and to train a DL-based model (AccuContour®), and another 20 nasopharyngeal carcinoma cases were randomly selected from outside the atlas database. Experienced physicians contoured 14 OARs in the 20 patients based on the published consensus guidelines, and these were defined as the reference volumes (Vref). Meanwhile, these OARs were auto-contoured using an atlas-based model, a pre-built DL-based model, and an on-site trained DL-based model; the resulting volumes were named Vatlas, VDL-pre-built, and VDL-trained, respectively. The similarities between Vatlas, VDL-pre-built, VDL-trained, and Vref were assessed using the Dice similarity coefficient (DSC), Jaccard coefficient (JAC), maximum Hausdorff distance (HDmax), and deviation of centroid (DC). A one-way ANOVA test was carried out to assess the pairwise differences. Results The results of the three methods were similar for the brainstem and eyes. For the inner ears and temporomandibular joints, the pre-built DL-based model performed worst, as did atlas-based auto-segmentation for the lens. For segmentation of the optic nerves, the trained DL-based model showed the best performance (p < 0.05). For contouring of the oral cavity, the DSC of VDL-pre-built was the smallest and that of VDL-trained the largest (p < 0.05). For the parotid glands, the DSC of Vatlas was the smallest (about 0.80), with VDL-pre-built and VDL-trained slightly larger (about 0.82).
Apart from the oral cavity, parotid glands, and brainstem, the maximum Hausdorff distances of the other organs were below 0.5 cm with the trained DL-based segmentation model. The trained DL-based segmentation method performed well for all organs, with a maximum average centroid deviation of no more than 0.3 cm. Conclusion Trained DL-based segmentation performs significantly better than atlas-based segmentation for nasopharyngeal carcinoma, especially for OARs with small volumes. Although some delineation results still need further modification, auto-segmentation methods improve work efficiency and provide useful support for clinical work.
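Deviation of centroid (DC), one of the four similarity measures used in this study, is simply the distance between the centroids of the two masks in physical units. A minimal sketch (illustrative only; the voxel-spacing parameter is an assumption, not the paper's code):

```python
import numpy as np

def centroid_deviation(mask_a: np.ndarray, mask_b: np.ndarray,
                       spacing=(1.0, 1.0, 1.0)) -> float:
    """Euclidean distance between the centroids of two binary masks,
    scaled by voxel spacing so the result is in physical units."""
    ca = np.array(np.nonzero(mask_a)).mean(axis=1) * np.array(spacing)
    cb = np.array(np.nonzero(mask_b)).mean(axis=1) * np.array(spacing)
    return float(np.linalg.norm(ca - cb))

# Toy masks: single voxels two slices apart along the last axis.
mask_a = np.zeros((1, 1, 3), dtype=bool)
mask_a[0, 0, 0] = True
mask_b = np.zeros((1, 1, 3), dtype=bool)
mask_b[0, 0, 2] = True
```

DC is insensitive to shape differences, which is why it is reported alongside DSC and HDmax rather than on its own.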
Affiliation(s)
- Jinyuan Wang: Department of Radiation Oncology, The First Medical Center of the Chinese PLA General Hospital, Beijing, China
- Baolin Qu: Department of Radiation Oncology, The First Medical Center of the Chinese PLA General Hospital, Beijing, China
- Lin Ma: Department of Radiation Oncology, The First Medical Center of the Chinese PLA General Hospital, Beijing, China
- Wenjun Fan: Department of Radiation Oncology, The First Medical Center of the Chinese PLA General Hospital, Beijing, China
- Qichao Zhou: Manteia Technologies Co., Ltd., Xiamen, China
- Qingzeng Zheng: Department of Radiation Oncology, Beijing Geriatric Hospital, Beijing, China
- Shouping Xu: Department of Radiation Oncology, The First Medical Center of the Chinese PLA General Hospital, Beijing, China

25
Jiang X, Yu H, Deng Z, Zhu Z, Fu Y. [The Effects of Different Adaptive Statistical Iterative Reconstruction-V and Convolution Kernel Parameters on Auto-Segmentation Stability in CT Images]. Zhongguo Yi Liao Qi Xie Za Zhi 2022; 46:219-224. [PMID: 35411755] [DOI: 10.3969/j.issn.1671-7104.2022.02.022]
Abstract
Objective The study aims to investigate the effects of different adaptive statistical iterative reconstruction-V (ASiR-V) and convolution kernel parameters on the stability of deep learning-based CT auto-segmentation. Method Twenty patients who had received pelvic radiotherapy were selected, and CT image datasets were reconstructed with different parameters. Structures including three soft-tissue organs (bladder, bowel bag, small intestine) and five bony structures (left and right femoral heads, left and right femurs, pelvis) were then segmented automatically by a deep learning neural network. Performance was evaluated by the Dice similarity coefficient (DSC) and Hausdorff distance, using filtered back projection (FBP) as the reference. Results Deep learning auto-segmentation was greatly affected by ASiR-V but less affected by the convolution kernel, especially in soft tissues. Conclusion The stability of auto-segmentation is affected by the choice of reconstruction parameters. In practical application, it is necessary to find a balance between image quality and segmentation quality, or to improve the segmentation network to enhance the stability of auto-segmentation.
Affiliation(s)
- Xiaoxuan Jiang: Department of Radiotherapy, West China Hospital, Sichuan University, Chengdu, 610041
- Hang Yu: Department of Radiotherapy, West China Hospital, Sichuan University, Chengdu, 610041
- Zhonghua Deng: Department of Radiotherapy, West China Hospital, Sichuan University, Chengdu, 610041
- Zhihui Zhu: Department of Radiotherapy, West China Hospital, Sichuan University, Chengdu, 610041
- Yuchuan Fu: Department of Radiotherapy, West China Hospital, Sichuan University, Chengdu, 610041

26
Duan J, Bernard M, Downes L, Willows B, Feng X, Mourad W, St Clair W, Chen Q. Evaluating the clinical acceptability of deep learning contours of prostate and organs-at-risk in an automated prostate treatment planning process. Med Phys 2022; 49:2570-2581. [PMID: 35147216] [DOI: 10.1002/mp.15525]
Abstract
BACKGROUND Radiation treatment is an effective and the most common treatment option for prostate cancer. The treatment planning process requires accurate and precise segmentation of the prostate and organs at risk (OARs), which is laborious and time-consuming when contoured manually. Artificial intelligence (AI)-based auto-segmentation has the potential to significantly accelerate the radiation therapy treatment planning process; however, the accuracy of auto-segmentation needs to be validated before its full clinical adoption. PURPOSE A commercial AI-based contouring model was trained to provide segmentation of the prostate and surrounding OARs. The segmented structures were input to a commercial auto-planning module for automated prostate treatment planning. This study comprehensively evaluates the performance of this contouring model in the automated prostate treatment planning process. METHODS AND MATERIALS A 3D U-Net-based model (INTContour, Carina AI) was trained and validated on 84 computed tomography (CT) scans and tested on an additional 23 CT scans from patients treated at our local institution. Prostate and OAR contours generated by the AI model (AI contours) were geometrically evaluated against Reference contours. The prostate contours were further compared across the AI, Reference, and two additional observer contours using inter-observer variation (IOV) and 3D boundary discrepancy analyses. A blinded evaluation was introduced to subjectively assess the clinical acceptability of the AI contours. Finally, treatment plans were created from an automated prostate planning workflow using the AI contours and were evaluated for clinical acceptability following the RTOG-0815 protocol.
RESULTS The AI contours demonstrated good geometric accuracy on the OARs and prostate, with average Dice similarity coefficients (DSC) for the bladder, rectum, femoral heads, seminal vesicles, and penile bulb of 0.93, 0.85, 0.96, 0.72, and 0.53, respectively. The DSC, 95th-percentile directed Hausdorff distance (HD95), and mean surface distance (MSD) for the prostate were 0.83±0.05, 6.07±1.87 mm, and 2.07±0.73 mm, respectively. No significant differences were found when comparing with the IOV. In the double-blinded evaluation, 95.7% of the AI contours were scored as either "Perfect" (34.8%) or "Acceptable" (60.9%), while only one case (4.3%) was scored as "Unacceptable with minor changes required". In total, 69.6% of the AI contours were considered equal to or better than the Reference contours by an independent radiation oncologist. Automated treatment plans created from the AI contours produced dosimetric distributions that were similar to, and as clinically acceptable as, those from plans created from the Reference contours. CONCLUSIONS The investigated commercial AI-based model for prostate segmentation demonstrated good performance in clinical practice. Using this model, the implementation of an automated prostate treatment planning process is clinically feasible.
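HD95, reported for the prostate above, takes the 95th percentile of nearest-surface distances rather than the maximum, making it robust to single outlier points. A minimal point-cloud sketch (illustrative only, not the study's evaluation code):

```python
import numpy as np

def hd95(a: np.ndarray, b: np.ndarray) -> float:
    """95th-percentile symmetric Hausdorff distance between two surface
    point clouds: the max of the two directed 95th-percentile
    nearest-neighbour distances. a and b are (N, k) coordinate arrays."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    d_ab = np.percentile(d.min(axis=1), 95)  # directed a -> b
    d_ba = np.percentile(d.min(axis=0), 95)  # directed b -> a
    return float(max(d_ab, d_ba))

# Toy surfaces: b is a shifted 1 unit away from a, so HD95 = 1.0.
a = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
b = a + np.array([0.0, 1.0])
```

The same pairwise-distance matrix also yields the classic (100th-percentile) Hausdorff distance by replacing `np.percentile(..., 95)` with `.max()`.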
Affiliation(s)
- Jingwei Duan: Department of Radiation Medicine, University of Kentucky, Lexington, KY 40506
- Mark Bernard: Department of Radiation Medicine, University of Kentucky, Lexington, KY 40506
- Laura Downes: Department of Radiation Medicine, University of Kentucky, Lexington, KY 40506
- Brooke Willows: Department of Radiation Medicine, University of Kentucky, Lexington, KY 40506
- Xue Feng: Carina Medical LLC, 145 Graham Ave, A168, Lexington, KY 40506
- Waleed Mourad: Department of Radiation Medicine, University of Kentucky, Lexington, KY 40506
- William St Clair: Department of Radiation Medicine, University of Kentucky, Lexington, KY 40506
- Quan Chen: Department of Radiation Medicine, University of Kentucky, Lexington, KY 40506

27
Iyer A, Thor M, Onochie I, Hesse J, Zakeri K, LoCastro E, Jiang J, Veeraraghavan H, Elguindi S, Lee NY, Deasy JO, Apte AP. Prospectively-validated deep learning model for segmenting swallowing and chewing structures in CT. Phys Med Biol 2022; 67. [PMID: 34874302] [PMCID: PMC8911366] [DOI: 10.1088/1361-6560/ac4000]
Abstract
Objective. Delineating swallowing and chewing structures aids in radiotherapy (RT) treatment planning to limit dysphagia, trismus, and speech dysfunction. We aim to develop an accurate and efficient method to automate this process. Approach. CT scans of 242 head and neck (H&N) cancer patients acquired from 2004 to 2009 at our institution were used to develop auto-segmentation models for the masseters, medial pterygoids, larynx, and pharyngeal constrictor muscle using DeepLabV3+. A cascaded framework was used, wherein models were trained sequentially to spatially constrain each structure group based on prior segmentations. Additionally, an ensemble of models combining contextual information from axial, coronal, and sagittal views was used to improve segmentation accuracy. Prospective evaluation was conducted by measuring the amount of manual editing required in 91 H&N CT scans acquired February-May 2021. Main results. Medians and inter-quartile ranges of Dice similarity coefficients (DSC) computed on the retrospective testing set (N = 24) were 0.87 (0.85-0.89) for the masseters, 0.80 (0.79-0.81) for the medial pterygoids, 0.81 (0.79-0.84) for the larynx, and 0.69 (0.67-0.71) for the constrictor. Auto-segmentations, when compared to two sets of manual segmentations in 10 randomly selected scans, showed better agreement (DSC) with each observer than the inter-observer DSC. Prospective analysis showed that most manual modifications needed for clinical use were minor, suggesting auto-contouring could increase clinical efficiency. Trained segmentation models are available for research use upon request via https://github.com/cerr/CERR/wiki/Auto-Segmentation-models. Significance. We developed deep learning-based auto-segmentation models for swallowing and chewing structures in CT and demonstrated their potential for use in treatment planning to limit complications post-RT.
To the best of our knowledge, this is the only prospectively-validated deep learning-based model for segmenting chewing and swallowing structures in CT. Segmentation models have been made open-source to facilitate reproducibility and multi-institutional research.
Affiliation(s)
- Aditi Iyer: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, USA
- Maria Thor: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, USA
- Jennifer Hesse: Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, USA
- Kaveh Zakeri: Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, USA
- Eve LoCastro: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, USA
- Jue Jiang: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, USA
- Harini Veeraraghavan: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, USA
- Sharif Elguindi: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, USA
- Nancy Y. Lee: Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, USA
- Joseph O. Deasy: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, USA
- Aditya P. Apte: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, USA

28
Abstract
INTRODUCTION Radiotherapy is one of the most effective ways to treat lung cancer, and accurately delineating the gross target volume is a key step in the radiotherapy process. In current clinical practice, the target area is still delineated manually by radiologists, which is time-consuming and laborious. These problems can be better addressed by deep learning-assisted automatic segmentation methods. METHODS In this paper, a 3D CNN model named 3D ResSE-Unet is proposed for gross tumor volume segmentation for stage III NSCLC radiotherapy. The model is based on 3D Unet and combines residual connections with channel attention mechanisms. Three-dimensional convolution operations and an encoding-decoding structure are used to mine the three-dimensional spatial information of tumors from computed tomography data. Inspired by ResNet and SE-Net, residual connections and channel attention are used to improve segmentation performance. A total of 214 patients with stage III NSCLC were collected; 148 cases were randomly assigned to the training set, 30 to the validation set, and 36 to the testing set. Segmentation performance was evaluated on the testing set. In addition, segmentation results at different depths of 3D Unet were analyzed, and the performance of 3D ResSE-Unet was compared with 3D Unet, 3D Res-Unet, and 3D SE-Unet. RESULTS Among the depths compared, a 3D Unet with four downsampling stages was most suitable for this task. Compared with 3D Unet, 3D Res-Unet, and 3D SE-Unet, 3D ResSE-Unet obtained superior results: its Dice similarity coefficient, 95th-percentile Hausdorff distance, and average surface distance reached 0.7367, 21.39 mm, and 4.962 mm, respectively, and the average time to segment one patient was only about 10 s. CONCLUSION The method proposed in this study provides a new tool for GTV auto-segmentation and may be useful for lung cancer radiotherapy.
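The SE-Net-style channel attention referenced here squeezes each channel of a feature map to a scalar by global average pooling, passes the result through a small bottleneck, and rescales the channels with sigmoid gates. A minimal numpy forward pass, with illustrative weights rather than the paper's trained parameters:

```python
import numpy as np

def squeeze_excite(x: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """Forward pass of a squeeze-and-excitation block on a feature map
    x of shape (C, D, H, W). w1 (r, C) and w2 (C, r) form the bottleneck,
    where r < C is the reduction dimension."""
    c = x.shape[0]
    squeezed = x.reshape(c, -1).mean(axis=1)        # squeeze: (C,) channel means
    hidden = np.maximum(w1 @ squeezed, 0.0)         # excitation bottleneck + ReLU
    scale = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))    # sigmoid gates: (C,)
    return x * scale[:, None, None, None]           # channel-wise rescaling

# Toy feature map with 2 channels; zero w2 makes every gate sigmoid(0) = 0.5.
x = np.ones((2, 1, 2, 2))
w1 = np.array([[1.0, 1.0]])   # reduction to r = 1
w2 = np.zeros((2, 1))
```

In 3D ResSE-Unet this gating would sit inside each residual block, letting the network emphasize informative channels at negligible parameter cost; the numpy version above only demonstrates the arithmetic.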
Affiliation(s)
- Xinhao Yu: College of Bioengineering, Chongqing University, Chongqing, China; Department of Radiation Oncology, Chongqing University Cancer Hospital, Chongqing, China
- Fu Jin: Department of Radiation Oncology, Chongqing University Cancer Hospital, Chongqing, China
- HuanLi Luo: Department of Radiation Oncology, Chongqing University Cancer Hospital, Chongqing, China
- Qianqian Lei: Department of Radiation Oncology, Chongqing University Cancer Hospital, Chongqing, China
- Yongzhong Wu: Department of Radiation Oncology, Chongqing University Cancer Hospital, Chongqing, China

29
Ma CY, Zhou JY, Xu XT, Guo J, Han MF, Gao YZ, Du H, Stahl JN, Maltz JS. Deep learning-based auto-segmentation of clinical target volumes for radiotherapy treatment of cervical cancer. J Appl Clin Med Phys 2021; 23:e13470. [PMID: 34807501] [PMCID: PMC8833283] [DOI: 10.1002/acm2.13470]
Abstract
Objectives Because radiotherapy is indispensable for treating cervical cancer, it is critical to delineate the radiation targets accurately and efficiently. We evaluated a deep learning (DL)-based auto-segmentation algorithm for automatic contouring of clinical target volumes (CTVs) in cervical cancer. Methods Computed tomography (CT) datasets from 535 cervical cancer patients treated with definitive or postoperative radiotherapy were collected. A DL tool based on VB-Net was developed to delineate CTVs of the pelvic lymph drainage area (dCTV1) and parametrial area (dCTV2) in the definitive radiotherapy group (training/validation/test split: 157/20/23). The CTV of the pelvic lymph drainage area (pCTV1) was delineated in the postoperative radiotherapy group (training/validation/test split: 272/30/33). The Dice similarity coefficient (DSC), mean surface distance (MSD), and Hausdorff distance (HD) were used to evaluate contouring accuracy, and contouring times were recorded for efficiency comparison. Results The mean DSC/MSD/HD values for our DL-based tool were 0.88/1.32 mm/21.60 mm for dCTV1, 0.70/2.42 mm/22.44 mm for dCTV2, and 0.86/1.15 mm/20.78 mm for pCTV1. Only minor modifications were needed for 63.5% of the auto-segmentations to meet clinical requirements. The contouring accuracy of the DL-based tool was comparable to that of senior radiation oncologists and superior to that of junior/intermediate radiation oncologists. Additionally, DL assistance improved the performance of junior radiation oncologists for dCTV2 and pCTV1 contouring (mean DSC increases: 0.20 for dCTV2, 0.03 for pCTV1; mean contouring time decreases: 9.8 min for dCTV2, 28.9 min for pCTV1). Conclusions DL-based auto-segmentation improves CTV contouring accuracy, reduces contouring time, and improves clinical efficiency in treating cervical cancer.
Affiliation(s)
- Chen-Ying Ma
- Department of Radiation Oncology, First Affiliated Hospital of Soochow University, Suzhou, China
- Ju-Ying Zhou
- Department of Radiation Oncology, First Affiliated Hospital of Soochow University, Suzhou, China
- Xiao-Ting Xu
- Department of Radiation Oncology, First Affiliated Hospital of Soochow University, Suzhou, China
- Jian Guo
- Department of Radiation Oncology, First Affiliated Hospital of Soochow University, Suzhou, China
- Miao-Fei Han
- Shanghai United Imaging Healthcare, Co. Ltd., Jiading, China
- Yao-Zong Gao
- Shanghai United Imaging Healthcare, Co. Ltd., Jiading, China
- Hui Du
- Shanghai United Imaging Healthcare, Co. Ltd., Jiading, China
30
Liu Z, Liu F, Chen W, Tao Y, Liu X, Zhang F, Shen J, Guan H, Zhen H, Wang S, Chen Q, Chen Y, Hou X. Automatic Segmentation of Clinical Target Volume and Organs-at-Risk for Breast Conservative Radiotherapy Using a Convolutional Neural Network. Cancer Manag Res 2021; 13:8209-8217. [PMID: 34754241] [PMCID: PMC8572021] [DOI: 10.2147/cmar.s330249]
Abstract
Objective Delineation of the clinical target volume (CTV) and organs at risk (OARs) is important for radiotherapy but time-consuming. We trained and evaluated a U-ResNet model to provide fast and consistent auto-segmentation. Methods We collected CT scans of 160 breast cancer patients who underwent breast-conserving surgery (BCS) and were treated with radiotherapy. The CTV and OARs were delineated manually and used for model training. The Dice similarity coefficient (DSC) and 95th percentile Hausdorff distance (95HD) were used to assess model performance. CTV and OAR contours were randomly selected as ground truth (GT) masks, and artificial intelligence (AI) masks were generated by the proposed model. Two clinicians scored randomly presented CTV contours, the score differences were compared, and the consistency between the two clinicians was tested. The time cost of auto-delineation was also evaluated. Results The mean DSC values of the proposed method were 0.94, 0.95, 0.94, 0.96, 0.96, and 0.93 for the breast CTV, contralateral breast, heart, right lung, left lung, and spinal cord, respectively. The mean 95HD values were 4.31 mm, 3.59 mm, 4.86 mm, 3.18 mm, 2.79 mm, and 4.37 mm for the same structures. The average CTV scores for AI and GT were 2.89 versus 2.92 when evaluated by oncologist A (P = 0.612) and 2.75 versus 2.83 by oncologist B (P = 0.213), with no statistically significant differences. The consistency between the two clinicians was poor (kappa = 0.282). Auto-segmentation of the CTV and OARs took 10.03 s. Conclusion Our proposed model (U-ResNet) can improve the efficiency and accuracy of delineation compared with U-Net, performing on par with segmentations generated by oncologists.
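The inter-clinician consistency above is reported as Cohen's kappa. A self-contained sketch; the two raters' 3-point contour scores below are invented for illustration, not the study's data:

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical scores on the same items."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in c1) / (n * n)         # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical 3-point contour scores from two clinicians.
oncologist_a = [3, 2, 3, 3, 2, 1, 3, 2, 3, 3]
oncologist_b = [3, 3, 2, 3, 2, 2, 3, 1, 3, 2]
print(round(cohens_kappa(oncologist_a, oncologist_b), 3))
```

Values near 0 indicate agreement no better than chance, which is why a kappa of 0.282 is read as poor consistency.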
Affiliation(s)
- Zhikai Liu
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, People's Republic of China
- Fangjie Liu
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, 510060, People's Republic of China
- Wanqi Chen
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, People's Republic of China
- Yinjie Tao
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, People's Republic of China
- Xia Liu
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, People's Republic of China
- Fuquan Zhang
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, People's Republic of China
- Jing Shen
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, People's Republic of China
- Hui Guan
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, People's Republic of China
- Hongnan Zhen
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, People's Republic of China
- Shaobin Wang
- MedMind Technology Co., Ltd., Beijing, 100055, People's Republic of China
- Qi Chen
- MedMind Technology Co., Ltd., Beijing, 100055, People's Republic of China
- Yu Chen
- MedMind Technology Co., Ltd., Beijing, 100055, People's Republic of China
- Xiaorong Hou
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, People's Republic of China
31
Ren J, Eriksen JG, Nijkamp J, Korreman SS. Comparing different CT, PET and MRI multi-modality image combinations for deep learning-based head and neck tumor segmentation. Acta Oncol 2021; 60:1399-1406. [PMID: 34264157] [DOI: 10.1080/0284186x.2021.1949034]
Abstract
BACKGROUND Manual delineation of gross tumor volume (GTV) is essential for radiotherapy treatment planning, but it is time-consuming and suffers from inter-observer variability (IOV). In the clinic, CT, PET, and MRI are used to inform delineation because of their complementary characteristics. This study investigated deep learning to assist GTV delineation in head and neck squamous cell carcinoma (HNSCC) by comparing various modality combinations. MATERIALS AND METHODS This retrospective study included 153 patients with HNSCC at multiple sites, each with a planning CT, PET, and MRI (T1-weighted and T2-weighted). Clinical delineations of the gross tumor volume (GTV-T) and involved lymph nodes (GTV-N) were collected as the ground truth. The dataset was randomly divided into 92 patients for training, 31 for validation, and 30 for testing. A residual 3D UNet was used as the deep learning architecture and trained independently with four modality combinations (CT-PET-MRI, CT-MRI, CT-PET, and PET-MRI). Additionally, as a post-processing step, an ensemble was produced by averaging the outputs of the three bi-modality combinations (CT-PET, CT-MRI, and PET-MRI). Segmentation accuracy was evaluated on the test set using the Dice similarity coefficient (Dice), 95th percentile Hausdorff distance (HD95), and mean surface distance (MSD). RESULTS All imaging combinations that included PET provided similar average scores (Dice: 0.72-0.74, HD95: 8.8-9.5 mm, MSD: 2.6-2.8 mm). Only CT-MRI scored lower (Dice: 0.58, HD95: 12.9 mm, MSD: 3.7 mm). The average of the three bi-modality combinations reached Dice: 0.74, HD95: 7.9 mm, MSD: 2.4 mm. CONCLUSION Multimodal deep learning-based auto-segmentation of HNSCC GTV was demonstrated, and inclusion of the PET image was shown to be crucial. Training on combined CT, PET, and MRI data provided limited improvement over CT-PET and PET-MRI; however, combining the three bi-modality networks into an ensemble yielded promising improvements.
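The average-fusion ensemble described here amounts to averaging per-voxel probability maps from the bi-modality networks and thresholding the result. A minimal sketch with invented 4-voxel probability maps (not the study's outputs):

```python
import numpy as np

def ensemble_average(prob_maps, threshold=0.5):
    """Fuse per-voxel tumor probabilities from several models by averaging,
    then threshold to obtain the final binary segmentation."""
    fused = np.mean(np.stack(prob_maps), axis=0)
    return fused >= threshold

# Hypothetical probability maps from three bi-modality networks.
p_ct_pet = np.array([0.9, 0.6, 0.2, 0.4])   # CT-PET
p_ct_mri = np.array([0.7, 0.4, 0.1, 0.6])   # CT-MRI
p_pet_mri = np.array([0.8, 0.7, 0.3, 0.3])  # PET-MRI
print(ensemble_average([p_ct_pet, p_ct_mri, p_pet_mri]))
```

Averaging before thresholding lets models outvote one another per voxel, which is why the ensemble can beat each bi-modality network alone.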
Affiliation(s)
- Jintao Ren
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Danish Centre for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
- Department of Oncology, Aarhus University Hospital, Aarhus, Denmark
- Jesper Grau Eriksen
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Department of Experimental Clinical Oncology, Aarhus University Hospital, Aarhus, Denmark
- Jasper Nijkamp
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Danish Centre for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
- Stine Sofia Korreman
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Danish Centre for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
- Department of Oncology, Aarhus University Hospital, Aarhus, Denmark
32
Lorenzen EL, Kallehauge JF, Byskov CS, Dahlrot RH, Haslund CA, Guldberg TL, Lassen-Ramshad Y, Lukacova S, Muhic A, Witt Nyström P, Haldbo-Classen L, Bahij I, Larsen L, Weber B, Hansen CR. A national study on the inter-observer variability in the delineation of organs at risk in the brain. Acta Oncol 2021; 60:1548-1554. [PMID: 34629014] [DOI: 10.1080/0284186x.2021.1975813]
Abstract
BACKGROUND The Danish Neuro Oncology Group (DNOG) has established national consensus guidelines for the delineation of organ-at-risk (OAR) structures based on the published literature. This study was conducted to finalise these guidelines and to evaluate the inter-observer variability of the delineated OAR structures among expert observers. MATERIAL AND METHODS The DNOG delineation guidelines were formed by participants from all Danish centres that treat brain tumours with radiotherapy. In a two-day workshop, the guidelines were discussed and finalised based on a pilot study. Following this, the ten participants contoured the following OARs on T1-weighted gadolinium-enhanced MRI from 13 patients with brain tumours: optic tracts, optic nerves, chiasm, spinal cord, brainstem, pituitary gland, and hippocampus. The metrics used for comparison included the Dice similarity coefficient (Dice) and mean surface distance (MSD). RESULTS A total of 968 contours were delineated across the 13 patients, with on average eight (range six to nine) individual contour sets per patient. Good agreement was found across all structures, with a median MSD below 1 mm for most structures; the chiasm performed best with a median MSD of 0.45 mm. As expected, Dice was highly volume dependent: the brainstem (the largest structure) had the highest median Dice (0.89), whereas smaller structures such as the chiasm had a Dice of 0.71. CONCLUSION Except for the caudal definition of the spinal cord, the variation observed in the contours of brain OARs was generally low and consistent. Surface mapping revealed sub-regions of higher variance for some organs. The dataset is being prepared as a validation set for auto-segmentation algorithms within the Danish Comprehensive Cancer Centre - Radiotherapy and for potential collaborators.
Affiliation(s)
- Jesper Folsted Kallehauge
- Danish Centre for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Camilla Skinnerup Byskov
- Danish Centre for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
- Department of Oncology, Aarhus University Hospital, Aarhus, Denmark
- Rikke Hedegaard Dahlrot
- Danish Centre for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
- Department of Oncology, Odense University Hospital, Odense, Denmark
- Institute of Clinical Research, University of Southern Denmark, Odense, Denmark
- Slávka Lukacova
- Department of Oncology, Aarhus University Hospital, Aarhus, Denmark
- Aida Muhic
- Department of Oncology, Rigshospitalet, Copenhagen, Denmark
- Petra Witt Nyström
- Danish Centre for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
- Ihsan Bahij
- Danish Centre for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
- Lone Larsen
- Department of Oncology, Aalborg University Hospital, Aalborg, Denmark
- Britta Weber
- Danish Centre for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
- Christian Rønn Hansen
- Laboratory of Radiation Physics, Odense University Hospital, Odense, Denmark
- Danish Centre for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
- Institute of Clinical Research, University of Southern Denmark, Odense, Denmark
33
Fang Y, Wang J, Ou X, Ying H, Hu C, Zhang Z, Hu W. The impact of training sample size on deep learning-based organ auto-segmentation for head-and-neck patients. Phys Med Biol 2021; 66. [PMID: 34450599] [DOI: 10.1088/1361-6560/ac2206]
Abstract
To investigate the impact of training sample size on the performance of deep learning-based organ auto-segmentation for head-and-neck cancer patients, a total of 1160 patients with head-and-neck cancer who received radiotherapy were enrolled in this study. Planning CT images and region-of-interest (ROI) delineations, including the brainstem, spinal cord, eyes, lenses, optic nerves, temporal lobes, parotids, larynx, and body, were collected. An evaluation dataset of 200 patients was randomly selected, and the Dice similarity index was used to evaluate model performance. Eleven training datasets of different sample sizes were randomly selected from the remaining 960 patients to train auto-segmentation models. All models used the same data augmentation methods, network structure, and training hyperparameters. A model estimating performance as a function of training sample size, based on an inverse power law function, was established. Different performance patterns were found for different organs: six organs performed best with 800 training samples, and the others achieved their best performance with 600 or 400 samples. The benefit of enlarging the training dataset gradually decreased. Relative to their best performance, the optic nerves and lenses reached 95% of it at 200 samples, while the other organs reached 95% at 40 samples. Regarding the fit of the inverse power law function, the fitted root mean square errors of all ROIs were less than 0.03 (left eye: 0.024; others: <0.01), and the R-squared value of all ROIs except the body was greater than 0.5. Sample size thus has a significant impact on the performance of deep learning-based auto-segmentation, and the relationship between sample size and performance depends on the inherent characteristics of the organ. In some cases, relatively small samples can achieve satisfactory performance.
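A learning curve of this kind is commonly modeled as performance(n) = a - b·n^(-c), where a is the plateau as the sample size n grows. A sketch of the fitting step with SciPy; the (sample size, Dice) pairs and the resulting plateau below are invented for illustration, not the paper's values:

```python
import numpy as np
from scipy.optimize import curve_fit

def inv_power_law(n, a, b, c):
    """Learning curve: performance approaches plateau a as sample size n grows."""
    return a - b * np.power(n, -c)

# Hypothetical (training sample size, mean Dice) pairs for one organ.
n = np.array([25, 50, 100, 200, 400, 600, 800], dtype=float)
dice = np.array([0.70, 0.76, 0.80, 0.83, 0.85, 0.86, 0.865])

params, _ = curve_fit(inv_power_law, n, dice, p0=[0.9, 1.0, 0.5], maxfev=10000)
a, b, c = params
pred = inv_power_law(n, *params)
rmse = np.sqrt(np.mean((pred - dice) ** 2))
print(f"plateau Dice ~ {a:.3f}, fit RMSE = {rmse:.4f}")
```

The fitted plateau a gives an estimate of the best achievable score, and the curve can be read off to find where 95% of that plateau is reached, mirroring the paper's analysis.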
Affiliation(s)
- Yingtao Fang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, People's Republic of China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, People's Republic of China
- Shanghai Key Laboratory of Radiation Oncology, Shanghai, People's Republic of China
- Jiazhou Wang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, People's Republic of China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, People's Republic of China
- Shanghai Key Laboratory of Radiation Oncology, Shanghai, People's Republic of China
- Xiaomin Ou
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, People's Republic of China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, People's Republic of China
- Shanghai Key Laboratory of Radiation Oncology, Shanghai, People's Republic of China
- Hongmei Ying
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, People's Republic of China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, People's Republic of China
- Shanghai Key Laboratory of Radiation Oncology, Shanghai, People's Republic of China
- Chaosu Hu
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, People's Republic of China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, People's Republic of China
- Shanghai Key Laboratory of Radiation Oncology, Shanghai, People's Republic of China
- Zhen Zhang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, People's Republic of China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, People's Republic of China
- Shanghai Key Laboratory of Radiation Oncology, Shanghai, People's Republic of China
- Weigang Hu
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, People's Republic of China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, People's Republic of China
- Shanghai Key Laboratory of Radiation Oncology, Shanghai, People's Republic of China
34
Liu Z, Chen W, Guan H, Zhen H, Shen J, Liu X, Liu A, Li R, Geng J, You J, Wang W, Li Z, Zhang Y, Chen Y, Du J, Chen Q, Chen Y, Wang S, Zhang F, Qiu J. An Adversarial Deep-Learning-Based Model for Cervical Cancer CTV Segmentation With Multicenter Blinded Randomized Controlled Validation. Front Oncol 2021; 11:702270. [PMID: 34490103] [PMCID: PMC8417437] [DOI: 10.3389/fonc.2021.702270]
Abstract
Purpose To propose a novel deep-learning-based auto-segmentation model for CTV delineation in cervical cancer and to evaluate whether it performs comparably to manual delineation using a three-stage multicenter evaluation framework. Methods An adversarial deep-learning-based auto-segmentation model was trained and configured for cervical cancer CTV contouring using CT data from 237 patients. CT scans of an additional 20 consecutive patients with locally advanced cervical cancer were then collected for a three-stage multicenter randomized controlled evaluation involving nine oncologists from six medical centers. This evaluation system combines objective performance metrics, radiation oncologist assessment, and a head-to-head Turing imitation test, assessing accuracy and effectiveness step by step. The intra-observer consistency of each oncologist was also tested. Results In the stage-1 evaluation, the mean DSC and 95HD values of the proposed model were 0.88 and 3.46 mm, respectively. In stage 2, the oncologist grading evaluation showed that the majority of AI contours were comparable to the ground truth (GT) contours. The average CTV scores for AI and GT were 2.68 vs. 2.71 in week 0 (P = .206) and 2.62 vs. 2.63 in week 2 (P = .552), with no statistically significant differences. In stage 3, the Turing imitation test showed that the percentage of AI contours judged better than GT contours by ≥5 oncologists was 60.0% in week 0 and 42.5% in week 2. Most oncologists demonstrated good consistency between the two weeks (P > 0.05). Conclusions The tested AI model was demonstrated to be accurate and comparable to manual CTV segmentation in cervical cancer patients when assessed by our three-stage evaluation framework.
Affiliation(s)
- Zhikai Liu
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Wanqi Chen
- Department of Nuclear Medicine, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Hui Guan
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Hongnan Zhen
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Jing Shen
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Xia Liu
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- An Liu
- Department of Radiation Oncology, City of Hope National Medical Center, Duarte, CA, United States
- Richard Li
- Department of Radiation Oncology, City of Hope National Medical Center, Duarte, CA, United States
- Jianhao Geng
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Peking University Cancer Hospital and Institute, Beijing, China
- Jing You
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Peking University Cancer Hospital and Institute, Beijing, China
- Weihu Wang
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Peking University Cancer Hospital and Institute, Beijing, China
- Zhouyu Li
- Department of Radiation Oncology, Affiliated Cancer Hospital & Institute of Guangzhou Medical University, Guangzhou, China
- Yongfeng Zhang
- Department of Radiation Oncology, The Fourth Hospital of Jilin University (FAW General Hospital), Jilin, China
- Yuanyuan Chen
- Oncology Department, Cangzhou Hospital of Integrated Traditional Chinese and Western Medicine, Hebei, China
- Junjie Du
- Department of Radiation Oncology, Yangquan First People's Hospital, Shanxi, China
- Qi Chen
- Research and Development Department, MedMind Technology Co., Ltd., Beijing, China
- Yu Chen
- Research and Development Department, MedMind Technology Co., Ltd., Beijing, China
- Shaobin Wang
- Research and Development Department, MedMind Technology Co., Ltd., Beijing, China
- Fuquan Zhang
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Jie Qiu
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
35
Chen H, Ban D, Qi XS, Pan X, Qiang Y, Yang Q. A Hybrid Feature Selection based Brain Tumor Detection and Segmentation in Multiparametric Magnetic Resonance Imaging. Med Phys 2021; 48:6614-6626. [PMID: 34089524] [DOI: 10.1002/mp.15026]
Abstract
PURPOSE To develop a novel method based on feature selection, combining a convolutional neural network (CNN) and ensemble learning (EL), to achieve high accuracy and efficiency in glioma detection and segmentation using multiparametric MRI. METHODS We proposed an evolutionary feature-selection-based hybrid approach for glioma detection and segmentation on 4 MR sequences (T2-FLAIR, T1, T1Gd, and T2). First, we trained a lightweight CNN to detect glioma and mask the suspected region so that large batches of MRI images can be processed. Second, we employed a differential evolution algorithm to search a feature space, composed of 416-dimensional radiomics features extracted from the 4 MRI sequences and 128-dimensional high-order features extracted by the CNN, to generate an optimal feature combination for pixel classification. Finally, we trained an EL classifier using the optimal feature combination to segment the whole tumor (WT) and its subregions, including non-enhancing tumor (NET), peritumoral edema (ED), and enhancing tumor (ET), within the suspected region. Experiments were carried out on 300 glioma patients from the BraTS2019 dataset using 5-fold cross-validation, and the model was independently validated on the remaining 35 patients from the same database. RESULTS The approach achieved a detection accuracy of 98.8% using the four MRI sequences. The Dice coefficients (and standard deviations) were 0.852±0.057, 0.844±0.046, and 0.799±0.053 for segmentation of the WT (NET+ET+ED), tumor core (NET+ET), and ET, respectively. The sensitivities were 0.873±0.074, 0.863±0.072, and 0.852±0.082, and the specificities were 0.994±0.005, 0.994±0.005, and 0.995±0.004 for the WT, tumor core, and ET, respectively. Performance and computation times were compared with state-of-the-art approaches; our approach yielded a better overall performance, with an average processing time of 139.5 s per set of four MRI sequences. CONCLUSIONS We demonstrated a robust and computationally cost-effective hybrid segmentation approach for glioma and its subregions on multi-sequence MR images. The proposed approach can be used for automated target delineation in glioma patients.
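Differential evolution over a binary feature mask can be sketched with SciPy. Everything below is an invented stand-in for the paper's 544-dimensional (416 radiomics + 128 CNN) feature space and EL classifier: a 10-feature synthetic dataset, a nearest-centroid classifier as the fitness model, and an arbitrary 0.01-per-feature sparsity penalty:

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
# Synthetic data: 10 features, of which only the first 3 carry class information.
X = rng.normal(size=(120, 10))
y = (X[:, 0] + X[:, 1] - X[:, 2] > 0).astype(int)
X_train, y_train = X[:80], y[:80]
X_val, y_val = X[80:], y[80:]

def nearest_centroid_acc(Xtr, ytr, Xva, yva):
    """Validation accuracy of a nearest-centroid classifier (stand-in for EL)."""
    c0 = Xtr[ytr == 0].mean(axis=0)
    c1 = Xtr[ytr == 1].mean(axis=0)
    pred = (np.linalg.norm(Xva - c1, axis=1) <
            np.linalg.norm(Xva - c0, axis=1)).astype(int)
    return (pred == yva).mean()

def fitness(w):
    """Continuous genome -> binary feature mask; score = error + sparsity penalty."""
    mask = w > 0.5
    if not mask.any():
        return 1.0
    acc = nearest_centroid_acc(X_train[:, mask], y_train, X_val[:, mask], y_val)
    return (1.0 - acc) + 0.01 * mask.sum()

result = differential_evolution(fitness, bounds=[(0, 1)] * 10,
                                maxiter=30, popsize=10, seed=1, polish=False)
selected = np.flatnonzero(result.x > 0.5)
print("selected features:", selected, "fitness:", round(result.fun, 3))
```

Thresholding each gene at 0.5 is a common way to run a continuous optimizer over a discrete subset-selection problem; the penalty term trades a small accuracy loss for a sparser feature combination.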
Affiliation(s)
- Hao Chen
- School of Computer Science and Technology, Xi'an University of Posts and Telecommunications, Xi'an 710121, China
- Shaanxi Key Laboratory of Network Data Analysis and Intelligent Processing, University of Posts and Telecommunications, Xi'an, 710121, China
- Duo Ban
- School of Computer Science and Technology, Xi'an University of Posts and Telecommunications, Xi'an 710121, China
- X Sharon Qi
- Department of Radiation Oncology, University of California Los Angeles, Los Angeles, CA, 90095, United States
- Xiaoying Pan
- School of Computer Science and Technology, Xi'an University of Posts and Telecommunications, Xi'an 710121, China
- Shaanxi Key Laboratory of Network Data Analysis and Intelligent Processing, University of Posts and Telecommunications, Xi'an, 710121, China
- First Affiliated Hospital of Xi'an Jiaotong University, Xi'an 710061, China
- Yongqian Qiang
- First Affiliated Hospital of Xi'an Jiaotong University, Xi'an 710061, China
- Qing Yang
- School of Sport and Health Sciences, Xi'an Physical Education University, Xi'an, 710068, China
36
Friedrich F, Hörner-Rieber J, Renkamp CK, Klüter S, Bachert P, Ladd ME, Knowles BR. Stability of conventional and machine learning-based tumor auto-segmentation techniques using undersampled dynamic radial bSSFP acquisitions on a 0.35 T hybrid MR-linac system. Med Phys 2021; 48:587-596. [PMID: 33319394] [DOI: 10.1002/mp.14659]
Abstract
PURPOSE Hybrid MRI-linear accelerator systems (MR-linacs) allow MR images with high soft-tissue contrast to be incorporated into the radiation therapy procedure before, during, or after irradiation. This allows not only optimization of the treatment planning but also real-time monitoring of the tumor position using cine MRI, from which intrafractional motion can be compensated. Fast imaging and accurate tumor tracking are crucial for effective compensation. This study investigates cine MRI with a radial acquisition scheme on a low-field MR-linac to accelerate the acquisition rate, and evaluates the effect on tracking accuracy. METHODS An MR sequence using tiny golden-angle radial k-space sampling was developed and applied to cine imaging of patients with liver tumors on a 0.35 T MR-linac. Tumor tracking from the cine images was assessed for accuracy and stability at increasing k-space undersampling factors. Tracking was performed with two auto-segmentation algorithms: a deformable image registration B-spline similar to that implemented on the MR-linac, and a convolutional neural network approach known as U-Net. RESULTS Radial imaging allows increased temporal resolution with reliable tumor tracking, although tracking robustness decreases as temporal resolution increases. Acquisition-based artifacts can be reduced by shrinking the angle increment using tiny golden angles. The U-Net algorithm showed superior auto-segmentation metrics compared with the B-spline approach. U-Net was able to track two well-defined tumors, imaged with just 30 spokes per image (10.6 frames per second), with an average Dice coefficient ≥ 83%, Hausdorff distance ≤ 1.4 pixels, and mean contour distance ≤ 0.5 pixels. CONCLUSIONS Radial acquisitions are commonplace in dynamic imaging; in MR-guided radiotherapy, however, robust tumor tracking is also required. This study demonstrates the in vivo feasibility of tumor tracking from radially acquired images on a low-field MR-linac. Radial imaging allows decreased image acquisition times while maintaining robust tracking. The U-Net algorithm tracks tumors in images with undersampling artifacts more accurately than a conventional deformable B-spline algorithm and is a promising tool for tracking in MR-guided radiation therapy.
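Tiny golden-angle sampling conventionally follows the Wundrak formula psi_N = 180° / (tau + N - 1), where tau is the golden ratio and N = 1 recovers the classic golden angle. The abstract does not state which N was used, so N = 7 below is only an illustrative choice:

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # golden ratio

def tiny_golden_angle(n):
    """N-th tiny golden angle in degrees (N=1 gives the classic golden angle)."""
    return 180.0 / (PHI + n - 1)

def spoke_angles(num_spokes, n=7):
    """Profile angles (degrees, mod 180) for a tiny-golden-angle radial scheme."""
    psi = tiny_golden_angle(n)
    return [(i * psi) % 180.0 for i in range(num_spokes)]

print(round(tiny_golden_angle(1), 2))  # → 111.25, the classic golden angle
print([round(a, 1) for a in spoke_angles(5)])
```

Any consecutive window of spokes (e.g. the 30 spokes per image mentioned above) then covers k-space nearly uniformly, which is what makes sliding-window reconstruction at high undersampling practical.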
Affiliation(s)
- Florian Friedrich
- Division of Medical Physics in Radiology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, Heidelberg, 69120, Germany
- Faculty of Physics and Astronomy, University of Heidelberg, Im Neuenheimer Feld 226, Heidelberg, 69120, Germany
- Juliane Hörner-Rieber
- Department of Radiation Oncology, University Hospital of Heidelberg, Im Neuenheimer Feld 400, Heidelberg, 69120, Germany
- Heidelberg Institute of Radiation Oncology (HIRO), Im Neuenheimer Feld 400, Heidelberg, 69120, Germany
- National Center for Radiation Research in Oncology (NCRO), Im Neuenheimer Feld 400, Heidelberg, 69120, Germany
- Clinical Cooperation Unit Radiation Oncology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, Heidelberg, 69120, Germany
- C Katharina Renkamp
- Department of Radiation Oncology, University Hospital of Heidelberg, Im Neuenheimer Feld 400, Heidelberg, 69120, Germany
- Heidelberg Institute of Radiation Oncology (HIRO), Im Neuenheimer Feld 400, Heidelberg, 69120, Germany
- National Center for Radiation Research in Oncology (NCRO), Im Neuenheimer Feld 400, Heidelberg, 69120, Germany
- Sebastian Klüter
- Department of Radiation Oncology, University Hospital of Heidelberg, Im Neuenheimer Feld 400, Heidelberg, 69120, Germany
- Heidelberg Institute of Radiation Oncology (HIRO), Im Neuenheimer Feld 400, Heidelberg, 69120, Germany
- National Center for Radiation Research in Oncology (NCRO), Im Neuenheimer Feld 400, Heidelberg, 69120, Germany
- Peter Bachert
- Division of Medical Physics in Radiology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, Heidelberg, 69120, Germany
- Faculty of Physics and Astronomy, University of Heidelberg, Im Neuenheimer Feld 226, Heidelberg, 69120, Germany
- Mark E Ladd
- Division of Medical Physics in Radiology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, Heidelberg, 69120, Germany
- Faculty of Physics and Astronomy, University of Heidelberg, Im Neuenheimer Feld 226, Heidelberg, 69120, Germany
- Faculty of Medicine, University of Heidelberg, Im Neuenheimer Feld 672, Heidelberg, 69120, Germany
- Benjamin R Knowles
- Division of Medical Physics in Radiology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, Heidelberg, 69120, Germany
37. Li Q, Li S, He Z, Guan H, Chen R, Xu Y, Wang T, Qi S, Mei J, Wang W. DeepRetina: Layer Segmentation of Retina in OCT Images Using Deep Learning. Transl Vis Sci Technol 2020; 9:61. [PMID: 33329940] [PMCID: PMC7726589] [DOI: 10.1167/tvst.9.2.61]
Abstract
Purpose To automate the segmentation of retinal layers, we propose DeepRetina, a method based on deep neural networks. Methods DeepRetina uses an improved Xception65 network to extract and learn the characteristics of retinal layers. The Xception65-extracted feature maps are fed into an atrous spatial pyramid pooling module to obtain multiscale feature information. This information is then recovered in the encoder-decoder module to capture clearer retinal layer boundaries, thus completing automatic retinal layer segmentation of retinal optical coherence tomography (OCT) images. Results We validated the method on a retinal OCT image database containing 280 volumes (40 B-scans per volume). The method exhibits excellent performance in terms of mean intersection over union (IoU) and sensitivity (Se), which reached 90.41% and 92.15%, respectively. The IoU and Se values of the nerve fiber layer, ganglion cell layer, inner plexiform layer, inner nuclear layer, outer plexiform layer, outer nuclear layer, outer limiting membrane, photoreceptor inner segment, photoreceptor outer segment, and pigment epithelium layer were all above 88%. Conclusions DeepRetina can automate the segmentation of retinal layers and has great potential for the early diagnosis of fundus retinal diseases. Our approach may also provide a segmentation model framework for other types of tissues and cells in clinical practice. Translational Relevance Automating the segmentation of retinal layers can help effectively diagnose and monitor clinical retinal diseases. It also requires only a small amount of manual segmentation, significantly improving work efficiency.
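The two headline metrics here, intersection over union and sensitivity, reduce to simple counts over a label map; a minimal sketch in plain Python (toy label arrays, not the DeepRetina data):

```python
def iou_and_sensitivity(pred, truth, label):
    """Per-class intersection-over-union and sensitivity (recall)."""
    tp = sum(1 for p, t in zip(pred, truth) if p == label and t == label)
    fp = sum(1 for p, t in zip(pred, truth) if p == label and t != label)
    fn = sum(1 for p, t in zip(pred, truth) if p != label and t == label)
    union = tp + fp + fn
    iou = tp / union if union else 1.0          # empty class: perfect by convention
    se = tp / (tp + fn) if (tp + fn) else 1.0   # sensitivity = TP / (TP + FN)
    return iou, se

# Toy example: two flattened label maps (0 = background, 1 = NFL, 2 = GCL)
pred  = [0, 1, 1, 1, 2, 2, 0, 0]
truth = [0, 1, 1, 2, 2, 2, 0, 0]
iou, se = iou_and_sensitivity(pred, truth, 2)
print(iou, se)  # 2/3 IoU and 2/3 sensitivity for class 2
```

In practice the same counts are accumulated per retinal layer over all B-scans and then averaged to give the mean IoU reported above.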
Affiliation(s)
- Qiaoliang Li
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Department of Biomedical Engineering, School of Medicine, Shenzhen University, Xueyuan Avenue, Nanshan District, Shenzhen, Guangdong Province, China
- Shiyu Li
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Department of Biomedical Engineering, School of Medicine, Shenzhen University, Xueyuan Avenue, Nanshan District, Shenzhen, Guangdong Province, China
- Zhuoying He
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Department of Biomedical Engineering, School of Medicine, Shenzhen University, Xueyuan Avenue, Nanshan District, Shenzhen, Guangdong Province, China
- Huimin Guan
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Department of Biomedical Engineering, School of Medicine, Shenzhen University, Xueyuan Avenue, Nanshan District, Shenzhen, Guangdong Province, China
- Runmin Chen
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Department of Biomedical Engineering, School of Medicine, Shenzhen University, Xueyuan Avenue, Nanshan District, Shenzhen, Guangdong Province, China
- Ying Xu
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Department of Biomedical Engineering, School of Medicine, Shenzhen University, Xueyuan Avenue, Nanshan District, Shenzhen, Guangdong Province, China
- Tao Wang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Department of Biomedical Engineering, School of Medicine, Shenzhen University, Xueyuan Avenue, Nanshan District, Shenzhen, Guangdong Province, China
- Suwen Qi
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Department of Biomedical Engineering, School of Medicine, Shenzhen University, Xueyuan Avenue, Nanshan District, Shenzhen, Guangdong Province, China
- Jun Mei
- Medical Imaging Department of Shenzhen Eye Hospital Affiliated to Jinan University, Shenzhen, Guangdong Province, China
- Wei Wang
- Department of Pathology, Shenzhen University General Hospital, Shenzhen, Guangdong Province, China
38. Wang Z, Chang Y, Peng Z, Lv Y, Shi W, Wang F, Pei X, Xu XG. Evaluation of deep learning-based auto-segmentation algorithms for delineating clinical target volume and organs at risk involving data for 125 cervical cancer patients. J Appl Clin Med Phys 2020; 21:272-279. [PMID: 33238060] [PMCID: PMC7769393] [DOI: 10.1002/acm2.13097]
Abstract
Objective To compare the accuracy of a deep learning-based auto-segmentation model with that of manual contouring by one medical resident, where both tried to mimic the delineation "habits" of the same senior clinical physician. Methods This study included 125 cervical cancer patients whose clinical target volumes (CTVs) and organs at risk (OARs) were delineated by the same senior physician. Of these 125 cases, 100 were used for model training and the remaining 25 for model testing. In addition, the medical resident, instructed by the senior physician for approximately 8 months, delineated the CTVs and OARs for the testing cases. The Dice similarity coefficient (DSC) and the Hausdorff distance (HD) were used to evaluate the delineation accuracy for the CTV, bladder, rectum, small intestine, femoral-head-left, and femoral-head-right. Results The DSC values of the auto-segmentation model and manual contouring by the resident were, respectively, 0.86 and 0.83 for the CTV (P < 0.05), 0.91 and 0.91 for the bladder (P > 0.05), 0.88 and 0.84 for the femoral-head-right (P < 0.05), 0.88 and 0.84 for the femoral-head-left (P < 0.05), 0.86 and 0.81 for the small intestine (P < 0.05), and 0.81 and 0.84 for the rectum (P > 0.05). The HD (mm) values were, respectively, 14.84 and 18.37 for the CTV (P < 0.05), 7.82 and 7.63 for the bladder (P > 0.05), 6.18 and 6.75 for the femoral-head-right (P > 0.05), 6.17 and 6.31 for the femoral-head-left (P > 0.05), 22.21 and 26.70 for the small intestine (P > 0.05), and 7.04 and 6.13 for the rectum (P > 0.05). The auto-segmentation model took approximately 2 min to delineate the CTV and OARs, while the resident took approximately 90 min to complete the same task. Conclusion The auto-segmentation model was as accurate as the medical resident but far more efficient in this study. Furthermore, the auto-segmentation approach offers the additional perceivable advantages of being consistent and ever improving when compared with manual approaches.
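The DSC and HD figures quoted above can be computed in a few lines; a minimal sketch over toy voxel sets and contour points (not the study's data or code):

```python
import math

def dice(a, b):
    """Dice similarity coefficient between two binary masks given as voxel-index sets."""
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b))

def hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between two point sets."""
    def directed(xs, ys):
        # worst-case distance from any point in xs to its nearest point in ys
        return max(min(math.dist(x, y) for y in ys) for x in xs)
    return max(directed(pts_a, pts_b), directed(pts_b, pts_a))

# Toy 2-D example: auto-segmented vs. manually contoured voxels
auto   = {(0, 0), (0, 1), (1, 0), (1, 1)}
manual = {(0, 0), (0, 1), (1, 0), (2, 0)}
print(dice(auto, manual))                   # 0.75
print(hausdorff(list(auto), list(manual)))  # 1.0
```

Real evaluations compute HD on contour surfaces in millimeters (scaling voxel indices by the CT spacing), often using the brute-force form above only for small point sets.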
Affiliation(s)
- Zhi Wang
- Center of Radiological Medical Physics, University of Science and Technology of China, Hefei, China; Department of Radiation Oncology, First Affiliated Hospital of Anhui Medical University, Hefei, China
- Yankui Chang
- Center of Radiological Medical Physics, University of Science and Technology of China, Hefei, China
- Zhao Peng
- Center of Radiological Medical Physics, University of Science and Technology of China, Hefei, China
- Yin Lv
- Department of Radiation Oncology, First Affiliated Hospital of Anhui Medical University, Hefei, China
- Weijiong Shi
- Department of Radiation Oncology, First Affiliated Hospital of Anhui Medical University, Hefei, China
- Fan Wang
- Department of Radiation Oncology, First Affiliated Hospital of Anhui Medical University, Hefei, China
- Xi Pei
- Center of Radiological Medical Physics, University of Science and Technology of China, Hefei, China; Anhui Wisdom Technology Co., Ltd., Hefei, Anhui, China
- X George Xu
- Center of Radiological Medical Physics, University of Science and Technology of China, Hefei, China
39. Fu Y, Yu H. [Application and Development Trend of Medical Image Automatic Segmentation Technology in Radiation Therapy]. Zhongguo Yi Liao Qi Xie Za Zhi 2020; 44:420-424. [PMID: 33047565] [DOI: 10.3969/j.issn.1671-7104.2020.05.009]
Abstract
The development of medical image segmentation technology is briefly reviewed. The applications of auto-segmentation of organs at risk and target volumes in radiotherapy, based on atlas methods and on deep learning respectively, are introduced in detail. The development direction and product models for general auto-contouring tools or systems built on solid clinical data are then discussed.
Affiliation(s)
- Yuchuan Fu
- Department of Radiation Oncology, West China Hospital, Sichuan University, Chengdu, 610041
- Hang Yu
- Department of Radiation Oncology, West China Hospital, Sichuan University, Chengdu, 610041
40. Vrtovec T, Močnik D, Strojan P, Pernuš F, Ibragimov B. Auto-segmentation of organs at risk for head and neck radiotherapy planning: From atlas-based to deep learning methods. Med Phys 2020; 47:e929-e950. [PMID: 32510603] [DOI: 10.1002/mp.14320]
Abstract
Radiotherapy (RT) is one of the basic treatment modalities for cancer of the head and neck (H&N), which requires a precise spatial description of the target volumes and organs at risk (OARs) to deliver a highly conformal radiation dose to the tumor cells while sparing healthy tissues. For this purpose, target volumes and OARs have to be delineated and segmented from medical images. As manual delineation is a tedious, time-consuming task subject to intra/interobserver variability, computerized auto-segmentation has been developed as an alternative. The field of medical imaging and RT planning has seen increased interest in the past decade, with emerging trends that shifted H&N OAR auto-segmentation from atlas-based to deep learning-based approaches. In this review, we systematically analyzed 78 relevant publications on auto-segmentation of OARs in the H&N region from 2008 to date and provide critical discussions and recommendations from various perspectives: image modality - both computed tomography and magnetic resonance imaging are being exploited, but the potential of the latter should be explored more in the future; OAR - the spinal cord, brainstem, and major salivary glands are the most studied OARs, but additional experiments should be conducted for several less studied soft tissue structures; image database - several image databases with corresponding ground truth are currently available for methodology evaluation but should be augmented with data from multiple observers and multiple institutions; methodology - current methods have shifted from atlas-based to deep learning auto-segmentation, which is expected to become even more sophisticated; ground truth - delineation guidelines should be followed, and participation of multiple experts from multiple institutions is recommended; performance metrics - the Dice coefficient, as the standard volumetric overlap metric, should be accompanied by at least one distance metric and combined with clinical acceptability scores and risk assessments; segmentation performance - the best-performing methods achieve clinically acceptable auto-segmentation for several OARs; however, the dosimetric impact should also be studied to provide clinically relevant endpoints for RT planning.
Affiliation(s)
- Tomaž Vrtovec
- Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia
- Domen Močnik
- Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia
- Primož Strojan
- Institute of Oncology Ljubljana, Zaloška cesta 2, Ljubljana, SI-1000, Slovenia
- Franjo Pernuš
- Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia
- Bulat Ibragimov
- Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia; Department of Computer Science, University of Copenhagen, Universitetsparken 1, Copenhagen, D-2100, Denmark
41. Xue X, Qin N, Hao X, Shi J, Wu A, An H, Zhang H, Wu A, Yang Y. Sequential and Iterative Auto-Segmentation of High-Risk Clinical Target Volume for Radiotherapy of Nasopharyngeal Carcinoma in Planning CT Images. Front Oncol 2020; 10:1134. [PMID: 32793483] [PMCID: PMC7390915] [DOI: 10.3389/fonc.2020.01134]
Abstract
Background: Accurate segmentation of tumor targets is critical for maximizing tumor control and minimizing normal tissue toxicity. We proposed a sequential and iterative U-Net (SI-Net) deep learning method to auto-segment the high-risk primary tumor clinical target volume (CTVp1) for treatment planning of nasopharyngeal carcinoma (NPC) radiotherapy. Methods: The SI-Net is a variant of the U-Net architecture. The input of SI-Net includes one CT image, the CTVp1 contour on this image, and the next CT image; the output is the predicted CTVp1 contour on the next CT image. We designed the SI-Net so that the left side learns the volumetric features and the right side localizes the contour on the next image. Two prediction directions, one from inferior to superior (forward direction) and the other from superior to inferior (backward direction), were tested. Performance was compared between the SI-Net and the U-Net using the Dice similarity coefficient (DSC), Jaccard index (JI), average surface distance (ASD), and Hausdorff distance (HD) metrics. Results: The DSC and JI values from the forward direction SI-Net model were 5% and 6% higher than those from the U-Net model (0.84 ± 0.04 vs. 0.80 ± 0.05 and 0.74 ± 0.05 vs. 0.69 ± 0.05, p < 0.001). The smaller ASD and HD values also indicated a better performance (2.8 ± 1.0 vs. 3.3 ± 1.0 mm and 8.7 ± 2.5 vs. 9.7 ± 2.7 mm, p < 0.01) for the SI-Net model. For the backward direction SI-Net model, the DSC and JI values were still better than those from the U-Net model (p < 0.01), although there were no significant differences in ASD and HD. Conclusions: The SI-Net model preserved the continuity between adjacent images and thus improved the segmentation accuracy compared with the conventional U-Net model. This model has the potential to improve the efficiency and consistency of CTVp1 contouring for NPC patients.
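A note on the two overlap metrics reported: per case, the Dice similarity coefficient and the Jaccard index carry the same information, related by JI = DSC / (2 − DSC); the averaged values in an abstract need not obey this exactly, since the identity holds per case, not for means. A small self-contained check on toy voxel sets:

```python
def dice_jaccard(a, b):
    """Dice similarity coefficient and Jaccard index for two voxel-index sets."""
    a, b = set(a), set(b)
    inter, union = len(a & b), len(a | b)
    dsc = 2 * inter / (len(a) + len(b))
    ji = inter / union
    return dsc, ji

# Hypothetical voxel sets for a predicted and a ground-truth contour
pred = {1, 2, 3, 4, 5, 6}
true = {3, 4, 5, 6, 7, 8}
dsc, ji = dice_jaccard(pred, true)
print(round(dsc, 3), round(ji, 3))  # 0.667 0.5
# Per case the two are related by JI = DSC / (2 - DSC):
assert abs(ji - dsc / (2 - dsc)) < 1e-12
```

Because of this one-to-one relationship, reporting both metrics per case adds no information; they become complementary only when averaged over a cohort.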
Affiliation(s)
- Xudong Xue
- Department of Radiation Oncology, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
- Nannan Qin
- School of Biomedical Engineering, Anhui Medical University, Hefei, China
- Xiaoyu Hao
- School of Computer Science and Technology, University of Science and Technology of China, Hefei, China
- Jun Shi
- School of Computer Science and Technology, University of Science and Technology of China, Hefei, China
- Ailin Wu
- Department of Radiation Oncology, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
- Hong An
- School of Computer Science and Technology, University of Science and Technology of China, Hefei, China
- Hongyan Zhang
- Department of Radiation Oncology, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
- Aidong Wu
- Department of Radiation Oncology, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
- Yidong Yang
- Department of Radiation Oncology, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China; School of Physical Sciences, University of Science and Technology of China, Hefei, China
42. Takagi H, Kadoya N, Kajikawa T, Tanaka S, Takayama Y, Chiba T, Ito K, Dobashi S, Takeda K, Jingu K. Multi-atlas-based auto-segmentation for prostatic urethra using novel prediction of deformable image registration accuracy. Med Phys 2020; 47:3023-3031. [PMID: 32201958] [DOI: 10.1002/mp.14154]
Abstract
PURPOSE Accurate identification of the prostatic urethra and bladder can help determine dosing and evaluate urinary toxicity during intensity-modulated radiation therapy (IMRT) planning in patients with localized prostate cancer. However, it is challenging to locate the prostatic urethra in planning computed tomography (pCT). In the present study, we developed a multi-atlas-based auto-segmentation method for prostatic urethra identification using deformable image registration (DIR) accuracy prediction with machine learning (ML) and assessed its feasibility. METHODS We examined 120 patients with prostate cancer treated with IMRT. All patients underwent temporary urinary catheter placement for identification and contouring of the prostatic urethra in pCT images (ground truth). Our method comprises the following three steps: (a) select four atlas datasets from the atlas datasets using the DIR accuracy prediction model, (b) deform them by structure-based DIR, and (c) propagate the urethra contour using the displacement vector field calculated by the DIR. In (a), to identify suitable datasets, we used a trained support vector machine regression (SVR) model and five feature descriptors (e.g., prostate volume) to increase DIR accuracy. This method was trained/validated on 100 patients, and performance was evaluated on an independent test set of 20 patients. Fivefold cross-validation was used to optimize the hyperparameters of the DIR accuracy prediction model. We assessed the accuracy of our method by comparing it with those of two others: Acosta's method-based patient selection (previous study, by Acosta et al.) and Waterman's method (which defines the prostatic urethra based on the center of the prostate, by Waterman et al.). We used the centerline distance (CLD) between the ground truth and the predicted prostatic urethra as the evaluation index. RESULTS The CLD in the entire prostatic urethra was 2.09 ± 0.89 mm (our proposed method), 2.77 ± 0.99 mm (Acosta et al., P = 0.022), and 3.47 ± 1.19 mm (Waterman et al., P < 0.001); our proposed method showed the highest accuracy. In the segmented analysis, the CLD in the top 1/3 segment improved markedly over Waterman et al. and slightly over Acosta et al., with results of 2.49 ± 1.78 mm (our proposed method), 2.95 ± 1.75 mm (Acosta et al., P = 0.42), and 5.76 ± 3.09 mm (Waterman et al., P < 0.001). CONCLUSIONS We developed a DIR accuracy prediction model-based multi-atlas auto-segmentation method for prostatic urethra identification. Our method identified the prostatic urethra with a mean error of 2.09 mm, likely owing to the combined effects of employing the SVR model in patient selection, the modified atlas dataset characteristics, and the DIR algorithm. Our method has potential utility in prostate cancer IMRT and can replace the use of temporary indwelling urinary catheters.
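The abstract does not spell out how CLD is computed; one plausible reading, the mean closest-point distance between two sampled centerlines, symmetrized over both directions, can be sketched as follows (hypothetical centerline samples, not the study's definition or data):

```python
import math

def centerline_distance(line_a, line_b):
    """Mean closest-point distance from each sample of line_a to line_b,
    symmetrized by averaging both directions (one plausible CLD definition)."""
    def directed(xs, ys):
        return sum(min(math.dist(x, y) for y in ys) for x in xs) / len(xs)
    return 0.5 * (directed(line_a, line_b) + directed(line_b, line_a))

# Two hypothetical urethra centerlines sampled at 1 mm z-spacing, offset 2 mm in x
truth = [(0.0, 0.0, z) for z in range(10)]
pred  = [(2.0, 0.0, z) for z in range(10)]
print(centerline_distance(truth, pred))  # 2.0
```

Other variants (e.g., matching points slice by slice at the same z) are equally plausible; the choice matters mainly where the two centerlines differ in length.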
Affiliation(s)
- Hisamichi Takagi
- Course of Radiological Technology, Health Sciences, Tohoku University Graduate School of Medicine, Sendai, Miyagi, 980-8575, Japan
- Noriyuki Kadoya
- Department of Radiation Oncology, Tohoku University Graduate School of Medicine, Sendai, Miyagi, 980-8574, Japan
- Tomohiro Kajikawa
- Department of Radiation Oncology, Tohoku University Graduate School of Medicine, Sendai, Miyagi, 980-8574, Japan
- Shohei Tanaka
- Department of Radiation Oncology, Tohoku University Graduate School of Medicine, Sendai, Miyagi, 980-8574, Japan
- Yoshiki Takayama
- Department of Radiation Oncology, Tohoku University Graduate School of Medicine, Sendai, Miyagi, 980-8574, Japan
- Takahito Chiba
- Department of Radiation Oncology, Tohoku University Graduate School of Medicine, Sendai, Miyagi, 980-8574, Japan
- Kengo Ito
- Department of Radiation Oncology, Tohoku University Graduate School of Medicine, Sendai, Miyagi, 980-8574, Japan
- Suguru Dobashi
- Course of Radiological Technology, Health Sciences, Tohoku University Graduate School of Medicine, Sendai, Miyagi, 980-8575, Japan
- Ken Takeda
- Course of Radiological Technology, Health Sciences, Tohoku University Graduate School of Medicine, Sendai, Miyagi, 980-8575, Japan
- Keiichi Jingu
- Department of Radiation Oncology, Tohoku University Graduate School of Medicine, Sendai, Miyagi, 980-8574, Japan
43. Guo B, Shah C, Xia P. Automated planning of whole breast irradiation using hybrid IMRT improves efficiency and quality. J Appl Clin Med Phys 2019; 20:87-96. [PMID: 31743598] [PMCID: PMC6909113] [DOI: 10.1002/acm2.12767]
Abstract
Purpose To develop an automated workflow for whole breast irradiation treatment planning using a hybrid intensity-modulated radiation therapy (IMRT) approach and to demonstrate that this workflow can improve planning quality and efficiency compared to manual planning. Methods The auto-planning framework was built on scripting with the MIM and Pinnacle systems. MIM workflows were developed to automatically segment normal structures and targets, identify landmarks for beam placement, select beam energies, and set beam configurations. Pinnacle scripts were generated from the MIM workflow to create hybrid IMRT plans automatically. Each hybrid IMRT plan included two prescriptions: a three-dimensional (3D) prescription consisting of two open tangent beams, and an IMRT prescription consisting of two step-and-shoot IMRT beams. The 3D prescription delivered a full prescription dose to the maximum dose point, and the IMRT prescription was optimized to deliver a uniform dose to the entire breast while sparing dose to the normal structures. For 30 patients, the auto plans were compared with clinically accepted manual plans using the paired-sample t-test. Results The auto-planning process took approximately 8 min to complete. The mean Dice coefficients between auto-segmentation and manual contours were 0.98, 0.94, and 0.88 for the lungs, heart, and PTVeval_Breast, respectively. The MUs of the auto plans were on average 13% higher than those of the manual plans. Auto planning improved plan quality significantly: the percentage volume receiving 95% of the prescription dose (V95%) of the PTVeval_Breast increased from 91.5% to 93.2% (P = 0.001), V105% of the PTVeval_Breast decreased from 7.2% to 1.2% (P = 0.013), V20Gy of the ipsilateral lung decreased from 13.1% to 10.4% (P = 0.001), and mean heart dose for left-sided breast patients decreased from 1.2 Gy to 0.9 Gy (P < 0.001). Conclusion The automated treatment planning process improves both planning efficiency and plan quality.
Affiliation(s)
- Bingqi Guo
- Department of Radiation Oncology, Taussig Cancer Institute, Cleveland Clinic, Cleveland, OH, USA
- Chirag Shah
- Department of Radiation Oncology, Taussig Cancer Institute, Cleveland Clinic, Cleveland, OH, USA
- Ping Xia
- Department of Radiation Oncology, Taussig Cancer Institute, Cleveland Clinic, Cleveland, OH, USA
44. Miller C, Mittelstaedt D, Black N, Klahr P, Nejad-Davarani S, Schulz H, Goshen L, Han X, Ghanem AI, Morris ED, Glide-Hurst C. Impact of CT reconstruction algorithm on auto-segmentation performance. J Appl Clin Med Phys 2019; 20:95-103. [PMID: 31538718] [PMCID: PMC6753741] [DOI: 10.1002/acm2.12710]
Abstract
Model-based iterative reconstruction (MBIR) reduces CT imaging dose while maintaining image quality. However, MBIR reduces noise while preserving edges, which may impact intensity-based tasks such as auto-segmentation. This work evaluates the sensitivity of an auto-contouring prostate atlas across multiple MBIR reconstruction protocols and benchmarks the results against filtered back projection (FBP). Images were created from raw projection data for 11 prostate cancer cases using FBP and nine different MBIR reconstructions (3 protocols/3 noise reduction levels), yielding 10 reconstructions/patient. Five bony structures, bladder, rectum, prostate, and seminal vesicles (SVs) were segmented using an auto-segmentation pipeline that renders 3D binary masks for analysis. Performance was evaluated by volume percent difference (VPD) and Dice similarity coefficient (DSC), using FBP as the gold standard. Nonparametric Friedman tests plus post hoc all-pairwise comparisons were employed to test for significant differences (P < 0.05) for soft tissue organs and protocol/level combinations. A physician performed qualitative grading of 396 MBIR contours across the prostate, bladder, SVs, and rectum in comparison to FBP using a six-point scale. MBIR contours agreed with FBP for bony anatomy (DSC ≥ 0.98), bladder (DSC ≥ 0.94, VPD < 8.5%), and prostate (DSC = 0.94 ± 0.03, VPD = 4.50 ± 4.77%, range: 0.07-26.39%). Increased variability was observed for the rectum (VPD = 7.50 ± 7.56% and DSC = 0.90 ± 0.08) and SVs (VPD and DSC of 8.23 ± 9.86%, range 0.00-35.80%, and 0.87 ± 0.11, respectively). Over all protocol/level comparisons, a significant difference was observed for the prostate VPD between BSPL1 and BSTL2 (adjusted P-value = 0.039). Nevertheless, 300 of 396 (75.8%) of the four soft tissue structures using MBIR were graded as equivalent or better than FBP, suggesting that MBIR offered potential improvements in auto-segmentation performance when compared to FBP. Future work may involve tuning organ-specific MBIR parameters to further improve auto-segmentation performance.
Affiliation(s)
- Claudia Miller
- Department of Radiation Oncology, Henry Ford Cancer Institute, Detroit, MI, USA; Wayne State University, Detroit, MI, USA
- Daniel Mittelstaedt
- Department of Radiation Oncology, Henry Ford Cancer Institute, Detroit, MI, USA
- Noel Black
- Department of CT Imaging Physics, Philips Healthcare, Cleveland, OH, USA
- Paul Klahr
- Department of CT Imaging Physics, Philips Healthcare, Cleveland, OH, USA
- Liran Goshen
- Department of CT Imaging Physics, Philips Healthcare, Cleveland, OH, USA
- Xiaoxia Han
- Department of Public Health Sciences, Henry Ford Health System, Detroit, MI, USA
- Ahmed I Ghanem
- Department of Radiation Oncology, Henry Ford Cancer Institute, Detroit, MI, USA; Clinical Oncology Department, Alexandria University, Alexandria, Egypt
- Eric D Morris
- Department of Radiation Oncology, Henry Ford Cancer Institute, Detroit, MI, USA; Wayne State University, Detroit, MI, USA
- Carri Glide-Hurst
- Department of Radiation Oncology, Henry Ford Cancer Institute, Detroit, MI, USA; Wayne State University, Detroit, MI, USA
45. Kadoya N. [Deformable Image Registration and Auto-Segmentation for Various Medical Imaging Types]. Igaku Butsuri 2019; 39:12-19. [PMID: 31168032] [DOI: 10.11323/jjmp.39.1_12]
Abstract
The current status of deformable image registration (DIR) and auto-segmentation for various medical imaging types (e.g., CT, MR, and CBCT) is reported. First, we introduce the advantages/disadvantages of DIR between (1) CT and CT, (2) CT and CBCT, (3) MR and MR, and (4) CT and MR. Next, we explain atlas-based segmentation. Our explanation of DIR and auto-segmentation will aid understanding of DIR techniques.
Affiliation(s)
- Noriyuki Kadoya
- Department of Radiation Oncology, Tohoku University Graduate School of Medicine
46. Rios Velazquez E, Aerts HJWL, Gu Y, Goldgof DB, De Ruysscher D, Dekker A, Korn R, Gillies RJ, Lambin P. A semiautomatic CT-based ensemble segmentation of lung tumors: comparison with oncologists' delineations and with the surgical specimen. Radiother Oncol 2012; 105:167-73. [PMID: 23157978] [PMCID: PMC3749821] [DOI: 10.1016/j.radonc.2012.09.023]
Abstract
PURPOSE To assess the clinical relevance of a semiautomatic CT-based ensemble segmentation method by comparing it to pathology and to CT/PET manual delineations by five independent radiation oncologists in non-small cell lung cancer (NSCLC). MATERIALS AND METHODS For 20 NSCLC patients (stages Ib-IIIb), the primary tumor was delineated manually on CT/PET scans by five independent radiation oncologists and segmented using a CT-based semiautomatic tool. Tumor volume and overlap fractions between manual and semiautomatic-segmented volumes were compared. All measurements were correlated with the maximal diameter on macroscopic examination of the surgical specimen. Imaging data are available on www.cancerdata.org. RESULTS High overlap fractions were observed between the semiautomatically segmented volumes and the intersection (92.5±9.0, mean±SD) and union (94.2±6.8) of the manual delineations. No statistically significant differences in tumor volume were observed between the semiautomatic segmentation (71.4±83.2 cm(3), mean±SD) and manual delineations (81.9±94.1 cm(3); p=0.57). The maximal tumor diameter of the semiautomatically segmented tumor correlated strongly with the macroscopic diameter of the primary tumor (r=0.96). CONCLUSIONS Semiautomatic segmentation of the primary tumor on CT demonstrated high agreement with CT/PET manual delineations and correlated strongly with the macroscopic diameter considered the "gold standard". This method may be used routinely in clinical practice and could be employed as a starting point for treatment planning, target definition in multi-center clinical trials, or high-throughput data-mining research. This method is particularly suitable for peripherally located tumors.
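The overlap fraction used above is not defined in the abstract; one common convention is |A ∩ B| / min(|A|, |B|), which reaches 100% when one volume contains the other. A sketch under that assumption (hypothetical voxel sets, not the study's data):

```python
def overlap_fraction(a, b):
    """Overlap fraction |A ∩ B| / min(|A|, |B|) — one common definition;
    the cited study does not spell out which variant it used."""
    a, b = set(a), set(b)
    return len(a & b) / min(len(a), len(b))

# Hypothetical voxel sets: semiautomatic volume vs. intersection of manual volumes
semi = set(range(100))
manual_intersection = set(range(10, 90))
print(overlap_fraction(semi, manual_intersection))  # 1.0
```

Dividing by the smaller volume rewards containment; dividing by the union (Jaccard) or by one fixed reference volume would give different, generally lower, numbers.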