1
Liu X, Qu L, Xie Z, Zhao J, Shi Y, Song Z. Towards more precise automatic analysis: a systematic review of deep learning-based multi-organ segmentation. Biomed Eng Online 2024; 23:52. [PMID: 38851691 PMCID: PMC11162022 DOI: 10.1186/s12938-024-01238-8]
Abstract
Accurate segmentation of multiple organs in the head, neck, chest, and abdomen from medical images is an essential step in computer-aided diagnosis, surgical navigation, and radiation therapy. In the past few years, with a data-driven feature extraction approach and end-to-end training, automatic deep learning-based multi-organ segmentation methods have far outperformed traditional methods and become a new research topic. This review systematically summarizes the latest research in this field. We searched Google Scholar for papers published from January 1, 2016 to December 31, 2023, using keywords "multi-organ segmentation" and "deep learning", resulting in 327 papers. We followed the PRISMA guidelines for paper selection, and 195 studies were deemed to be within the scope of this review. We summarized the two main aspects involved in multi-organ segmentation: datasets and methods. Regarding datasets, we provided an overview of existing public datasets and conducted an in-depth analysis. Concerning methods, we categorized existing approaches into three major classes: fully supervised, weakly supervised and semi-supervised, based on whether they require complete label information. We summarized the achievements of these methods in terms of segmentation accuracy. In the discussion and conclusion section, we outlined and summarized the current trends in multi-organ segmentation.
Affiliation(s)
- Xiaoyu Liu
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
- Linhao Qu
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
- Ziyue Xie
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
- Jiayue Zhao
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
- Yonghong Shi
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
- Zhijian Song
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
2
El-Beblawy YM, Bakry AM, Mohamed MEA. Accuracy of formula-based volume and image segmentation-based volume in calculation of preoperative cystic jaw lesions' volume. Oral Radiol 2024; 40:259-268. [PMID: 38112919 DOI: 10.1007/s11282-023-00731-5]
Abstract
OBJECTIVE The aim of this study was to assess the accuracy of formula-based volume measurements and 3D volume analysis with different software packages in calculating the volume of preoperative cystic jaw lesions. The secondary aim was to assess the reliability and accuracy of 3 imaging software programs for measuring cystic jaw lesion volume on CBCT images. MATERIALS AND METHODS This study consisted of two parts. In the in vitro part, 2 dry human mandibles were used to create simulated osteolytic lesions to assess the accuracy of the volumetric analysis and formula-based volume. As a gold standard, the volume of each bone defect was determined by taking an impression using rapid soft silicone (Vinylight) and then quantifying the volume of the replica. Afterward, each tooth socket was scanned using high-resolution CBCT. In the retrospective part, archived CBCT radiographs taken from the database of the outpatient clinic of the oral and maxillofacial radiology department, Faculty of Dentistry, Minia University, were used to assess the reliability of the 3 software packages. The volumetric data set was exported for volume quantification using the 3 software packages (MIMICS, OnDemand, and InVesalius). Also, the three greatest orthogonal diameters of the lesions were measured, and the volume was assessed using the ellipsoid formula. Dunn's test was used for pair-wise comparisons when Friedman's test was significant. Inter-examiner agreement was assessed using Cronbach's alpha reliability coefficient and the intra-class correlation coefficient. RESULTS In the retrospective part, there was a statistically significant difference between volumetric measurements by equation and by the different software packages (P value < 0.001, effect size = 0.513). The inter-observer reliability of the measurements of the cystic lesions using the different software packages was very good. The highest inter-examiner agreement for volume measurement was found with InVesalius (Cronbach's alpha = 0.992). On the other hand, there was a statistically significant difference between dry mandible volumetric measurements and the gold standard: all software showed statistically significantly lower dry mandible volumetric measurements than the gold standard. CONCLUSION Computer-aided assessment of cystic lesion volume using InVesalius, OnDemand, and MIMICS is a readily available, easy-to-use, non-invasive option. It confers an advantage over formula-based volume because it captures the exact morphology of the lesion, so potential problems can be detected before surgery. Volume analysis with InVesalius was accurate in determining the volume of simulated periapical defects in a human cadaver mandible compared with the true volume. InVesalius also shows that open-source software can be robust yet user-friendly, with the advantage of minimal cost.
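For reference, the ellipsoid formula mentioned above estimates lesion volume from the three greatest orthogonal diameters; a minimal sketch (function and variable names are illustrative, not taken from the paper):

```python
import math

def ellipsoid_volume(d1_mm: float, d2_mm: float, d3_mm: float) -> float:
    """V = (pi / 6) * d1 * d2 * d3, equivalent to (4/3) * pi * a * b * c
    with semi-axes a, b, c equal to half of each diameter. Returns mm^3."""
    return math.pi / 6.0 * d1_mm * d2_mm * d3_mm

# Example: a lesion measuring 20 x 15 x 12 mm -> ~1885 mm^3 (~1.9 cm^3)
print(ellipsoid_volume(20.0, 15.0, 12.0))
```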
Affiliation(s)
- Yasmein Maher El-Beblawy
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Minia University, Shalaby Street, Minya, Egypt
- Ahmed Mohamed Bakry
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Minia University, Shalaby Street, Minya, Egypt
- Maha Eshaq Amer Mohamed
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Minia University, Shalaby Street, Minya, Egypt
3
Balagopal A, Dohopolski M, Suk Kwon Y, Montalvo S, Morgan H, Bai T, Nguyen D, Liang X, Zhong X, Lin MH, Desai N, Jiang S. Deep learning based automatic segmentation of the Internal Pudendal Artery in definitive radiotherapy treatment planning of localized prostate cancer. Phys Imaging Radiat Oncol 2024; 30:100577. [PMID: 38707629 PMCID: PMC11068618 DOI: 10.1016/j.phro.2024.100577]
Abstract
Background and purpose Radiation-induced erectile dysfunction (RiED) commonly affects prostate cancer patients, prompting clinical trials across institutions to explore dose-sparing of the internal pudendal arteries (IPA) to preserve sexual potency. The IPA is challenging to segment and is not conventionally considered an organ-at-risk (OAR). This study proposes a deep learning (DL) auto-segmentation model for the IPA that uses Computed Tomography (CT) and Magnetic Resonance Imaging (MRI), or CT alone, to accommodate varied clinical practices. Materials and methods A total of 86 patients with CT and MRI images and noisy IPA labels were recruited for this study. We split the data into 42/14/30 for model training, testing, and a clinical observer study, respectively. The model contained three major innovations: 1) an architecture with squeeze-and-excite blocks and modality attention for effective feature extraction and accurate segmentation, 2) a novel loss function for training the model effectively with noisy labels, and 3) a modality dropout strategy so the model can segment in the absence of MRI. Results Test dataset metrics were DSC 61.71 ± 7.7%, ASD 2.5 ± 0.87 mm, and HD95 7.0 ± 2.3 mm. AI-segmented contours showed dosimetric similarity to expert physicians' contours. The observer study indicated higher scores for AI contours (mean = 3.7) than for inexperienced physicians' contours (mean = 3.1). Inexperienced physicians improved their scores to 3.7 when starting from AI contours. Conclusion The proposed model achieved good-quality IPA contours, improving uniformity of segmentation and facilitating the introduction of standardized IPA segmentation into clinical trials and practice.
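For intuition, the modality dropout strategy mentioned above can be sketched as below; this is a generic illustration with an assumed channels-first CT/MRI layout, not the authors' implementation:

```python
import numpy as np

def modality_dropout(volume: np.ndarray, p_drop_mri: float = 0.3,
                     rng: np.random.Generator | None = None) -> np.ndarray:
    """Randomly zero the MRI channel during training so the network learns
    to produce segmentations from CT alone when MRI is unavailable.

    volume: array of shape (2, D, H, W); channel 0 = CT, channel 1 = MRI.
    p_drop_mri: probability of simulating a missing MRI (assumed value).
    """
    rng = rng or np.random.default_rng()
    out = volume.copy()
    if rng.random() < p_drop_mri:
        out[1] = 0.0  # blank the MRI channel for this training sample
    return out
```

At inference on a CT-only case, the MRI channel is simply left at zero, matching the condition the network saw during training.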
Affiliation(s)
- Anjali Balagopal
- Medical Artificial Intelligence and Automation (MAIA) Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Michael Dohopolski
- Medical Artificial Intelligence and Automation (MAIA) Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Young Suk Kwon
- Medical Artificial Intelligence and Automation (MAIA) Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Steven Montalvo
- Medical Artificial Intelligence and Automation (MAIA) Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Howard Morgan
- Medical Artificial Intelligence and Automation (MAIA) Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Ti Bai
- Medical Artificial Intelligence and Automation (MAIA) Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Dan Nguyen
- Medical Artificial Intelligence and Automation (MAIA) Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Xiao Liang
- Medical Artificial Intelligence and Automation (MAIA) Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Xinran Zhong
- Medical Artificial Intelligence and Automation (MAIA) Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Mu-Han Lin
- Medical Artificial Intelligence and Automation (MAIA) Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Neil Desai
- Medical Artificial Intelligence and Automation (MAIA) Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Steve Jiang
- Medical Artificial Intelligence and Automation (MAIA) Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, USA
4
Kakkos I, Vagenas TP, Zygogianni A, Matsopoulos GK. Towards Automation in Radiotherapy Planning: A Deep Learning Approach for the Delineation of Parotid Glands in Head and Neck Cancer. Bioengineering (Basel) 2024; 11:214. [PMID: 38534488 DOI: 10.3390/bioengineering11030214]
Abstract
The delineation of the parotid glands in head and neck (HN) carcinoma is critical for radiotherapy (RT) planning. Segmentation processes ensure precise target positioning and treatment precision, facilitate monitoring of anatomical changes, enable plan adaptation, and enhance overall patient safety. In this context, artificial intelligence (AI) and deep learning (DL) have proven exceedingly effective in precisely outlining tumor tissues and, by extension, the organs at risk. This paper introduces a DL framework using the AttentionUNet neural network for automatic parotid gland segmentation in HN cancer. The model is extensively evaluated on two public datasets and one private dataset, and its segmentation accuracy is compared with other state-of-the-art DL segmentation schemes. To assess the necessity of replanning during treatment, an additional registration method is applied to the segmentation output, aligning images of different modalities (Computed Tomography (CT) and Cone Beam CT (CBCT)). AttentionUNet outperforms similar DL methods (Dice Similarity Coefficient: 82.65% ± 1.03, Hausdorff Distance: 6.24 mm ± 2.47), confirming its effectiveness. Moreover, the subsequent registration procedure displays increased similarity, providing insights into the effects of RT procedures for treatment planning adaptations. The implementation of the proposed methods indicates the effectiveness of DL not only for automatic delineation of anatomical structures, but also for providing information to support adaptive RT.
Affiliation(s)
- Ioannis Kakkos
- Biomedical Engineering Laboratory, National Technical University of Athens, 15773 Athens, Greece
- Theodoros P Vagenas
- Biomedical Engineering Laboratory, National Technical University of Athens, 15773 Athens, Greece
- Anna Zygogianni
- Radiation Oncology Unit, 1st Department of Radiology, ARETAIEION University Hospital, 11528 Athens, Greece
- George K Matsopoulos
- Biomedical Engineering Laboratory, National Technical University of Athens, 15773 Athens, Greece
5
Kolasa K, Admassu B, Hołownia-Voloskova M, Kędzior KJ, Poirrier JE, Perni S. Systematic reviews of machine learning in healthcare: a literature review. Expert Rev Pharmacoecon Outcomes Res 2024; 24:63-115. [PMID: 37955147 DOI: 10.1080/14737167.2023.2279107]
Abstract
INTRODUCTION The increasing availability of data and computing power has made machine learning (ML) a viable approach to faster, more efficient healthcare delivery. METHODS A systematic literature review (SLR) of published SLRs evaluating ML applications in healthcare settings published between 1 January 2010 and 27 March 2023 was conducted. RESULTS In total, 220 SLRs covering 10,462 ML algorithms were reviewed. The main applications of ML in medicine related to clinical prediction and disease prognosis in oncology and neurology, using imaging data. Accuracy, specificity, and sensitivity were reported in 56%, 28%, and 25% of SLRs, respectively. Internal and external validation was reported in 53% and less than 1% of cases, respectively. The most common modeling approach was neural networks (2,454 ML algorithms), followed by support vector machines and random forests/decision trees (1,578 and 1,522 ML algorithms, respectively). EXPERT OPINION The review indicated considerable gaps in the reporting of ML performance, particularly internal and external validation. Greater accessibility to healthcare data for developers could ensure faster adoption of ML algorithms into clinical practice.
Affiliation(s)
- Katarzyna Kolasa
- Division of Health Economics and Healthcare Management, Kozminski University, Warsaw, Poland
- Bisrat Admassu
- Division of Health Economics and Healthcare Management, Kozminski University, Warsaw, Poland
6
Costea M, Zlate A, Serre AA, Racadot S, Baudier T, Chabaud S, Grégoire V, Sarrut D, Biston MC. Evaluation of different algorithms for automatic segmentation of head-and-neck lymph nodes on CT images. Radiother Oncol 2023; 188:109870. [PMID: 37634765 DOI: 10.1016/j.radonc.2023.109870]
Abstract
PURPOSE To investigate the performance of 4 atlas-based (multi-ABAS) and 2 deep learning (DL) solutions for automatic segmentation (AS) of head-and-neck (HN) elective nodes (CTVn) on CT images. MATERIAL AND METHODS Bilateral CTVn levels of 69 HN cancer patients were delineated on contrast-enhanced planning CT. Ten and 49 patients were used for the atlas library and for training a mono-centric DL model, respectively. The remaining 20 patients were used for testing. Additionally, three commercial multi-ABAS methods and one commercial multi-centric DL solution were investigated. Quantitative evaluation was performed using the volumetric Dice Similarity Coefficient (DSC) and 95th-percentile Hausdorff distance (HD95%). Blind evaluation of 3 solutions was performed by 4 physicians, one of whom recorded the time needed for manual corrections. A dosimetric study was finally conducted using automated planning. RESULTS Overall, DL solutions had better DSC and HD95% results than multi-ABAS methods. No statistically significant difference was found between the 2 DL solutions. However, the contours provided by the multi-centric DL solution were preferred by all physicians and were also faster to correct (1.1 min vs 4.17 min, on average). Manual corrections for multi-ABAS contours took 6.52 min on average. Overall, contour accuracy decreased from CTVn2 to CTVn3 and to CTVn4. Using the AS contours in treatment planning resulted in underdosage of the elective target volume. CONCLUSION Among all methods, the multi-centric DL method showed the highest delineation accuracy and was rated best by experts. Manual corrections remain necessary to avoid elective target underdosage. Finally, AS contours help reduce the workload of the manual delineation task.
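For reference, the HD95% metric used above can be computed from surface-distance distributions; a minimal sketch with NumPy/SciPy (the symmetric max-of-percentiles convention is an assumption, as implementations vary):

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def hd95(mask_a: np.ndarray, mask_b: np.ndarray,
         spacing_mm: tuple[float, float, float]) -> float:
    """95th-percentile Hausdorff distance (mm) between two binary masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    surf_a = a & ~binary_erosion(a)          # surface voxels of A
    surf_b = b & ~binary_erosion(b)          # surface voxels of B
    # Distance (mm) from every voxel to the nearest surface voxel of the other mask
    d_to_b = distance_transform_edt(~surf_b, sampling=spacing_mm)
    d_to_a = distance_transform_edt(~surf_a, sampling=spacing_mm)
    return float(max(np.percentile(d_to_b[surf_a], 95),
                     np.percentile(d_to_a[surf_b], 95)))
```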
Affiliation(s)
- Madalina Costea
- Centre Léon Bérard, 28 rue Laennec, LYON 69373 Cedex 08, France; CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, Villeurbanne, France
- Thomas Baudier
- Centre Léon Bérard, 28 rue Laennec, LYON 69373 Cedex 08, France; CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, Villeurbanne, France
- Sylvie Chabaud
- Unité de Biostatistique et d'Evaluation des Thérapeutiques, Centre Léon Bérard, Lyon 69373, France
- David Sarrut
- Centre Léon Bérard, 28 rue Laennec, LYON 69373 Cedex 08, France; CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, Villeurbanne, France
- Marie-Claude Biston
- Centre Léon Bérard, 28 rue Laennec, LYON 69373 Cedex 08, France; CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, Villeurbanne, France
7
Doolan PJ, Charalambous S, Roussakis Y, Leczynski A, Peratikou M, Benjamin M, Ferentinos K, Strouthos I, Zamboglou C, Karagiannis E. A clinical evaluation of the performance of five commercial artificial intelligence contouring systems for radiotherapy. Front Oncol 2023; 13:1213068. [PMID: 37601695 PMCID: PMC10436522 DOI: 10.3389/fonc.2023.1213068]
Abstract
Purpose/objectives Auto-segmentation with artificial intelligence (AI) offers an opportunity to reduce inter- and intra-observer variability in contouring, to improve contour quality, and to reduce the time taken for this manual task. In this work we benchmark the AI auto-segmentation contours produced by five commercial vendors against a common dataset. Methods and materials The organ at risk (OAR) contours generated by five commercial AI auto-segmentation solutions (Mirada (Mir), MVision (MV), Radformation (Rad), RayStation (Ray) and TheraPanacea (Ther)) were compared to manually drawn expert contours from 20 breast, 20 head and neck, 20 lung and 20 prostate patients. Comparisons were made using geometric similarity metrics including volumetric and surface Dice similarity coefficients (vDSC and sDSC), Hausdorff distance (HD) and Added Path Length (APL). To assess the time saved, the time taken to manually draw the expert contours, as well as the time to correct the AI contours, were recorded. Results The number of CT contours offered differed between the AI auto-segmentation solutions at the time of the study (Mir 99; MV 143; Rad 83; Ray 67; Ther 86), with all offering contours of some lymph node levels as well as OARs. Averaged across all structures, the median vDSCs were good for all systems and compared favorably with the existing literature: Mir 0.82; MV 0.88; Rad 0.86; Ray 0.87; Ther 0.88. All systems offer substantial time savings: breast 14-20 min; head and neck 74-93 min; lung 20-26 min; prostate 35-42 min. The time saved, averaged across all structures, was similar for all systems: Mir 39.8 min; MV 43.6 min; Rad 36.6 min; Ray 43.2 min; Ther 45.2 min. Conclusions All five commercial AI auto-segmentation solutions evaluated in this work offer high-quality contours in significantly reduced time compared to manual contouring, and could be used to make the radiotherapy workflow more efficient and standardized.
Affiliation(s)
- Paul J. Doolan
- Department of Medical Physics, German Oncology Center, Limassol, Cyprus
- Yiannis Roussakis
- Department of Medical Physics, German Oncology Center, Limassol, Cyprus
- Agnes Leczynski
- Department of Radiation Oncology, German Oncology Center, Limassol, Cyprus
- Mary Peratikou
- Department of Radiation Oncology, German Oncology Center, Limassol, Cyprus
- Melka Benjamin
- Department of Radiation Oncology, German Oncology Center, Limassol, Cyprus
- Konstantinos Ferentinos
- Department of Radiation Oncology, German Oncology Center, Limassol, Cyprus
- School of Medicine, European University Cyprus, Nicosia, Cyprus
- Iosif Strouthos
- Department of Radiation Oncology, German Oncology Center, Limassol, Cyprus
- School of Medicine, European University Cyprus, Nicosia, Cyprus
- Constantinos Zamboglou
- Department of Radiation Oncology, German Oncology Center, Limassol, Cyprus
- School of Medicine, European University Cyprus, Nicosia, Cyprus
- Department of Radiation Oncology, Medical Center – University of Freiburg, Freiburg, Germany
| | - Efstratios Karagiannis
- Department of Radiation Oncology, German Oncology Center, Limassol, Cyprus
- School of Medicine, European University Cyprus, Nicosia, Cyprus
| |
Collapse
|
8
Delaby N, Barateau A, Chiavassa S, Biston MC, Chartier P, Graulières E, Guinement L, Huger S, Lacornerie T, Millardet-Martin C, Sottiaux A, Caron J, Gensanne D, Pointreau Y, Coutte A, Biau J, Serre AA, Castelli J, Tomsej M, Garcia R, Khamphan C, Badey A. Practical and technical key challenges in head and neck adaptive radiotherapy: The GORTEC point of view. Phys Med 2023; 109:102568. [PMID: 37015168 DOI: 10.1016/j.ejmp.2023.102568]
Abstract
Anatomical variations occur during head and neck (H&N) radiotherapy (RT) treatment. These variations may result in underdosage of the target volume or overdosage of the organs at risk. Replanning during the treatment course can be triggered to overcome this issue. Owing to technological, methodological and clinical evolutions, tools for adaptive RT (ART) are becoming increasingly sophisticated. The aim of this paper is to give an overview of the key steps and tools of an H&N ART workflow from the point of view of a group of French-speaking medical physicists and physicians (from GORTEC). The focus is on image registration, segmentation, estimation of the delivered dose of the day, workflow and quality assurance for the implementation of H&N offline and online ART. Practical recommendations are given to assist physicians and medical physicists in a clinical workflow.
9
Li W, Song H, Li Z, Lin Y, Shi J, Yang J, Wu W. OrbitNet-A fully automated orbit multi-organ segmentation model based on transformer in CT images. Comput Biol Med 2023; 155:106628. [PMID: 36809695 DOI: 10.1016/j.compbiomed.2023.106628]
Abstract
The delineation of orbital organs is a vital step in orbital disease diagnosis and preoperative planning. However, accurate multi-organ segmentation remains a clinical challenge with two limitations. First, soft-tissue contrast is relatively low, so organ boundaries are often unclear. Second, the optic nerve and the rectus muscle are difficult to distinguish because they are spatially adjacent and have similar geometry. To address these challenges, we propose the OrbitNet model to automatically segment orbital organs in CT images. Specifically, we present a global feature extraction module based on the transformer architecture, called the FocusTrans encoder, which enhances the ability to extract boundary features. To make the network focus on extracting edge features of the optic nerve and rectus muscle, SA blocks replace the convolution blocks in the decoding stage. In addition, we use the structural similarity measure (SSIM) loss as part of a hybrid loss function to better learn the edge differences of the organs. OrbitNet was trained and tested on a CT dataset collected by the Eye Hospital of Wenzhou Medical University. Experimental results show that our proposed model achieved superior results: the average Dice Similarity Coefficient (DSC) is 83.9%, the average 95% Hausdorff Distance (HD95) is 1.62 mm, and the average Symmetric Surface Distance (ASSD) is 0.47 mm. Our model also performs well on the MICCAI 2015 challenge dataset.
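A hybrid loss of the kind described above can be sketched as follows; the equal weighting and the use of scikit-image's SSIM are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from skimage.metrics import structural_similarity

def dice_loss(pred: np.ndarray, target: np.ndarray, eps: float = 1e-6) -> float:
    """Soft Dice loss on probability maps in [0, 1]: penalizes poor overlap."""
    inter = float((pred * target).sum())
    return 1.0 - (2.0 * inter + eps) / (float(pred.sum() + target.sum()) + eps)

def hybrid_loss(pred: np.ndarray, target: np.ndarray, w_ssim: float = 0.5) -> float:
    """Combine a region-overlap term (Dice) with a structure/edge term (1 - SSIM)."""
    ssim = structural_similarity(pred, target, data_range=1.0)
    return (1.0 - w_ssim) * dice_loss(pred, target) + w_ssim * (1.0 - ssim)
```

In a deep learning framework the same combination would be expressed with differentiable tensor operations so it can be backpropagated; the NumPy version above only illustrates the arithmetic.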
Affiliation(s)
- Wentao Li
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, 100081, China
- Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, 100081, China
- Zongyu Li
- School of Medical and Technology, Beijing Institute of Technology, Beijing, 100081, China
- Yucong Lin
- School of Medical and Technology, Beijing Institute of Technology, Beijing, 100081, China
- Jieliang Shi
- Eye Hospital of Wenzhou Medical University, Wenzhou, 325072, China
- Jian Yang
- School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Wencan Wu
- Eye Hospital of Wenzhou Medical University, Wenzhou, 325072, China
10
Ginn JS, Gay HA, Hilliard J, Shah J, Mistry N, Möhler C, Hugo GD, Hao Y. A clinical and time savings evaluation of a deep learning automatic contouring algorithm. Med Dosim 2022; 48:55-60. [PMID: 36550000 DOI: 10.1016/j.meddos.2022.11.001]
Abstract
Automatic contouring algorithms may streamline clinical workflows by reducing normal organ-at-risk (OAR) contouring time. Here we report the first comprehensive quantitative and qualitative evaluation, along with a time-savings assessment, of a prototype deep learning segmentation algorithm from Siemens Healthineers. The accuracy of contours generated by the prototype was evaluated quantitatively using the Sørensen-Dice coefficient (Dice), Jaccard index (JC), and Hausdorff distance (Haus). Normal pelvic and head and neck OAR contours were evaluated retrospectively, comparing the automatic and manual clinical contours in 100 patient cases. Contouring performance outliers were investigated. To quantify the time savings, a certified medical dosimetrist manually contoured de novo and, separately, edited the generated OARs for 10 head and neck and 10 pelvic patients. The automatic, edited, and manually generated contours were visually evaluated and scored by a practicing radiation oncologist on a scale of 1-4, where a higher score indicated better performance. The quantitative comparison revealed high (> 0.8) Dice and JC performance for relatively large organs such as the lungs, brain, femurs, and kidneys. Smaller elongated structures that had relatively low Dice and JC values tended to have low Hausdorff distances. Poorly performing outlier cases revealed common anatomical inconsistencies, including overestimation of the bladder and incorrect superior-inferior truncation of the spinal cord and femur contours. In all cases, editing contours was faster than manual contouring, with an average time saving of 43.4%, or 11.8 minutes per patient. The physician scored 240 structures, with > 95% of structures receiving a score of 3 or 4. Of the structures reviewed, only 11 needed major revision or to be redone entirely. Our results indicate the evaluated auto-contouring solution has the potential to reduce clinical contouring time. The algorithm's performance is promising, but human review and some editing are required prior to clinical use.
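The overlap metrics reported above are straightforward to compute from binary masks; a minimal sketch (assumes both masks are non-empty):

```python
import numpy as np

def dice_and_jaccard(auto: np.ndarray, manual: np.ndarray) -> tuple[float, float]:
    """Dice = 2|A∩B| / (|A| + |B|);  Jaccard = |A∩B| / |A∪B|."""
    a, b = auto.astype(bool), manual.astype(bool)
    inter = np.logical_and(a, b).sum()
    dice = 2.0 * inter / (a.sum() + b.sum())
    jaccard = inter / np.logical_or(a, b).sum()
    return float(dice), float(jaccard)
```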
Affiliation(s)
- John S Ginn
- Department of Radiation Oncology, Washington University School of Medicine, St. Louis, MO 63110, USA
- Hiram A Gay
- Department of Radiation Oncology, Washington University School of Medicine, St. Louis, MO 63110, USA
- Jessica Hilliard
- Department of Radiation Oncology, Washington University School of Medicine, St. Louis, MO 63110, USA
- Geoffrey D Hugo
- Department of Radiation Oncology, Washington University School of Medicine, St. Louis, MO 63110, USA
- Yao Hao
- Department of Radiation Oncology, Washington University School of Medicine, St. Louis, MO 63110, USA
11
Costea M, Zlate A, Durand M, Baudier T, Grégoire V, Sarrut D, Biston MC. Comparison of atlas-based and deep learning methods for organs at risk delineation on head-and-neck CT images using an automated treatment planning system. Radiother Oncol 2022; 177:61-70. [PMID: 36328093 DOI: 10.1016/j.radonc.2022.10.029]
Abstract
BACKGROUND AND PURPOSE To investigate the performance of head-and-neck (HN) organ-at-risk (OAR) automatic segmentation (AS) using four atlas-based (ABAS) and two deep learning (DL) solutions. MATERIAL AND METHODS All patients underwent iodine contrast-enhanced planning CT. Fourteen OARs were manually delineated. The DL.1 and DL.2 solutions were trained with 63 mono-centric patients and >1000 multi-centric patients, respectively. Ten and 15 patients with varied anatomies were selected for the atlas library and for testing, respectively. The evaluation was based on geometric indices (Dice coefficient and 95th-percentile Hausdorff distance (HD95%)), the time needed for manual corrections, and clinical dosimetric endpoints obtained using automated treatment planning. RESULTS Both Dice and HD95% results indicated that DL algorithms generally performed better than ABAS algorithms for automatic segmentation of HN OARs. However, the hybrid-ABAS (ABAS.3) algorithm sometimes provided higher agreement with the reference contours than the 2 DL solutions. Compared with DL.2 and ABAS.3, DL.1 contours were the fastest to correct. For the 3 solutions, the differences in dose distributions obtained using AS contours and AS + manually corrected contours were not statistically significant. Large dose differences could be observed when OAR contours were at short distances from the targets, although this was not systematic. CONCLUSION DL methods generally showed higher delineation accuracy than ABAS methods for AS of HN OARs. Most ABAS contours had high conformity to the reference but were more time consuming than DL algorithms, especially when considering both the computing time and the time spent on manual corrections.
Affiliation(s)
- Madalina Costea
- Centre Léon Bérard, 28 rue Laennec, 69373 LYON Cedex 08, France; CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, Villeurbanne, France
- Morgane Durand
- Centre Léon Bérard, 28 rue Laennec, 69373 LYON Cedex 08, France
- Thomas Baudier
- Centre Léon Bérard, 28 rue Laennec, 69373 LYON Cedex 08, France; CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, Villeurbanne, France
- David Sarrut
- Centre Léon Bérard, 28 rue Laennec, 69373 LYON Cedex 08, France; CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, Villeurbanne, France
- Marie-Claude Biston
- Centre Léon Bérard, 28 rue Laennec, 69373 LYON Cedex 08, France; CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, Villeurbanne, France
12
Yan C, Guo B, Tendulkar R, Xia P. Contour similarity and its implication on inverse prostate SBRT treatment planning. J Appl Clin Med Phys 2022; 24:e13809. [PMID: 36300837 PMCID: PMC9924104 DOI: 10.1002/acm2.13809]
Abstract
PURPOSE The success of auto-segmentation is measured by the similarity between auto and manual contours, often quantified by the Dice coefficient (DC). The dosimetric impact of contour variability on inverse planning has rarely been reported. The main aim of this study is to investigate whether automatically generated organs-at-risk (OARs) could be used in inverse prostate stereotactic body radiation therapy (SBRT) planning and whether the dosimetric parameters remain clinically acceptable after radiation oncologists modify the OARs. METHODS AND MATERIALS Planning computed tomography images from 10 patients treated with SBRT for prostate cancer were selected and automatically segmented by commercially available atlas-based software. The automatically generated OAR contours were compared with the manually drawn contours. Two volumetric modulated arc therapy (VMAT) plans, autoRec-VMAT (where only the automatically generated rectum was used in optimization) and autoAll-VMAT (where all automatically generated OARs were used in inverse optimization), were generated. Dosimetric parameters based on the manually drawn PTV and OARs were compared with the clinically approved plans (manu-VMAT). RESULTS The DCs for the rectum contours varied from 0.55 to 0.74 with a mean value of 0.665. Differences in D95 of the PTV between autoRec-VMAT and manu-VMAT plans varied from 0.03% to -2.85% with a mean value of -0.64%. Differences in D0.03cc of the manual rectum between the two plans varied from -0.86% to 9.94% with a mean value of 2.71%. Differences in D95 of the PTV between autoAll-VMAT and manu-VMAT plans varied from 0.28% to -2.9% with a mean value of -0.83%. Differences in D0.03cc of the manual rectum between the two plans varied from -0.76% to 6.72% with a mean value of 2.62%. CONCLUSION Our study implies that it is possible to use unedited automatically generated OARs to perform initial inverse prostate SBRT planning. After radiation oncologists modify/approve the OARs, the plan quality based on the manually drawn OARs remains clinically acceptable, and re-optimization may not be needed.
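The DVH points compared above can be read off a dose grid and a structure mask; a minimal sketch (the uniform voxel grid and function names are illustrative assumptions):

```python
import numpy as np

def d95(dose: np.ndarray, mask: np.ndarray) -> float:
    """D95: minimum dose covering 95% of the structure, i.e. the 5th
    percentile of voxel doses inside the mask."""
    return float(np.percentile(dose[mask.astype(bool)], 5))

def d_hot(dose: np.ndarray, mask: np.ndarray,
          voxel_volume_cc: float, hot_cc: float = 0.03) -> float:
    """Near-maximum dose D0.03cc: minimum dose within the hottest 0.03 cm^3."""
    voxels = np.sort(dose[mask.astype(bool)])[::-1]    # hottest voxels first
    n = max(1, int(round(hot_cc / voxel_volume_cc)))   # voxel count in 0.03 cc
    return float(voxels[:n].min())
```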
Affiliation(s)
- Chenyu Yan
- Department of Radiation Oncology, Cleveland Clinic Foundation, Cleveland, Ohio, USA
- Bingqi Guo
- Department of Radiation Oncology, Cleveland Clinic Foundation, Cleveland, Ohio, USA
- Rahul Tendulkar
- Department of Radiation Oncology, Cleveland Clinic Foundation, Cleveland, Ohio, USA
- Ping Xia
- Department of Radiation Oncology, Cleveland Clinic Foundation, Cleveland, Ohio, USA
13
Watkins WT, Qing K, Han C, Hui S, Liu A. Auto-segmentation for total marrow irradiation. Front Oncol 2022; 12:970425. [PMID: 36110933 PMCID: PMC9468379 DOI: 10.3389/fonc.2022.970425]
Abstract
Purpose To evaluate the accuracy and efficiency of artificial intelligence (AI) segmentation in Total Marrow Irradiation (TMI), including contours throughout the head and neck (H&N), thorax, abdomen, and pelvis. Methods An AI segmentation software was clinically introduced for total body contouring in TMI, including 27 organs at risk (OARs) and 4 planning target volumes (PTVs). This work compares the clinically utilized contours to the AI-TMI contours for 21 patients. Structure and image DICOM data were used to generate comparisons, including volumetric, spatial, and dosimetric variations between the AI- and human-edited contour sets. Conventional volume and surface measures, including the Sørensen-Dice coefficient (Dice) and the 95th-percentile Hausdorff distance (HD95), were used, and novel efficiency metrics were introduced. Clinical efficiency gains were estimated as the percentage of the AI contour surface within 1 mm of the clinical contour surface: an unedited AI contour has an efficiency gain = 100%, and an AI contour with 70% of its surface < 1 mm from a clinical contour has an efficiency gain of 70%. Dosimetric deviations were estimated from the clinical dose distribution by computing the dose volume histogram (DVH) for all structures. Results A total of 467 contours were compared in the 21 patients. In PTVs, contour surfaces deviated by > 1 mm in 38.6% ± 23.1% of structures, an average efficiency gain of 61.4%. Deviations > 5 mm were detected in 12.0% ± 21.3% of the PTV contours. In OARs, deviations > 1 mm were detected in 24.4% ± 27.1% of the structure surfaces and > 5 mm in 7.2% ± 18.0%, an average clinical efficiency gain of 75.6%. In H&N OARs, efficiency gains ranged from 42% in the optic chiasm to 100% in the eyes (unedited in all cases). In the thorax, average efficiency gains were > 80% in the spinal cord, heart, and both lungs. Efficiency gains ranged from 60-70% in the spleen, stomach, rectum, and bowel and 75-84% in the liver, kidney, and bladder. DVH differences exceeded 0.05 in 109/467 curves at any dose level. The most common 5%-DVH variations were in the esophagus (86%), rectum (48%), and PTVs (22%). Conclusions AI auto-segmentation software offers a powerful solution for enhanced efficiency in TMI treatment planning. Whole body segmentation including PTVs and normal organs was successful based on spatial and dosimetric comparison.
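The efficiency-gain metric defined above can be sketched with a distance transform; the surface extraction and tolerance handling are illustrative assumptions, not the authors' code:

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def efficiency_gain(ai_mask: np.ndarray, clinical_mask: np.ndarray,
                    spacing_mm: tuple[float, float, float],
                    tol_mm: float = 1.0) -> float:
    """Fraction of the AI contour surface within tol_mm of the clinical
    contour surface (1.0 means no editing was needed)."""
    ai = ai_mask.astype(bool)
    clin = clinical_mask.astype(bool)
    ai_surf = ai & ~binary_erosion(ai)
    clin_surf = clin & ~binary_erosion(clin)
    # Distance (mm) from each voxel to the nearest clinical surface voxel
    dist = distance_transform_edt(~clin_surf, sampling=spacing_mm)
    return float((dist[ai_surf] <= tol_mm).mean())
```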
Affiliation(s)
- William Tyler Watkins
- Department of Radiation Oncology, City of Hope National Medical Center, Duarte, CA, United States
14
Lin H, Dong L, Jimenez RB. Emerging Technologies in Mitigating the Risks of Cardiac Toxicity From Breast Radiotherapy. Semin Radiat Oncol 2022; 32:270-281. [DOI: 10.1016/j.semradonc.2022.01.002]
15
Huang B, Ye Y, Xu Z, Cai Z, He Y, Zhong Z, Liu L, Chen X, Chen H, Huang B. 3D Lightweight Network for Simultaneous Registration and Segmentation of Organs-at-Risk in CT Images of Head and Neck Cancer. IEEE Trans Med Imaging 2022; 41:951-964. [PMID: 34784272 DOI: 10.1109/tmi.2021.3128408]
Abstract
Image-guided radiation therapy (IGRT) is the most effective treatment for head and neck cancer. Successful implementation of IGRT requires accurate delineation of organs-at-risk (OARs) in the computed tomography (CT) images. In routine clinical practice, OARs are manually segmented by oncologists, which is time-consuming, laborious, and subjective. To assist oncologists in OAR contouring, we propose a three-dimensional (3D) lightweight framework for simultaneous OAR registration and segmentation. The registration network was designed to align a selected OAR template to a new image volume for OAR localization. A region of interest (ROI) selection layer then generates ROIs of OARs from the registration results, which are fed into a multiview segmentation network for accurate OAR segmentation. To improve the performance of the registration and segmentation networks, a centre-distance loss was designed for the registration network, an ROI classification branch was employed in the segmentation network, and context information was incorporated to iteratively promote the performance of both networks. The segmentation results were further refined with shape information for final delineation. We evaluated the registration and segmentation performance of the proposed framework using three datasets. On the internal dataset, the Dice similarity coefficient (DSC) of registration and segmentation was 69.7% and 79.6%, respectively. In addition, our framework was evaluated on two external datasets and achieved satisfactory performance. These results show that the 3D lightweight framework achieves fast, accurate and robust registration and segmentation of OARs in head and neck cancer. The proposed framework has the potential to assist oncologists in OAR delineation.
16
Kawahara D, Tsuneda M, Ozawa S, Okamoto H, Nakamura M, Nishio T, Saito A, Nagata Y. Stepwise deep neural network (stepwise-net) for head and neck auto-segmentation on CT images. Comput Biol Med 2022; 143:105295. [PMID: 35168082 DOI: 10.1016/j.compbiomed.2022.105295]
Abstract
OBJECTIVE The current study aims to propose an auto-segmentation model for head and neck cancer on CT images using a stepwise deep neural network (stepwise-net). MATERIAL AND METHODS Six normal-tissue structures in the head and neck region of 3D CT images (brainstem, optic nerve, left and right parotid glands, and left and right submandibular glands) were segmented with deep learning. In addition to a conventional convolutional neural network (CNN) based on U-net, a stepwise neural network (stepwise-net) based on a 3D fully convolutional network (FCN) was developed. The stepwise-net comprises two networks: the first identifies the target region for segmentation on low-resolution images; the target region is then cropped and used as the input for the second network, which predicts the segmentation. These were compared with a clinically used atlas-based segmentation. RESULTS The DSCs of the stepwise-net were significantly higher than those of the atlas-based method for all organ-at-risk structures. Similarly, the JSCs of the stepwise-net were significantly higher than those of the atlas-based method for all organ-at-risk structures. The Hausdorff distance (HD) was significantly smaller than that of the atlas-based method for all organ-at-risk structures. Compared with the conventional U-net, the stepwise-net had higher DSC and JSC and smaller HD. CONCLUSIONS We found that the stepwise-net is superior to conventional U-net-based and atlas-based segmentation. Our proposed model is a potentially valuable method for improving the efficiency of head and neck radiotherapy treatment planning.
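The coarse-to-fine strategy described above can be sketched as follows; `coarse_net` and `fine_net` stand in for trained models, and the scale factor and crop margin are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import zoom

def stepwise_segment(volume: np.ndarray, coarse_net, fine_net,
                     scale: float = 0.25, margin: int = 8) -> np.ndarray:
    """Stage 1: localize the organ on a downsampled volume.
    Stage 2: segment a full-resolution crop around that location."""
    coarse_mask = coarse_net(zoom(volume, scale, order=1)) > 0.5
    idx = np.argwhere(coarse_mask) / scale               # map back to full resolution
    lo = np.maximum(idx.min(axis=0).astype(int) - margin, 0)
    hi = np.minimum(idx.max(axis=0).astype(int) + margin,
                    np.array(volume.shape))
    crop = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    out = np.zeros(volume.shape, dtype=bool)
    out[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = fine_net(crop) > 0.5
    return out
```

The sketch assumes the coarse network finds at least one voxel of the target; production code would need to handle an empty stage-1 mask.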
Affiliation(s)
- Daisuke Kawahara
- Department of Radiation Oncology, Graduate School of Biomedical Health Sciences, Hiroshima University, Hiroshima, 734-8551, Japan
- Masato Tsuneda
- Department of Radiation Oncology, MR Linac ART Division, Graduate School of Medicine, Chiba University, Chiba, 260-8670, Japan
- Shuichi Ozawa
- Hiroshima High-Precision Radiotherapy Cancer Center, Hiroshima, 732-0057, Japan
- Hiroyuki Okamoto
- Department of Medical Physics, National Cancer Center Hospital, Tokyo, 104-0045, Japan
- Mitsuhiro Nakamura
- Division of Medical Physics, Department of Information Technology and Medical Engineering, Human Health Sciences, Graduate School of Medicine, Kyoto University, Kyoto, 606-8507, Japan
- Teiji Nishio
- Medical Physics Laboratory, Division of Health Science, Graduate School of Medicine, Osaka University, Osaka, 565-0871, Japan
- Akito Saito
- Department of Radiation Oncology, Graduate School of Biomedical Health Sciences, Hiroshima University, Hiroshima, 734-8551, Japan
- Yasushi Nagata
- Department of Radiation Oncology, Graduate School of Biomedical Health Sciences, Hiroshima University, Hiroshima, 734-8551, Japan; Hiroshima High-Precision Radiotherapy Cancer Center, Hiroshima, 732-0057, Japan
17
Iyer A, Thor M, Onochie I, Hesse J, Zakeri K, LoCastro E, Jiang J, Veeraraghavan H, Elguindi S, Lee NY, Deasy JO, Apte AP. Prospectively-validated deep learning model for segmenting swallowing and chewing structures in CT. Phys Med Biol 2022; 67. [PMID: 34874302 PMCID: PMC8911366 DOI: 10.1088/1361-6560/ac4000]
Abstract
Objective. Delineating swallowing and chewing structures aids in radiotherapy (RT) treatment planning to limit dysphagia, trismus, and speech dysfunction. We aim to develop an accurate and efficient method to automate this process. Approach. CT scans of 242 head and neck (H&N) cancer patients acquired from 2004 to 2009 at our institution were used to develop auto-segmentation models for the masseters, medial pterygoids, larynx, and pharyngeal constrictor muscle using DeepLabV3+. A cascaded framework was used, wherein models were trained sequentially to spatially constrain each structure group based on prior segmentations. Additionally, an ensemble of models, combining contextual information from axial, coronal, and sagittal views, was used to improve segmentation accuracy. Prospective evaluation was conducted by measuring the amount of manual editing required in 91 H&N CT scans acquired February-May 2021. Main results. Medians and inter-quartile ranges of Dice similarity coefficients (DSC) computed on the retrospective testing set (N = 24) were 0.87 (0.85-0.89) for the masseters, 0.80 (0.79-0.81) for the medial pterygoids, 0.81 (0.79-0.84) for the larynx, and 0.69 (0.67-0.71) for the constrictor. Auto-segmentations, when compared to two sets of manual segmentations in 10 randomly selected scans, showed better agreement (DSC) with each observer than inter-observer DSC. Prospective analysis showed most manual modifications needed for clinical use were minor, suggesting auto-contouring could increase clinical efficiency. Trained segmentation models are available for research use upon request via https://github.com/cerr/CERR/wiki/Auto-Segmentation-models. Significance. We developed deep learning-based auto-segmentation models for swallowing and chewing structures in CT and demonstrated their potential for use in treatment planning to limit complications post-RT. To the best of our knowledge, this is the only prospectively-validated deep learning-based model for segmenting chewing and swallowing structures in CT. Segmentation models have been made open-source to facilitate reproducibility and multi-institutional research.
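The three-view ensemble mentioned above can be sketched as below; the per-view model callables and the simple probability averaging are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def ensemble_three_views(volume: np.ndarray, axial_net, coronal_net,
                         sagittal_net) -> np.ndarray:
    """Average probability maps predicted from axial, coronal, and sagittal
    slicings of the same volume (shape (Z, Y, X)), then threshold."""
    p_ax = axial_net(volume)                             # (Z, Y, X)
    p_co = coronal_net(volume.transpose(1, 0, 2))        # sliced along Y
    p_sa = sagittal_net(volume.transpose(2, 0, 1))       # sliced along X
    prob = (p_ax
            + p_co.transpose(1, 0, 2)                    # back to (Z, Y, X)
            + p_sa.transpose(1, 2, 0)) / 3.0
    return prob > 0.5
```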
Affiliation(s)
- Aditi Iyer
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, United States of America
- Maria Thor
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, United States of America
- Ifeanyirochukwu Onochie
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, United States of America
- Jennifer Hesse
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, United States of America
- Kaveh Zakeri
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, United States of America
- Eve LoCastro
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, United States of America
- Jue Jiang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, United States of America
- Harini Veeraraghavan
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, United States of America
- Sharif Elguindi
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, United States of America
- Nancy Y Lee
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, United States of America
- Joseph O Deasy
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, United States of America
- Aditya P Apte
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, United States of America
18
Volpe S, Pepa M, Zaffaroni M, Bellerba F, Santamaria R, Marvaso G, Isaksson LJ, Gandini S, Starzyńska A, Leonardi MC, Orecchia R, Alterio D, Jereczek-Fossa BA. Machine Learning for Head and Neck Cancer: A Safe Bet?-A Clinically Oriented Systematic Review for the Radiation Oncologist. Front Oncol 2021; 11:772663. [PMID: 34869010 PMCID: PMC8637856 DOI: 10.3389/fonc.2021.772663]
Abstract
BACKGROUND AND PURPOSE Machine learning (ML) is emerging as a feasible approach to optimize patients' care paths in Radiation Oncology. Applications include autosegmentation, treatment planning optimization, and prediction of oncological and toxicity outcomes. The purpose of this clinically oriented systematic review is to illustrate the potential and limitations of the most commonly used ML models in solving everyday clinical issues in head and neck cancer (HNC) radiotherapy (RT). MATERIALS AND METHODS Electronic databases were screened up to May 2021. Studies dealing with ML and radiomics were considered eligible. The quality of the included studies was rated with an adapted version of the qualitative checklist originally developed by Luo et al. All statistical analyses were performed using R version 3.6.1. RESULTS Forty-eight studies (21 on autosegmentation, 4 on treatment planning, 12 on oncological outcome prediction, 10 on toxicity prediction, and 1 on determinants of postoperative RT) were included in the analysis. The most common imaging modality was computed tomography (CT) (40%), followed by magnetic resonance (MR) (10%). Quantitative image features were considered in 9 studies (19%). No significant differences were identified in global and methodological scores when works were stratified by task (i.e., autosegmentation). DISCUSSION AND CONCLUSION The range of possible applications of ML in the field of HN Radiation Oncology is wide, albeit this area of research is relatively young. Overall, if not safe yet, ML is most probably a bet worth making.
Affiliation(s)
- Stefania Volpe
- Division of Radiation Oncology, European Institute of Oncology (IEO) Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS), Milan, Italy
- Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
- Matteo Pepa
- Division of Radiation Oncology, European Institute of Oncology (IEO) Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS), Milan, Italy
- Mattia Zaffaroni
- Division of Radiation Oncology, European Institute of Oncology (IEO) Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS), Milan, Italy
- Federica Bellerba
- Molecular and Pharmaco-Epidemiology Unit, Department of Experimental Oncology, European Institute of Oncology (IEO) Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS), Milan, Italy
- Riccardo Santamaria
- Division of Radiation Oncology, European Institute of Oncology (IEO) Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS), Milan, Italy
- Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
- Giulia Marvaso
- Division of Radiation Oncology, European Institute of Oncology (IEO) Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS), Milan, Italy
- Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
- Lars Johannes Isaksson
- Division of Radiation Oncology, European Institute of Oncology (IEO) Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS), Milan, Italy
- Sara Gandini
- Molecular and Pharmaco-Epidemiology Unit, Department of Experimental Oncology, European Institute of Oncology (IEO) Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS), Milan, Italy
- Anna Starzyńska
- Department of Oral Surgery, Medical University of Gdańsk, Gdańsk, Poland
- Maria Cristina Leonardi
- Division of Radiation Oncology, European Institute of Oncology (IEO) Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS), Milan, Italy
- Roberto Orecchia
- Scientific Directorate, European Institute of Oncology (IEO) Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS), Milan, Italy
- Daniela Alterio
- Division of Radiation Oncology, European Institute of Oncology (IEO) Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS), Milan, Italy
- Barbara Alicja Jereczek-Fossa
- Division of Radiation Oncology, European Institute of Oncology (IEO) Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS), Milan, Italy
- Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
19
Korte JC, Hardcastle N, Ng SP, Clark B, Kron T, Jackson P. Cascaded deep learning-based auto-segmentation for head and neck cancer patients: Organs at risk on T2-weighted magnetic resonance imaging. Med Phys 2021; 48:7757-7772. [PMID: 34676555 DOI: 10.1002/mp.15290]
Abstract
PURPOSE To investigate multiple deep learning methods for automated segmentation (auto-segmentation) of the parotid glands, submandibular glands, and level II and level III lymph nodes on magnetic resonance imaging (MRI). Outlining radiosensitive organs on images used to assist radiation therapy (radiotherapy) of patients with head and neck cancer (HNC) is a time-consuming task, in which variability between observers may directly impact patient treatment outcomes. Auto-segmentation on computed tomography imaging has been shown to result in significant time reductions and more consistent outlines of the organs at risk. METHODS Three convolutional neural network (CNN)-based auto-segmentation architectures were developed using manual segmentations and T2-weighted MRI images provided by the American Association of Physicists in Medicine (AAPM) radiotherapy MRI auto-contouring (RT-MAC) challenge dataset (n = 31). Auto-segmentation performance was evaluated with segmentation similarity and surface distance metrics on the RT-MAC dataset with institutional manual segmentations (n = 10). The generalizability of the auto-segmentation methods was assessed on an institutional MRI dataset (n = 10). RESULTS Auto-segmentation performance on the RT-MAC images with institutional segmentations was higher than that of previously reported MRI methods for the parotid glands (Dice: 0.860 ± 0.067, mean surface distance [MSD]: 1.33 ± 0.40 mm), and we report the first MRI performance for the submandibular glands (Dice: 0.830 ± 0.032, MSD: 1.16 ± 0.47 mm). We demonstrate that high-resolution auto-segmentations with improved geometric accuracy can be generated for the parotid and submandibular glands by cascading a localizer CNN and a cropped high-resolution CNN. Improved MSDs were observed between automatic and manual segmentations of the submandibular glands when a low-resolution auto-segmentation was used as prior knowledge in the second-stage CNN. Reduced auto-segmentation performance was observed on our institutional MRI dataset when models were trained on external RT-MAC images; only the parotid gland auto-segmentations were considered clinically feasible for manual correction (Dice: 0.775 ± 0.105, MSD: 1.20 ± 0.60 mm). CONCLUSIONS This work demonstrates that CNNs are a suitable method to auto-segment the parotid and submandibular glands on MRI images of patients with HNC, and that cascaded CNNs can generate high-resolution segmentations with improved geometric accuracy. Deep learning methods may be suitable for auto-segmentation of the parotid glands on T2-weighted MRI images from different scanners, but further work is required to improve the performance and generalizability of these methods for auto-segmentation of the submandibular glands and lymph nodes.
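The cascade described here pairs a low-resolution localizer with a cropped high-resolution network, optionally feeding the coarse mask to the second stage as prior knowledge. A minimal PyTorch sketch of that flow follows; the tiny stand-in networks, crop logic, and threshold are assumptions for illustration, not the authors' architecture.

```python
# Minimal sketch of a cascaded localizer -> cropped high-resolution CNN
# (illustrative stand-ins, not the authors' exact networks).
import torch
import torch.nn as nn
import torch.nn.functional as F

def tiny_cnn(in_ch):                       # placeholder 3D network
    return nn.Sequential(nn.Conv3d(in_ch, 8, 3, padding=1), nn.ReLU(),
                         nn.Conv3d(8, 1, 3, padding=1))

localizer = tiny_cnn(in_ch=1)              # stage 1: coarse, low resolution
refiner   = tiny_cnn(in_ch=2)              # stage 2: image crop + coarse prior

def cascade(image):                        # image: (1, 1, D, H, W)
    low = F.interpolate(image, scale_factor=0.5, mode="trilinear",
                        align_corners=False)
    coarse = torch.sigmoid(localizer(low))                  # coarse mask
    full = F.interpolate(coarse, size=image.shape[2:],
                         mode="trilinear", align_corners=False)
    idx = (full[0, 0] > 0.5).nonzero()                      # bounding box
    if idx.numel() == 0:
        return full                                         # nothing localized
    lo, hi = idx.min(0).values, idx.max(0).values + 1
    crop = image[..., lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    prior = full[..., lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    fine = torch.sigmoid(refiner(torch.cat([crop, prior], dim=1)))
    out = torch.zeros_like(full)
    out[..., lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = fine  # paste back
    return out

print(cascade(torch.rand(1, 1, 32, 32, 32)).shape)
```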
Collapse
Affiliation(s)
- James C Korte
- Department of Physical Science, Peter MacCallum Cancer Centre, Melbourne, Victoria, Australia.,Department of Biomedical Engineering, University of Melbourne, Melbourne, Victoria, Australia
| | - Nicholas Hardcastle
- Department of Physical Science, Peter MacCallum Cancer Centre, Melbourne, Victoria, Australia.,Centre for Medical Radiation Physics, University of Wollongong, Wollongong, New South Wales, Australia.,Sir Peter MacCallum Department of Oncology, University of Melbourne, Melbourne, Victoria, Australia
| | - Sweet Ping Ng
- Department of Radiation Oncology, Peter MacCallum Cancer Centre, Melbourne, Victoria, Australia.,Department of Radiation Oncology, Olivia Newton-John Cancer and Wellness Centre, Austin Health, Melbourne, Victoria, Australia
| | - Brett Clark
- Department of Physical Science, Peter MacCallum Cancer Centre, Melbourne, Victoria, Australia.,Department of Biomedical Engineering, University of Melbourne, Melbourne, Victoria, Australia
| | - Tomas Kron
- Department of Physical Science, Peter MacCallum Cancer Centre, Melbourne, Victoria, Australia.,Sir Peter MacCallum Department of Oncology, University of Melbourne, Melbourne, Victoria, Australia
| | - Price Jackson
- Department of Physical Science, Peter MacCallum Cancer Centre, Melbourne, Victoria, Australia.,Sir Peter MacCallum Department of Oncology, University of Melbourne, Melbourne, Victoria, Australia
| |
Collapse
|
20
|
Nikolov S, Blackwell S, Zverovitch A, Mendes R, Livne M, De Fauw J, Patel Y, Meyer C, Askham H, Romera-Paredes B, Kelly C, Karthikesalingam A, Chu C, Carnell D, Boon C, D'Souza D, Moinuddin SA, Garie B, McQuinlan Y, Ireland S, Hampton K, Fuller K, Montgomery H, Rees G, Suleyman M, Back T, Hughes CO, Ledsam JR, Ronneberger O. Clinically Applicable Segmentation of Head and Neck Anatomy for Radiotherapy: Deep Learning Algorithm Development and Validation Study. J Med Internet Res 2021; 23:e26151. [PMID: 34255661 PMCID: PMC8314151 DOI: 10.2196/26151] [Citation(s) in RCA: 118] [Impact Index Per Article: 39.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2020] [Revised: 02/10/2021] [Accepted: 04/30/2021] [Indexed: 12/16/2022] Open
Abstract
BACKGROUND Over half a million individuals are diagnosed with head and neck cancer each year globally. Radiotherapy is an important curative treatment for this disease, but it requires time-consuming manual delineation of radiosensitive organs at risk. This planning process can delay treatment while also introducing interoperator variability, resulting in downstream radiation dose differences. Although auto-segmentation algorithms offer a potentially time-saving solution, the challenges of defining, quantifying, and achieving expert performance remain. OBJECTIVE Adopting a deep learning approach, we aim to demonstrate a 3D U-Net architecture that achieves expert-level performance in delineating 21 distinct head and neck organs at risk commonly segmented in clinical practice. METHODS The model was trained on a data set of 663 deidentified computed tomography scans acquired in routine clinical practice, with both segmentations taken from clinical practice and segmentations created by experienced radiographers as part of this research, all in accordance with consensus organ-at-risk definitions. RESULTS We demonstrated the model's clinical applicability by assessing its performance on a test set of 21 computed tomography scans from clinical practice, each with 21 organs at risk segmented by 2 independent experts. We also introduced the surface Dice similarity coefficient, a new metric for comparing organ delineations, which quantifies the deviation between organ-at-risk surface contours rather than volumes and better reflects the clinical task of correcting errors in automated organ segmentations. The model's generalizability was then demonstrated on 2 distinct open-source data sets representing centers and countries different from those of the training data. CONCLUSIONS Deep learning is an effective and clinically applicable technique for the segmentation of the head and neck anatomy for radiotherapy. With appropriate validation studies and regulatory approvals, this system could improve the efficiency, consistency, and safety of radiotherapy pathways.
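The surface Dice similarity coefficient scores the fraction of each contour's surface lying within a tolerance of the other contour's surface, instead of measuring volume overlap. An official open-source implementation exists (DeepMind's surface-distance package); the sketch below is only a simplified voxel-grid version to convey the idea.

```python
# Simplified voxel-based surface Dice at tolerance tau (mm); a released
# implementation exists in DeepMind's surface-distance package, so this
# sketch only illustrates the idea on a regular grid.
import numpy as np
from scipy import ndimage

def surface(mask):
    """Surface voxels = mask minus its erosion."""
    return mask & ~ndimage.binary_erosion(mask)

def surface_dice(a, b, tau, spacing=(1.0, 1.0, 1.0)):
    sa, sb = surface(a), surface(b)
    # Distance from every voxel to the nearest surface voxel of the other mask
    da = ndimage.distance_transform_edt(~sb, sampling=spacing)
    db = ndimage.distance_transform_edt(~sa, sampling=spacing)
    overlap = (da[sa] <= tau).sum() + (db[sb] <= tau).sum()
    return overlap / (sa.sum() + sb.sum())

a = np.zeros((32, 32, 32), bool); a[8:20, 8:20, 8:20] = True
b = np.zeros_like(a);             b[9:21, 8:20, 8:20] = True
print(surface_dice(a, b, tau=2.0))   # close to 1.0: surfaces agree within 2 mm
```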
Collapse
Affiliation(s)
| | | | | | - Ruheena Mendes
- University College London Hospitals NHS Foundation Trust, London, United Kingdom
| | | | | | | | | | | | | | | | | | | | - Dawn Carnell
- University College London Hospitals NHS Foundation Trust, London, United Kingdom
| | - Cheng Boon
- Clatterbridge Cancer Centre NHS Foundation Trust, Liverpool, United Kingdom
| | - Derek D'Souza
- University College London Hospitals NHS Foundation Trust, London, United Kingdom
| | - Syed Ali Moinuddin
- University College London Hospitals NHS Foundation Trust, London, United Kingdom
| | | | | | | | | | | | | | - Geraint Rees
- University College London, London, United Kingdom
| | | | | | | | | | | |
Collapse
|
21
|
Liu X, Li KW, Yang R, Geng LS. Review of Deep Learning Based Automatic Segmentation for Lung Cancer Radiotherapy. Front Oncol 2021; 11:717039. [PMID: 34336704 PMCID: PMC8323481 DOI: 10.3389/fonc.2021.717039] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2021] [Accepted: 06/21/2021] [Indexed: 12/14/2022] Open
Abstract
Lung cancer is the leading cause of cancer-related mortality in both males and females. Radiation therapy (RT) is one of the primary treatment modalities for lung cancer. While delivering the prescribed dose to tumor targets, it is essential to spare the tissues near the targets, the so-called organs-at-risk (OARs). Optimal RT planning benefits from accurate segmentation of the gross tumor volume and surrounding OARs. Manual segmentation is a time-consuming and tedious task for radiation oncologists, so it is crucial to develop automatic image segmentation to relieve them of this contouring work. Currently, the atlas-based automatic segmentation technique is commonly used in clinical routine; however, this technique depends heavily on the similarity between the atlas and the image being segmented. With significant advances in computer vision, deep learning, as a part of artificial intelligence, is attracting increasing attention in automatic medical image segmentation. In this article, we reviewed deep learning-based automatic segmentation techniques related to lung cancer and compared them with the atlas-based technique. At present, auto-segmentation of relatively large OARs such as the lung and heart outperforms that of small OARs such as the esophagus. The average Dice similarity coefficients (DSCs) of the lung, heart, and liver are over 0.9, and the best DSC of the spinal cord reaches 0.9, whereas the DSC of the esophagus ranges between 0.71 and 0.87 with inconsistent performance. For the gross tumor volume, the average DSC is below 0.8. Although deep learning-based automatic segmentation shows significant superiority over manual segmentation in many aspects, various issues still need to be solved. We discussed potential issues in deep learning-based automatic segmentation, including low contrast, dataset size, consensus guidelines, and network design, as well as clinical limitations and future research directions.
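The DSC figures quoted throughout this review follow the standard volumetric definition, DSC = 2|A ∩ B| / (|A| + |B|) for binary masks A and B; a minimal reference implementation:

```python
# Volumetric Dice similarity coefficient: DSC = 2|A ∩ B| / (|A| + |B|).
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * (a & b).sum() / denom if denom else 1.0

a = np.zeros((64, 64), bool); a[16:48, 16:48] = True
b = np.zeros_like(a);         b[20:52, 16:48] = True
print(f"DSC = {dice(a, b):.3f}")   # 0.875 for this synthetic overlap
```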
Collapse
Affiliation(s)
- Xi Liu
- School of Physics, Beihang University, Beijing, China
| | - Kai-Wen Li
- School of Physics, Beihang University, Beijing, China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Medicine and Engineering, Key Laboratory of Big Data-Based Precision Medicine, Ministry of Industry and Information Technology, Beihang University, Beijing, China
| | - Ruijie Yang
- Department of Radiation Oncology, Peking University Third Hospital, Beijing, China
| | - Li-Sheng Geng
- School of Physics, Beihang University, Beijing, China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Medicine and Engineering, Key Laboratory of Big Data-Based Precision Medicine, Ministry of Industry and Information Technology, Beihang University, Beijing, China
- Beijing Key Laboratory of Advanced Nuclear Materials and Physics, Beihang University, Beijing, China
- School of Physics and Microelectronics, Zhengzhou University, Zhengzhou, China
| |
Collapse
|
22
|
Qiu B, van der Wel H, Kraeima J, Glas HH, Guo J, Borra RJH, Witjes MJH, van Ooijen PMA. Automatic Segmentation of Mandible from Conventional Methods to Deep Learning-A Review. J Pers Med 2021; 11:629. [PMID: 34357096 PMCID: PMC8307673 DOI: 10.3390/jpm11070629] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2021] [Revised: 06/26/2021] [Accepted: 06/28/2021] [Indexed: 01/05/2023] Open
Abstract
Medical imaging techniques, such as (cone beam) computed tomography and magnetic resonance imaging, have proven to be a valuable component of oral and maxillofacial surgery (OMFS). Accurate segmentation of the mandible from head and neck (H&N) scans is an important step in building a personalized 3D digital mandible model for 3D printing and OMFS treatment planning. Segmented mandible structures are used to effectively visualize the mandible volume and to quantitatively evaluate particular mandible properties. However, mandible segmentation is challenging for both clinicians and researchers, owing to complex structures and high-attenuation materials, such as dental fillings or metal implants, that easily lead to high noise and strong artifacts during scanning. Moreover, the size and shape of the mandible vary considerably between individuals. Mandible segmentation is therefore a tedious and time-consuming task that requires adequate training to be performed properly. With the advancement of computer vision approaches, researchers have developed several algorithms to automatically segment the mandible over the last two decades. The objective of this review was to present the available fully and semi-automatic segmentation methods of the mandible published in the scientific literature. This review provides a clear account of these scientific advancements for clinicians and researchers in the field, to help develop novel automatic methods for clinical applications.
Collapse
Affiliation(s)
- Bingjiang Qiu
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands; (B.Q.); (H.v.d.W.); (J.K.); (H.H.G.); (M.J.H.W.)
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands;
- Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
| | - Hylke van der Wel
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands; (B.Q.); (H.v.d.W.); (J.K.); (H.H.G.); (M.J.H.W.)
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
| | - Joep Kraeima
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands; (B.Q.); (H.v.d.W.); (J.K.); (H.H.G.); (M.J.H.W.)
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
| | - Haye Hendrik Glas
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands; (B.Q.); (H.v.d.W.); (J.K.); (H.H.G.); (M.J.H.W.)
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
| | - Jiapan Guo
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands;
- Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
| | - Ronald J. H. Borra
- Medical Imaging Center (MIC), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands;
| | - Max Johannes Hendrikus Witjes
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands; (B.Q.); (H.v.d.W.); (J.K.); (H.H.G.); (M.J.H.W.)
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
| | - Peter M. A. van Ooijen
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands;
- Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
| |
Collapse
|
23
|
Fu Y, Lei Y, Wang T, Curran WJ, Liu T, Yang X. A review of deep learning based methods for medical image multi-organ segmentation. Phys Med 2021; 85:107-122. [PMID: 33992856 PMCID: PMC8217246 DOI: 10.1016/j.ejmp.2021.05.003] [Citation(s) in RCA: 77] [Impact Index Per Article: 25.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/27/2020] [Revised: 03/12/2021] [Accepted: 05/03/2021] [Indexed: 12/12/2022] Open
Abstract
Deep learning has revolutionized image processing and achieved state-of-the-art performance in many medical image segmentation tasks. Many deep learning-based methods have been published to segment different parts of the body for different medical applications, and it is necessary to summarize the current state of development of deep learning in medical image segmentation. In this paper, we provide a comprehensive review with a focus on multi-organ image segmentation, which is crucial for radiotherapy, where the tumor and organs-at-risk need to be contoured for treatment planning. We grouped the surveyed methods into two broad categories: 'pixel-wise classification' and 'end-to-end segmentation'. Each category was divided into subgroups according to network design. For each type, we listed the surveyed works, highlighted important contributions, and identified specific challenges. Following the detailed review, we discussed the achievements, shortcomings, and future potential of each category. To enable direct comparison, we listed the performance of the surveyed works that used thoracic and head-and-neck benchmark datasets.
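The review's two categories differ in what a single forward pass predicts: a label for one patch's center voxel versus a dense label map for the whole image. The toy PyTorch contrast below illustrates this distinction; the networks are illustrative stand-ins, not any surveyed architecture.

```python
# Schematic contrast between the two surveyed categories (toy networks).
import torch
import torch.nn as nn

# 'Pixel-wise classification': classify the CENTER voxel of each patch,
# so segmenting a volume needs one forward pass per voxel neighborhood.
patch_classifier = nn.Sequential(
    nn.Conv2d(1, 8, 3), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(8, 5))          # 5 organ classes
patch = torch.rand(1, 1, 25, 25)
print(patch_classifier(patch).shape)         # (1, 5): one label per patch

# 'End-to-end segmentation': a fully convolutional net maps the whole
# image to a dense label map in a single forward pass.
fcn = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(8, 5, 1))
image = torch.rand(1, 1, 256, 256)
print(fcn(image).shape)                      # (1, 5, 256, 256): per-pixel logits
```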
Collapse
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA.
| |
Collapse
|
24
|
Xu L, Hu J, Song Y, Bai S, Yi Z. Clinical target volume segmentation for stomach cancer by stochastic width deep neural network. Med Phys 2021; 48:1720-1730. [PMID: 33503270 DOI: 10.1002/mp.14733] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2020] [Revised: 12/17/2020] [Accepted: 01/11/2021] [Indexed: 02/05/2023] Open
Abstract
PURPOSE Precise segmentation of the clinical target volume (CTV) is key to stomach cancer radiotherapy. We proposed a novel stochastic-width deep neural network (SW-DNN) for better automatic contouring of the stomach CTV. METHODS The SW-DNN is an end-to-end approach whose core component is a novel SW mechanism that employs shortcut connections between the encoder and decoder in a random manner, making the width of the SW-DNN stochastically adjustable and yielding improved segmentation results. In total, 150 stomach cancer patient computed tomography (CT) cases with corresponding CTV labels were collected and used to train and evaluate the SW-DNN. Three common quantitative measures were used to evaluate segmentation accuracy: true positive volume fraction (TPVF), positive predictive value (PPV), and Dice similarity coefficient (DSC). RESULTS Clinical target volumes calculated by the SW-DNN had significant quantitative advantages over three state-of-the-art methods: its average DSC value was 2.1%, 2.8%, and 3.6% higher than those of the three methods, respectively. The average DSC, TPVF, and PPV values of the SW-DNN were 2.1%, 4.0%, and 0.3% higher than those of the corresponding constant-width DNN. CONCLUSIONS The stochastic-width deep neural network contoured the stomach cancer CTV accurately and efficiently, and it is a promising solution for clinical radiotherapy planning of stomach cancer.
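The abstract describes using encoder-decoder shortcut connections "in a random manner" without further detail; one plausible reading is a Bernoulli gate that keeps each skip connection with some probability during training, sketched below. The gating scheme, keep probability, and block layout are assumptions, not the paper's mechanism.

```python
# One plausible reading of the stochastic-width idea: each encoder-decoder
# skip connection is kept with probability p during training (hypothetical).
import torch
import torch.nn as nn

class GatedSkipBlock(nn.Module):
    def __init__(self, ch, p_keep=0.7):
        super().__init__()
        self.p_keep = p_keep
        self.decode = nn.Conv2d(ch * 2, ch, 3, padding=1)

    def forward(self, decoder_feat, skip_feat):
        # During training, randomly drop this shortcut (zero it out)
        if self.training and torch.rand(1).item() > self.p_keep:
            skip_feat = torch.zeros_like(skip_feat)
        return self.decode(torch.cat([decoder_feat, skip_feat], dim=1))

block = GatedSkipBlock(ch=16)
x = torch.rand(1, 16, 32, 32)
print(block(x, x).shape)   # (1, 16, 32, 32)
```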
Collapse
Affiliation(s)
- Lei Xu
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, 610065, P R China
| | - Junjie Hu
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, 610065, P R China
| | - Ying Song
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, 610065, P R China.,Department of Radiotherapy, West China Hospital, Sichuan University, Chengdu, 610065, P R China
| | - Sen Bai
- Department of Radiotherapy, West China Hospital, Sichuan University, Chengdu, 610065, P R China
| | - Zhang Yi
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, 610065, P R China
| |
Collapse
|
25
|
Cao R, Pei X, Ge N, Zheng C. Clinical Target Volume Auto-Segmentation of Esophageal Cancer for Radiotherapy After Radical Surgery Based on Deep Learning. Technol Cancer Res Treat 2021; 20:15330338211034284. [PMID: 34387104 PMCID: PMC8366129 DOI: 10.1177/15330338211034284] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022] Open
Abstract
Radiotherapy plays an important role in controlling local recurrence of esophageal cancer after radical surgery. Segmentation of the clinical target volume is a key step in radiotherapy treatment planning, but it is time-consuming and operator-dependent. This paper introduces a deep dilated convolutional U-network to achieve fast and accurate clinical target volume auto-segmentation for esophageal cancer after radical surgery. The deep dilated convolutional U-network, which integrates the advantages of dilated convolution and the U-network, is an end-to-end architecture that enables rapid training and testing. A dilated convolution module for extracting multiscale context features, containing the original information on fine texture and boundaries, is integrated into the U-network architecture to avoid the information loss caused by down-sampling and to improve segmentation accuracy. In addition, batch normalization is added for fast and stable convergence. In the present study, the training and validation loss stabilized after 40 training epochs. The model segmented the clinical target volume with an overall mean Dice similarity coefficient of 86.7% and a 95% Hausdorff distance of 37.4 mm, indicating reasonable volume overlap between the auto-segmented and manual contours. The mean Cohen kappa coefficient was 0.863, indicating that the model was robust. Comparisons with the U-network and attention U-network showed that the deep dilated convolutional U-network performed best overall on the Dice similarity coefficient, 95% Hausdorff distance, and Cohen kappa coefficient. The test time for segmentation of the clinical target volume was approximately 25 seconds per patient. This deep dilated convolutional U-network could be applied in the clinical setting to save delineation time and improve contouring consistency.
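A dilated convolution enlarges the receptive field without down-sampling, which is how the module described here can gather multiscale context while preserving fine texture and boundary information. Below is a generic sketch of such a block with batch normalization; the dilation rates and channel counts are assumptions, not the paper's configuration.

```python
# Generic dilated-convolution module with batch normalization; rates and
# channel counts are illustrative assumptions.
import torch
import torch.nn as nn

class DilatedBlock(nn.Module):
    """Parallel dilated convolutions capture multiscale context without
    down-sampling, so spatial resolution (and fine boundaries) is kept."""
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r),
                nn.BatchNorm2d(out_ch), nn.ReLU())
            for r in rates)
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

x = torch.rand(1, 32, 64, 64)
print(DilatedBlock(32, 32)(x).shape)   # (1, 32, 64, 64): resolution preserved
```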
Collapse
Affiliation(s)
- Ruifen Cao
- College of Computer Science and Technology, 12487Anhui University, Hefei, Anhui, China
- Engineering Research Center of Big Data Application in Private Health Medicine, Fujian Province University, Putian, Fujian, China
| | - Xi Pei
- 12652University of Science and Technology of China, Hefei, Anhui, China
| | - Ning Ge
- The First Affiliated Hospital of USTC West District, 117556Anhui Provincial Cancer Hospital, Hefei, Anhui, China
| | - Chunhou Zheng
- College of Computer Science and Technology, 12487Anhui University, Hefei, Anhui, China
- Engineering Research Center of Big Data Application in Private Health Medicine, Fujian Province University, Putian, Fujian, China
| |
Collapse
|
26
|
Aliotta E, Nourzadeh H, Choi W, Leandro Alves VG, Siebers JV. An Automated Workflow to Improve Efficiency in Radiation Therapy Treatment Planning by Prioritizing Organs at Risk. Adv Radiat Oncol 2020; 5:1324-1333. [PMID: 33305095 PMCID: PMC7718498 DOI: 10.1016/j.adro.2020.06.012] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2020] [Revised: 04/15/2020] [Accepted: 06/16/2020] [Indexed: 11/28/2022] Open
Abstract
PURPOSE Manual delineation (MD) of organs at risk (OARs) is time- and labor-intensive. Auto-delineation (AD) can reduce the need for MD, but because current algorithms are imperfect, manual review and modification are still typically required. Recognizing that many OARs are sufficiently far from important dose levels that they pose no realistic risk, we hypothesize that some OARs can be excluded from MD and manual review with no clinical effect. The purpose of this study was to develop a method that automatically identifies these OARs and enables more efficient workflows that incorporate AD without degrading clinical quality. METHODS AND MATERIALS Preliminary dose map estimates were generated for n = 10 patients with head and neck cancers using only prescription and target-volume information. Conservative estimates of clinical OAR objectives were computed using AD structures with spatial expansion buffers to account for potential delineation uncertainties. OARs with estimated dose metrics below clinical tolerances were deemed low priority and excluded from MD and/or manual review. Final plans were then optimized using high-priority MD OARs and low-priority AD OARs and compared with reference plans generated using all MD OARs. Multiple spatial buffers were used to accommodate different potential delineation uncertainties. RESULTS Sixty-seven of 201 total OARs were identified as low priority by the proposed methodology, permitting a 33% reduction in structures requiring manual delineation or review. Plans optimized using low-priority AD OARs without review or modification met all planning objectives that were met when all MD OARs were used, indicating clinical equivalence. CONCLUSIONS Prioritizing OARs using estimated dose distributions allowed a substantial reduction in required MD and review without affecting clinically relevant dosimetry.
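The core of this workflow is a simple triage rule: if a conservatively estimated dose metric (already padded by the spatial expansion buffer) stays safely below the clinical tolerance, the OAR is flagged low-priority. A hedged sketch of that logic follows; the tolerance table, dose estimates, and extra margin are illustrative values, not clinical ones.

```python
# Sketch of the prioritization rule: OARs whose conservatively estimated
# dose stays below tolerance are skipped for manual delineation/review.
# All tolerances and estimates below are illustrative, not clinical values.

TOLERANCE_GY = {"spinal_cord": 45.0, "brainstem": 54.0, "parotid_L": 26.0}

def triage(estimated_max_dose_gy: dict, margin_gy: float = 2.0) -> dict:
    """Return 'low' for OARs safely below tolerance, 'high' otherwise."""
    priority = {}
    for oar, dose in estimated_max_dose_gy.items():
        # The conservative estimate already includes the spatial buffer;
        # margin_gy guards against residual dose-estimation error.
        safe = dose + margin_gy < TOLERANCE_GY[oar]
        priority[oar] = "low" if safe else "high"
    return priority

print(triage({"spinal_cord": 30.0, "brainstem": 53.0, "parotid_L": 10.0}))
# {'spinal_cord': 'low', 'brainstem': 'high', 'parotid_L': 'low'}
```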
Collapse
Affiliation(s)
- Eric Aliotta
- Department of Radiation Oncology, University of Virginia, Charlottesville, Virginia
| | - Hamidreza Nourzadeh
- Department of Radiation Oncology, University of Virginia, Charlottesville, Virginia
| | - Wookjin Choi
- Department of Radiation Oncology, University of Virginia, Charlottesville, Virginia
| | | | - Jeffrey V. Siebers
- Department of Radiation Oncology, University of Virginia, Charlottesville, Virginia
| |
Collapse
|
27
|
Cao M, Stiehl B, Yu VY, Sheng K, Kishan AU, Chin RK, Yang Y, Ruan D. Analysis of Geometric Performance and Dosimetric Impact of Using Automatic Contour Segmentation for Radiotherapy Planning. Front Oncol 2020; 10:1762. [PMID: 33102206 PMCID: PMC7546883 DOI: 10.3389/fonc.2020.01762] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2020] [Accepted: 08/06/2020] [Indexed: 11/13/2022] Open
Abstract
Purpose: To analyze geometric discrepancy and dosimetric impact when using contours generated by auto-segmentation (AS) in place of manually segmented (MS) clinical contours. Methods: A 48-subject prostate atlas was created, and another 15 patients were used for testing. Contours were generated using a commercial atlas-based segmentation tool and compared with their clinical MS counterparts. Geometric correlation was evaluated using the Dice similarity coefficient (DSC) and Hausdorff distance (HD). Dosimetric relevance was evaluated for a subset of patients by assessing the DVH differences derived by optimizing plan dose using the AS and MS contours, respectively, and evaluating with respect to each. A paired t-test was employed for statistical comparison, and the discrepancy in plan quality with respect to clinical dosimetric endpoints was evaluated. The analysis was repeated for head/neck (HN) with a 31-subject atlas and 15 test cases. Results: Dice agreement between AS and MS differed significantly across structures: from 0.92 (left) and 0.91 (right) for the femoral heads to 0.38 for the seminal vesicles in the prostate cohort, and from 0.98 for the brain to 0.36 for the chiasm in the HN group. Despite the geometric disagreement, the paired t-tests showed no statistical evidence of systematic differences in dosimetric plan quality between the AS and MS approaches for the prostate cohort. In HN cases, statistically significant differences in dosimetric endpoints were observed in structures with small volumes or elongated shapes, such as the cord (p = 0.01) and esophagus (p = 0.04). The largest absolute dose difference, 11 Gy, was seen in the mean pharynx dose. Conclusion: Varying AS performance among structures suggests a differential approach: apply AS to a subset of structures and focus MS on the rest. The discrepancy between geometric and dosimetric-endpoint-driven evaluation also indicates the clinical utility of AS contours in optimization and plan-quality evaluation despite suboptimal geometric accuracy.
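The Hausdorff-type distances used for geometric evaluation here and in neighboring studies derive from the symmetric surface-to-surface distances: the maximum gives the classic HD, and the 95th percentile gives the more outlier-robust HD95. A voxel-grid sketch:

```python
# Hausdorff distances from symmetric surface-to-surface distances:
# max -> classic HD; 95th percentile -> HD95 (robust to outliers).
import numpy as np
from scipy import ndimage

def surface_distances(a, b, spacing=(1.0, 1.0, 1.0)):
    surf = lambda m: m & ~ndimage.binary_erosion(m)   # surface voxels
    sa, sb = surf(a), surf(b)
    d_to_b = ndimage.distance_transform_edt(~sb, sampling=spacing)
    d_to_a = ndimage.distance_transform_edt(~sa, sampling=spacing)
    return np.concatenate([d_to_b[sa], d_to_a[sb]])

a = np.zeros((32, 32, 32), bool); a[8:20, 8:20, 8:20] = True
b = np.zeros_like(a);             b[10:22, 8:20, 8:20] = True
d = surface_distances(a, b)
print(f"HD = {d.max():.1f}, HD95 = {np.percentile(d, 95):.1f} (voxel units)")
```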
Collapse
Affiliation(s)
- Minsong Cao
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, United States
| | - Bradley Stiehl
- Physics & Biology in Medicine Graduate Program, University of California, Los Angeles, Los Angeles, CA, United States
| | - Victoria Y Yu
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, United States
| | - Ke Sheng
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, United States
| | - Amar U Kishan
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, United States
| | - Robert K Chin
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, United States
| | - Yingli Yang
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, United States
| | - Dan Ruan
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, United States
| |
Collapse
|
28
|
Gao Y, Huang R, Yang Y, Zhang J, Shao K, Tao C, Chen Y, Metaxas DN, Li H, Chen M. FocusNetv2: Imbalanced large and small organ segmentation with adversarial shape constraint for head and neck CT images. Med Image Anal 2020; 67:101831. [PMID: 33129144 DOI: 10.1016/j.media.2020.101831] [Citation(s) in RCA: 30] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2020] [Revised: 08/13/2020] [Accepted: 08/31/2020] [Indexed: 01/28/2023]
Abstract
Radiotherapy is a treatment in which radiation is used to eliminate cancer cells, and the delineation of organs-at-risk (OARs) is a vital step in radiotherapy treatment planning to avoid damage to healthy organs. For nasopharyngeal cancer, more than 20 OARs need to be precisely segmented in advance. The challenge of this task lies in the complex anatomical structure, low-contrast organ contours, and the extreme size imbalance between large and small organs; common segmentation methods that treat them equally generally lead to inaccurate small-organ labeling. We propose a novel two-stage deep neural network, FocusNetv2, that solves this challenging problem by automatically locating, ROI-pooling, and segmenting small organs with specifically designed small-organ localization and segmentation sub-networks, while maintaining the accuracy of large-organ segmentation. In addition to our original FocusNet, we employ a novel adversarial shape constraint on small organs to ensure consistency between estimated small-organ shapes and organ shape prior knowledge. Our proposed framework is extensively tested on both a self-collected dataset of 1,164 CT scans and the MICCAI Head and Neck Auto Segmentation Challenge 2015 dataset, showing superior performance compared with state-of-the-art head and neck OAR segmentation methods.
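An adversarial shape constraint trains a discriminator to tell manual masks from predicted ones, so gradients from fooling it push predictions toward plausible organ shapes. The PyTorch sketch below shows both loss sides on toy 2D masks; the discriminator design and loss form are illustrative assumptions, not the FocusNetv2 implementation.

```python
# Illustrative adversarial shape constraint on small-organ masks
# (a stand-in sketch, not the FocusNetv2 implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

disc = nn.Sequential(                    # shape discriminator on 2D masks
    nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))

pred_mask = torch.rand(4, 1, 32, 32, requires_grad=True)  # from segmenter
true_mask = (torch.rand(4, 1, 32, 32) > 0.5).float()      # manual labels

# Discriminator side: score manual masks as real, predictions as fake
d_loss = (F.binary_cross_entropy_with_logits(disc(true_mask), torch.ones(4, 1))
          + F.binary_cross_entropy_with_logits(disc(pred_mask.detach()),
                                               torch.zeros(4, 1)))

# Segmenter side: fool the discriminator (shape term added to Dice/CE loss)
adv_loss = F.binary_cross_entropy_with_logits(disc(pred_mask),
                                              torch.ones(4, 1))
adv_loss.backward()   # gradients flow back into the predicted masks
print(f"d_loss = {d_loss.item():.3f}, adversarial shape loss = {adv_loss.item():.3f}")
```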
Collapse
Affiliation(s)
- Yunhe Gao
- Cancer Hospital of University of the Chinese Academy of Sciences (Zhejiang Cancer Hospital), China; Department of Computer Science, Rutgers University, Piscataway, NJ, USA; Department of Electronic Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
| | | | - Yiwei Yang
- Cancer Hospital of University of the Chinese Academy of Sciences (Zhejiang Cancer Hospital), China
| | - Jie Zhang
- Cancer Hospital of University of the Chinese Academy of Sciences (Zhejiang Cancer Hospital), China
| | - Kainan Shao
- Cancer Hospital of University of the Chinese Academy of Sciences (Zhejiang Cancer Hospital), China
| | - Changjuan Tao
- Cancer Hospital of University of the Chinese Academy of Sciences (Zhejiang Cancer Hospital), China
| | - Yuanyuan Chen
- Cancer Hospital of University of the Chinese Academy of Sciences (Zhejiang Cancer Hospital), China
| | | | - Hongsheng Li
- Department of Electronic Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China.
| | - Ming Chen
- Cancer Hospital of University of the Chinese Academy of Sciences (Zhejiang Cancer Hospital), China.
| |
Collapse
|
29
|
Holistic multitask regression network for multiapplication shape regression segmentation. Med Image Anal 2020; 65:101783. [DOI: 10.1016/j.media.2020.101783] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2019] [Revised: 05/31/2020] [Accepted: 07/09/2020] [Indexed: 11/23/2022]
|
30
|
Sultana S, Robinson A, Song DY, Lee J. Automatic multi-organ segmentation in computed tomography images using hierarchical convolutional neural network. JOURNAL OF MEDICAL IMAGING (BELLINGHAM, WASH.) 2020; 7:055001. [PMID: 33102622 DOI: 10.1117/1.jmi.7.5.055001] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Received: 04/13/2020] [Accepted: 09/28/2020] [Indexed: 01/17/2023]
Abstract
Purpose: Accurate segmentation of treatment planning computed tomography (CT) images is important for radiation therapy (RT) planning, but low soft-tissue contrast in CT makes the segmentation task challenging. We propose a two-step hierarchical convolutional neural network (CNN) segmentation strategy to automatically segment multiple organs from CT. Approach: The first step generates a coarse segmentation from which organ-specific regions of interest (ROIs) are produced; the second step produces a detailed segmentation of each organ. The ROIs are generated using a UNet, which automatically identifies the area of each organ and improves computational efficiency by eliminating irrelevant background information. For the fine segmentation step, we combined a UNet with a generative adversarial network: the generator is a UNet trained to segment organ structures, and the discriminator is a fully convolutional network that distinguishes whether a segmentation is real or generator-predicted, thus improving segmentation accuracy. We validated the proposed method on male pelvic and head and neck (H&N) CTs used for RT planning of prostate and H&N cancer, respectively. For the pelvic structures, the network was trained to segment the prostate, bladder, and rectum; for H&N, it was trained to segment the parotid glands (PG) and submandibular glands (SMG). Results: The trained segmentation networks were tested on 15 pelvic and 20 H&N independent datasets; the H&N network was also tested on a public-domain dataset (N = 38) and showed similar performance. The average Dice similarity coefficients (mean ± SD) of the pelvic structures are 0.91 ± 0.05 (prostate), 0.95 ± 0.06 (bladder), and 0.90 ± 0.09 (rectum), and those of the H&N structures are 0.87 ± 0.04 (PG) and 0.86 ± 0.05 (SMG). Segmentation of each CT takes < 10 s on average. Conclusions: Experimental results demonstrate that the proposed method produces fast, accurate, and reproducible segmentation of multiple organs of different sizes and shapes, showing its potential applicability to different disease sites.
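The efficiency gain of the first step comes from reducing each organ's search space to a padded bounding box around its coarse segmentation. A minimal sketch of that ROI extraction follows; the label values, organ names, and padding are hypothetical.

```python
# Sketch of the ROI step: derive organ-specific bounding boxes from a
# coarse multi-organ label map so the fine network sees only the relevant
# sub-volume (labels and padding are illustrative).
import numpy as np
from scipy import ndimage

LABELS = {1: "prostate", 2: "bladder", 3: "rectum"}

def organ_rois(coarse_labels: np.ndarray, pad: int = 8) -> dict:
    rois = {}
    for value, name in LABELS.items():
        slices = ndimage.find_objects(coarse_labels == value)
        if not slices or slices[0] is None:
            continue                         # organ absent from coarse map
        rois[name] = tuple(                  # padded, clipped bounding box
            slice(max(s.start - pad, 0), min(s.stop + pad, dim))
            for s, dim in zip(slices[0], coarse_labels.shape))
    return rois

coarse = np.zeros((64, 64, 64), np.uint8)
coarse[20:30, 25:35, 22:32] = 1              # toy prostate blob
print(organ_rois(coarse)["prostate"])
```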
Collapse
Affiliation(s)
- Sharmin Sultana
- Johns Hopkins University, Department of Radiation Oncology and Molecular Radiation Sciences, Baltimore, Maryland, United States
| | - Adam Robinson
- Johns Hopkins University, Department of Radiation Oncology and Molecular Radiation Sciences, Baltimore, Maryland, United States
| | - Daniel Y Song
- Johns Hopkins University, Department of Radiation Oncology and Molecular Radiation Sciences, Baltimore, Maryland, United States
| | - Junghoon Lee
- Johns Hopkins University, Department of Radiation Oncology and Molecular Radiation Sciences, Baltimore, Maryland, United States
| |
Collapse
|
31
|
Liang S, Thung KH, Nie D, Zhang Y, Shen D. Multi-View Spatial Aggregation Framework for Joint Localization and Segmentation of Organs at Risk in Head and Neck CT Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:2794-2805. [PMID: 32091997 DOI: 10.1109/tmi.2020.2975853] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Accurate segmentation of organs at risk (OARs) from head and neck (H&N) CT images is crucial for effective H&N cancer radiotherapy. However, existing deep learning methods are often not trained in an end-to-end fashion; i.e., they independently predetermine the regions of target organs before organ segmentation, causing limited information sharing between related tasks and thus leading to suboptimal segmentation results. Furthermore, when a conventional segmentation network is used to segment all the OARs simultaneously, the results often favor big OARs over small ones; existing methods therefore often train a specific model for each OAR, ignoring the correlation between different segmentation tasks. To address these issues, we propose a new multi-view spatial aggregation framework for joint localization and segmentation of multiple OARs in H&N CT images. The core of our framework is a region-of-interest (ROI)-based fine-grained representation convolutional neural network (CNN), which generates multi-OAR probability maps from each 2D view (i.e., axial, coronal, and sagittal) of the CT images. Specifically, our ROI-based fine-grained representation CNN (1) unifies the OAR localization and segmentation tasks and trains them in an end-to-end fashion, and (2) improves the segmentation of various-sized OARs via a novel ROI-based fine-grained representation. Our multi-view spatial aggregation framework then spatially aggregates and assembles the generated multi-view multi-OAR probability maps to segment all the OARs simultaneously. We evaluate our framework using two sets of H&N CT images and achieve competitive and highly robust segmentation performance for OARs of various sizes.
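Multi-view aggregation runs a 2D network over slices along each anatomical axis and fuses the three reassembled 3D probability maps. Below is a hedged sketch of the simplest fusion, a plain average; the toy network and averaging rule are assumptions, not the paper's aggregation module.

```python
# Sketch of multi-view aggregation: a 2D net produces per-slice probability
# maps along the axial, coronal, and sagittal axes; the three reassembled
# 3D maps are averaged (toy network, illustrative fusion rule).
import torch
import torch.nn as nn

net2d = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(8, 1, 1), nn.Sigmoid())

def predict_along(volume, axis):
    """Run the 2D net over slices taken along `axis`, reassemble a volume."""
    vol = volume.movedim(axis, 0)                   # slice axis first
    probs = net2d(vol.unsqueeze(1)).squeeze(1)      # (slices, H, W)
    return probs.movedim(0, axis)                   # restore orientation

vol = torch.rand(32, 32, 32)
fused = torch.stack([predict_along(vol, ax) for ax in range(3)]).mean(0)
print(fused.shape)   # (32, 32, 32): aggregated multi-view probability map
```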
Collapse
|
32
|
Vrtovec T, Močnik D, Strojan P, Pernuš F, Ibragimov B. Auto-segmentation of organs at risk for head and neck radiotherapy planning: From atlas-based to deep learning methods. Med Phys 2020; 47:e929-e950. [PMID: 32510603 DOI: 10.1002/mp.14320] [Citation(s) in RCA: 80] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2019] [Revised: 05/27/2020] [Accepted: 05/29/2020] [Indexed: 02/06/2023] Open
Abstract
Radiotherapy (RT) is one of the basic treatment modalities for cancer of the head and neck (H&N), and it requires a precise spatial description of the target volumes and organs at risk (OARs) to deliver a highly conformal radiation dose to the tumor cells while sparing healthy tissues. For this purpose, target volumes and OARs have to be delineated and segmented from medical images. As manual delineation is a tedious and time-consuming task subject to intra/interobserver variability, computerized auto-segmentation has been developed as an alternative. The field of medical imaging and RT planning has experienced increased interest in the past decade, with new emerging trends that shifted H&N OAR auto-segmentation from atlas-based to deep learning-based approaches. In this review, we systematically analyzed 78 relevant publications on auto-segmentation of OARs in the H&N region from 2008 to date, and provide critical discussions and recommendations from various perspectives: image modality - both computed tomography and magnetic resonance imaging are being exploited, but the potential of the latter should be explored more in the future; OARs - the spinal cord, brainstem, and major salivary glands are the most studied OARs, but additional experiments should be conducted for several less studied soft-tissue structures; image databases - several image databases with corresponding ground truth are currently available for methodology evaluation, but should be augmented with data from multiple observers and multiple institutions; methodology - current methods have shifted from atlas-based to deep learning auto-segmentation, which is expected to become even more sophisticated; ground truth - delineation guidelines should be followed, and participation of multiple experts from multiple institutions is recommended; performance metrics - the Dice coefficient, as the standard volumetric overlap metric, should be accompanied by at least one distance metric and combined with clinical acceptability scores and risk assessments; segmentation performance - the best performing methods achieve clinically acceptable auto-segmentation for several OARs; however, the dosimetric impact should also be studied to provide clinically relevant endpoints for RT planning.
Collapse
Affiliation(s)
- Tomaž Vrtovec
- Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia
| | - Domen Močnik
- Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia
| | - Primož Strojan
- Institute of Oncology Ljubljana, Zaloška cesta 2, Ljubljana, SI-1000, Slovenia
| | - Franjo Pernuš
- Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia
| | - Bulat Ibragimov
- Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia.,Department of Computer Science, University of Copenhagen, Universitetsparken 1, Copenhagen, DK-2100, Denmark
| |
Collapse
|
33
|
Chen W, Li Y, Dyer BA, Feng X, Rao S, Benedict SH, Chen Q, Rong Y. Deep learning vs. atlas-based models for fast auto-segmentation of the masticatory muscles on head and neck CT images. Radiat Oncol 2020; 15:176. [PMID: 32690103 PMCID: PMC7372849 DOI: 10.1186/s13014-020-01617-0] [Citation(s) in RCA: 43] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2020] [Accepted: 07/13/2020] [Indexed: 12/16/2022] Open
Abstract
BACKGROUND Impaired function of the masticatory muscles leads to trismus. Routine delineation of these muscles during planning may improve dose tracking and facilitate dose reduction, thereby decreasing radiation-related trismus. This study aimed to compare a deep learning model with a commercial atlas-based model for fast auto-segmentation of the masticatory muscles on head and neck computed tomography (CT) images. MATERIAL AND METHODS Paired masseter (M), temporalis (T), and medial and lateral pterygoid (MP, LP) muscles were manually segmented on 56 CT images. CT images were randomly divided into training (n = 27) and validation (n = 29) cohorts. Two methods were used for automatic delineation of the masticatory muscles (MMs): deep learning auto-segmentation (DLAS) and atlas-based auto-segmentation (ABAS). The automatic algorithms were evaluated using the Dice similarity coefficient (DSC), recall, precision, Hausdorff distance (HD), HD95, and mean surface distance (MSD). A consolidated score was calculated by normalizing the metrics against interobserver variability and averaging over all patients. Differences in dose (∆Dose) to MMs between DLAS and ABAS segmentations were assessed, and a paired t-test was used to compare the geometric and dosimetric differences between the two methods. RESULTS DLAS outperformed ABAS in delineating all MMs (p < 0.05). The DLAS mean DSC for M, T, MP, and LP ranged from 0.83 ± 0.03 to 0.89 ± 0.02; the ABAS mean DSC ranged from 0.79 ± 0.05 to 0.85 ± 0.04. The mean values of recall, HD, HD95, and MSD also improved with DLAS. Interobserver variation revealed the highest variability in DSC and MSD for both T and MP, and the highest scores were achieved for T by both automatic algorithms. With few exceptions, the mean ∆D98%, ∆D95%, ∆D50%, and ∆D2% for all structures were below 10% for both DLAS and ABAS, with no detectable statistical difference (p > 0.05). DLAS-based contours matched the dose endpoints of the manually segmented contours more closely than ABAS-based contours did. CONCLUSIONS DLAS auto-segmentation of the masticatory muscles for head and neck radiotherapy had improved segmentation accuracy compared with ABAS, with no qualitative difference in dosimetric endpoints compared with manually segmented contours.
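The abstract describes a consolidated score built "by normalizing the metrics against interobserver variability and averaging over all patients" without giving the formula. One plausible form is sketched below; the normalization, orientation, and weighting are assumptions, and the paper's exact definition may differ.

```python
# One plausible form of the consolidated score: normalize each metric by
# its interobserver value, orient so higher is better, average over
# patients (hypothetical normalization; the paper's formula may differ).
import numpy as np

def consolidated_score(auto_dsc, auto_msd, iov_dsc, iov_msd):
    """auto_*: per-patient algorithm metrics; iov_*: interobserver values."""
    dsc_norm = np.asarray(auto_dsc) / np.asarray(iov_dsc)   # >1 beats humans
    msd_norm = np.asarray(iov_msd) / np.asarray(auto_msd)   # inverted: lower MSD is better
    return float(np.mean((dsc_norm + msd_norm) / 2))

print(consolidated_score(auto_dsc=[0.85, 0.88], auto_msd=[1.1, 0.9],
                         iov_dsc=[0.82, 0.86], iov_msd=[1.2, 1.0]))
```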
Collapse
Affiliation(s)
- Wen Chen
- Department of Radiation Oncology, Xiangya Hospital, Central South University, Changsha, China.,Department of Radiation Oncology, University of California Davis Medical Center, 4501 X Street, Suite 0152, Sacramento, California, 95817, USA
| | - Yimin Li
- Department of Radiation Oncology, Xiamen Cancer Center, The First Affiliated Hospital of Xiamen University, Xiamen, Fujian, China
| | - Brandon A Dyer
- Department of Radiation Oncology, University of California Davis Medical Center, 4501 X Street, Suite 0152, Sacramento, California, 95817, USA.,Department of Radiation Oncology, University of Washington, Seattle, WA, USA
| | - Xue Feng
- Carina Medical LLC, 145 Graham Ave, A168, Lexington, KY, 40536, USA
| | - Shyam Rao
- Department of Radiation Oncology, University of California Davis Medical Center, 4501 X Street, Suite 0152, Sacramento, California, 95817, USA
| | - Stanley H Benedict
- Department of Radiation Oncology, University of California Davis Medical Center, 4501 X Street, Suite 0152, Sacramento, California, 95817, USA
| | - Quan Chen
- Carina Medical LLC, 145 Graham Ave, A168, Lexington, KY, 40536, USA. .,Department of Radiation Oncology, Markey Cancer Center, University of Kentucky, RM CC063, 800 Rose St, Lexington, KY, 40536, USA.
| | - Yi Rong
- Department of Radiation Oncology, University of California Davis Medical Center, 4501 X Street, Suite 0152, Sacramento, California, 95817, USA.
| |
Collapse
|
34
|
Theek B, Magnuska Z, Gremse F, Hahn H, Schulz V, Kiessling F. Automation of data analysis in molecular cancer imaging and its potential impact on future clinical practice. Methods 2020; 188:30-36. [PMID: 32615232 DOI: 10.1016/j.ymeth.2020.06.019] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2020] [Accepted: 06/23/2020] [Indexed: 12/11/2022] Open
Abstract
Digitalization, especially the use of machine learning and computational intelligence, is expected to dramatically shape medical procedures in the near future. In the field of cancer diagnostics, radiomics, the extraction of multiple quantitative image features and their clustered analysis, is gaining increasing attention as a way to obtain more detailed, reproducible, and meaningful information about the disease entity, its prognosis, and the ideal therapeutic option. In this context, automation of diagnostic procedures can improve the entire pipeline, which comprises patient registration, planning and performing an imaging examination at the scanner, image reconstruction, image analysis, and feeding the diagnostic information from various sources into decision support systems. With a focus on cancer diagnostics, this review article reports and discusses how computer assistance can be integrated into diagnostic procedures and which benefits and challenges arise from it. Besides a strong focus on classical imaging modalities such as x-ray, CT, MRI, ultrasound, PET, SPECT, and hybrid imaging devices thereof, it outlines how imaging data can be combined with data derived from patient anamnesis, clinical chemistry, pathology, and different omics. The article also discusses the IT infrastructures required to realize this integration in clinical routine. Although many challenges remain before automated and integrated data analysis is comprehensively implemented in molecular cancer imaging, the authors conclude that we are entering a new era of medical diagnostics and precision medicine.
Collapse
Affiliation(s)
- Benjamin Theek
- Institute for Experimental Molecular Imaging, University Clinic and Helmholtz Institute for Biomedical Engineering, RWTH Aachen University, Forckenbeckstrasse 55, 52074 Aachen, Germany; Fraunhofer Institute for Digital Medicine MEVIS, Am Fallturm 1, 28359 Bremen, Germany
| | - Zuzanna Magnuska
- Institute for Experimental Molecular Imaging, University Clinic and Helmholtz Institute for Biomedical Engineering, RWTH Aachen University, Forckenbeckstrasse 55, 52074 Aachen, Germany
| | - Felix Gremse
- Institute for Experimental Molecular Imaging, University Clinic and Helmholtz Institute for Biomedical Engineering, RWTH Aachen University, Forckenbeckstrasse 55, 52074 Aachen, Germany; Institute of Medical Informatics, RWTH Aachen University, Pauwelsstrasse 30, 52074 Aachen, Germany
| | - Horst Hahn
- Fraunhofer Institute for Digital Medicine MEVIS, Am Fallturm 1, 28359 Bremen, Germany
| | - Volkmar Schulz
- Institute for Experimental Molecular Imaging, University Clinic and Helmholtz Institute for Biomedical Engineering, RWTH Aachen University, Forckenbeckstrasse 55, 52074 Aachen, Germany; Fraunhofer Institute for Digital Medicine MEVIS, Am Fallturm 1, 28359 Bremen, Germany; Physics of Molecular Imaging Systems, Institute for Experimental Molecular Imaging, RWTH Aachen University, Forckenbeckstrasse 55, 52074 Aachen, Germany
| | - Fabian Kiessling
- Institute for Experimental Molecular Imaging, University Clinic and Helmholtz Institute for Biomedical Engineering, RWTH Aachen University, Forckenbeckstrasse 55, 52074 Aachen, Germany; Fraunhofer Institute for Digital Medicine MEVIS, Am Fallturm 1, 28359 Bremen, Germany.
| |
Collapse
|
35
|
Opposits G, Aranyi C, Glavák C, Cselik Z, Trón L, Sipos D, Hadjiev J, Berényi E, Repa I, Emri M, Kovács Á. OAR sparing 3D radiotherapy planning supported by fMRI brain mapping investigations. Med Dosim 2020; 45:e1-e8. [PMID: 32505630 DOI: 10.1016/j.meddos.2020.04.003] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2019] [Revised: 03/21/2020] [Accepted: 04/08/2020] [Indexed: 11/28/2022]
Abstract
The human brain has numerous functions, some of which can be visualized by functional imaging techniques (e.g., functional MRI [fMRI] or positron emission tomography). Localizing the appropriate activity clusters requires sophisticated instrumentation and a complex measuring protocol. As including the activation pattern in modern, individually tailored 3D-based radiotherapy has notable advantages, this method is applied frequently; unfortunately, no standardized method has yet been published for integrating fMRI data into the planning process, and detailed descriptions of individual applications are usually missing. Thirteen patients with brain tumors receiving fMRI-based RT planning were enrolled in this study. The delivered dose maps were exported from the treatment planning system and processed for statistical analysis. Two parameters were introduced to characterize the similarity and/or dissimilarity of the fMRI-corrected and uncorrected dose matrices calculated by 3D planning: the Hausdorff distance (HD) for geometric distance and the Dice similarity coefficient (DSC) for volumetric overlap. Statistical analysis of bootstrapped HD and DSC data was performed to determine confidence intervals for these parameters. The calculated confidence intervals for HD and DSC were (5.04, 7.09) and (0.79, 0.86), respectively, for the 40 Gy dose volumes, and (5.2, 7.85) and (0.74, 0.83), respectively, for the 60 Gy dose volumes. These data indicate that for HD < 5.04 and/or DSC > 0.86, the 40 Gy dose volumes obtained with and without the fMRI activation pattern do not differ significantly (5% significance level); the same conditions for the 60 Gy dose volumes were HD < 5.2 and/or DSC > 0.83. Conversely, with HD > 7.09 and/or DSC < 0.79 for 40 Gy, and HD > 7.85 and/or DSC < 0.74 for 60 Gy, the impact of fMRI utilization on RT planning is substantial. fMRI activation clusters can be used in the daily RT planning routine to spare activation clusters as critical areas in the brain and avoid their high-dose irradiation. The parameters HD (as distance) and DSC (as overlap) can be used to characterize the difference and similarity between radiotherapy planning target volumes and to indicate whether the fMRI-derived activation patterns and the resulting fMRI-corrected planning volumes are reliable.
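The confidence intervals reported here come from bootstrapping the per-patient HD and DSC values. The sketch below shows the standard percentile bootstrap on a DSC sample; the 13 patient values are hypothetical stand-ins, not the study's data.

```python
# Percentile-bootstrap confidence interval for a mean DSC over patients
# (the 13 values below are hypothetical, not the study's data).
import numpy as np

rng = np.random.default_rng(0)
dsc = np.array([0.81, 0.84, 0.79, 0.86, 0.83, 0.80, 0.85,
                0.82, 0.78, 0.84, 0.81, 0.83, 0.85])   # 13 patients

# Resample with replacement, recompute the mean each time
boot_means = [rng.choice(dsc, size=dsc.size, replace=True).mean()
              for _ in range(10_000)]
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"95% CI for mean DSC: ({lo:.3f}, {hi:.3f})")
```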
Collapse
Affiliation(s)
- Gábor Opposits
- University of Debrecen, Faculty of Medicine, Department of Medical Imaging, Division of Nuclear Medicine and Translational Imaging, Nagyerdei krt. 98., Debrecen 4032, Hungary.
| | - Csaba Aranyi
- University of Debrecen, Faculty of Medicine, Department of Medical Imaging, Division of Nuclear Medicine and Translational Imaging, Nagyerdei krt. 98., Debrecen 4032, Hungary
| | - Csaba Glavák
- Kaposi Somogy County Teaching Hospital Dr. József Baka Diagnostic, Radiation Oncology, Research and Teaching Center, Kaposvár, Hungary
| | - Zsolt Cselik
- Veszprém County Hospital, Oncoradiology, Veszprém, Hungary
| | - Lajos Trón
- University of Debrecen, Faculty of Medicine, Department of Medical Imaging, Division of Nuclear Medicine and Translational Imaging, Nagyerdei krt. 98., Debrecen 4032, Hungary
| | - Dávid Sipos
- Kaposi Somogy County Teaching Hospital Dr. József Baka Diagnostic, Radiation Oncology, Research and Teaching Center, Kaposvár, Hungary; University of Pécs Doctoral School of Health Sciences, Pécs, Hungary
| | - Janaki Hadjiev
- Kaposi Somogy County Teaching Hospital Dr. József Baka Diagnostic, Radiation Oncology, Research and Teaching Center, Kaposvár, Hungary
| | - Ervin Berényi
- University of Debrecen, Faculty of Medicine, Department of Medical Imaging, Division of Nuclear Medicine and Translational Imaging, Nagyerdei krt. 98., Debrecen 4032, Hungary
| | - Imre Repa
- Kaposi Somogy County Teaching Hospital Dr. József Baka Diagnostic, Radiation Oncology, Research and Teaching Center, Kaposvár, Hungary
| | - Miklós Emri
- University of Debrecen, Faculty of Medicine, Department of Medical Imaging, Division of Nuclear Medicine and Translational Imaging, Nagyerdei krt. 98., Debrecen 4032, Hungary
| | - Árpád Kovács
- University of Debrecen, Faculty of Medicine, Department of Oncoradiology, Debrecen, Hungary; Kaposi Somogy County Teaching Hospital Dr. József Baka Diagnostic, Radiation Oncology, Research and Teaching Center, Kaposvár, Hungary; University of Pécs Doctoral School of Health Sciences, Pécs, Hungary
| |
Collapse
|
36
|
Li S, Xiao J, He L, Peng X, Yuan X. The Tumor Target Segmentation of Nasopharyngeal Cancer in CT Images Based on Deep Learning Methods. Technol Cancer Res Treat 2020; 18:1533033819884561. [PMID: 31736433 PMCID: PMC6862777 DOI: 10.1177/1533033819884561] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/05/2023] Open
Abstract
Radiotherapy is the main treatment strategy for nasopharyngeal carcinoma. A major factor affecting radiotherapy outcome is the accuracy of target delineation. Target delineation is time-consuming, and the results can vary depending on the experience of the oncologist. Using deep learning methods to automate target delineation may increase its efficiency. We used a modified deep learning model called U-Net to automatically segment and delineate tumor targets in patients with nasopharyngeal carcinoma. Patients were randomly divided into a training set (302 patients), validation set (100 patients), and test set (100 patients). The U-Net model was trained using labeled computed tomography images from the training set. The U-Net was able to delineate nasopharyngeal carcinoma tumors with an overall Dice similarity coefficient of 65.86% for lymph nodes and 74.00% for the primary tumor, with respective Hausdorff distances of 32.10 and 12.85 mm. Delineation accuracy decreased with increasing cancer stage. Automatic delineation took approximately 2.6 hours, compared to 3 hours for an entirely manual procedure. Deep learning models can therefore improve the accuracy, consistency, and efficiency of primary tumor (T stage) delineation, but additional physician input may be required for lymph nodes.
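For reference, here is a self-contained sketch of the two reported metrics for binary masks on a common voxel grid; `spacing` converts voxel indices to millimetres. This illustrates the standard definitions, not the authors' evaluation code.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff_mm(a, b, spacing=(1.0, 1.0, 1.0)):
    """Symmetric Hausdorff distance (mm) between the voxels of two masks."""
    pa = np.argwhere(a) * np.asarray(spacing)
    pb = np.argwhere(b) * np.asarray(spacing)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])
```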
Collapse
Affiliation(s)
- Shihao Li
- National Key Laboratory of Fundamental Science on Synthetic Vision, College of Computer Science, Sichuan University, Chengdu, Sichuan, China
| | - Jianghong Xiao
- Department of Radiation Oncology, Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan, China
| | - Ling He
- Department of Biotherapy, Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan, China
| | - Xingchen Peng
- Department of Biotherapy, Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan, China
| | - Xuedong Yuan
- College of Computer Science, Sichuan University, Chengdu, Sichuan, China
| |
Collapse
|
37
|
Sultana S, Robinson A, Song DY, Lee J. CNN-based hierarchical coarse-to-fine segmentation of pelvic CT images for prostate cancer radiotherapy. Proc SPIE Int Soc Opt Eng 2020; 11315. [PMID: 32341620 DOI: 10.1117/12.2549979] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
Accurate segmentation of organs-at-risk is important in prostate cancer radiation therapy planning. However, poor soft-tissue contrast in CT makes the segmentation task very challenging. We propose a deep convolutional neural network approach to automatically segment the prostate, bladder, and rectum from pelvic CT. A hierarchical coarse-to-fine segmentation strategy is used, in which the first step generates a coarse segmentation from which an organ-specific region of interest (ROI) localization map is produced, and the second step produces a detailed and accurate segmentation of the organs. The ROI localization map is generated using a 3D U-net. The localization map helps adjust the ROI of each organ to be segmented and hence improves computational efficiency by eliminating irrelevant background information. For the fine segmentation step, we designed a fully convolutional network (FCN) by combining a generative adversarial network (GAN) with a U-net. Specifically, the generator is a 3D U-net trained to predict individual pelvic structures, and the discriminator is an FCN that fine-tunes the generator's predicted segmentation map by comparing it with the ground truth. The network was trained using 100 CT datasets and tested on 15 datasets to segment the prostate, bladder, and rectum. The average Dice similarities (mean ± SD) of the prostate, bladder, and rectum are 0.90 ± 0.05, 0.96 ± 0.06, and 0.91 ± 0.09, respectively, and the Hausdorff distances of these three structures are 5.21 ± 1.17, 4.37 ± 0.56, and 6.11 ± 1.47 mm, respectively. The proposed method produces accurate and reproducible segmentation of pelvic structures, which can be potentially valuable for prostate cancer radiotherapy treatment planning.
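The coarse-to-fine strategy above hinges on cropping an organ-specific ROI out of the full volume before fine segmentation. Here is a minimal sketch of that step, assuming a binary coarse mask per organ; the margin is a free parameter, not a value from the paper.

```python
import numpy as np

def crop_roi(volume, coarse_mask, margin=8):
    """Crop `volume` to the bounding box of `coarse_mask`, padded by `margin` voxels."""
    idx = np.argwhere(coarse_mask)
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + margin + 1, volume.shape)
    slices = tuple(slice(a, b) for a, b in zip(lo, hi))
    # Returning the slices lets the fine segmentation be pasted back in place.
    return volume[slices], slices
```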
Collapse
Affiliation(s)
- Sharmin Sultana
- Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University, Baltimore, MD 21287, USA
| | - Adam Robinson
- Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University, Baltimore, MD 21287, USA
| | - Daniel Y Song
- Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University, Baltimore, MD 21287, USA
| | - Junghoon Lee
- Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University, Baltimore, MD 21287, USA
| |
Collapse
|
38
|
Ahn SH, Yeo AU, Kim KH, Kim C, Goh Y, Cho S, Lee SB, Lim YK, Kim H, Shin D, Kim T, Kim TH, Youn SH, Oh ES, Jeong JH. Comparative clinical evaluation of atlas and deep-learning-based auto-segmentation of organ structures in liver cancer. Radiat Oncol 2019; 14:213. [PMID: 31775825 PMCID: PMC6880380 DOI: 10.1186/s13014-019-1392-z] [Citation(s) in RCA: 46] [Impact Index Per Article: 9.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2019] [Accepted: 10/09/2019] [Indexed: 12/14/2022] Open
Abstract
BACKGROUND Accurate and standardized delineation of organs at risk (OARs) is essential in radiation therapy for treatment planning and evaluation. Traditionally, physicians have contoured patient images manually, which is time-consuming and subject to inter-observer variability. This study aims to (a) investigate whether customized, deep-learning-based auto-segmentation could overcome the limitations of manual contouring and (b) compare its performance against a typical, atlas-based auto-segmentation method for organ structures in liver cancer. METHODS On-contrast computed tomography image sets of 70 liver cancer patients were used, and four OARs (heart, liver, kidney, and stomach) were manually delineated by three experienced physicians as reference structures. Atlas-based and deep learning auto-segmentations were performed with MIM Maestro 6.5 (MIM Software Inc., Cleveland, OH) and with a deep convolutional neural network (DCNN), respectively. The Hausdorff distance (HD), Dice similarity coefficient (DSC), volume overlap error (VOE), and relative volume difference (RVD) were used to quantitatively evaluate the auto-segmentation results against the reference set of the four OAR structures. RESULTS The atlas-based method yielded the following average DSC and standard deviation (SD) values for the heart, liver, right kidney, left kidney, and stomach: 0.92 ± 0.04 (DSC ± SD), 0.93 ± 0.02, 0.86 ± 0.07, 0.85 ± 0.11, and 0.60 ± 0.13, respectively. The deep-learning-based method yielded corresponding values of 0.94 ± 0.01, 0.93 ± 0.01, 0.88 ± 0.03, 0.86 ± 0.03, and 0.73 ± 0.09. The segmentation results show that the deep learning framework is superior to the atlas-based framework except in the case of the liver. Specifically, for the stomach, the DSC, VOE, and RVD showed maximum differences of 21.67%, 25.11%, and 28.80%, respectively. CONCLUSIONS In this study, we demonstrated that a deep learning framework could be used more effectively and efficiently than atlas-based auto-segmentation for most OARs in liver cancer. Extended use of the deep-learning-based framework is anticipated for auto-segmentation of other body sites.
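The overlap metrics used here are standard; as a reference, here is a minimal numpy sketch of DSC, VOE, and RVD for binary masks (HD omitted for brevity). The function name is ours, and nonempty masks are assumed.

```python
import numpy as np

def overlap_metrics(pred, ref):
    """DSC, volume overlap error, and relative volume difference for binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    dsc = 2.0 * inter / (pred.sum() + ref.sum())
    voe = 1.0 - inter / union                   # VOE = 1 - Jaccard index
    rvd = (pred.sum() - ref.sum()) / ref.sum()  # signed volume difference
    return dsc, voe, rvd
```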
Collapse
Affiliation(s)
- Sang Hee Ahn
- Department of Radiation Oncology, Proton Therapy Center, National Cancer Center, 323, Ilsan-ro, Ilsandong-gu, Goyang-si, Gyeonggi-do, 10408, South Korea
| | - Adam Unjin Yeo
- Peter MacCallum Cancer Centre, Melbourne, VIC, Australia
| | - Kwang Hyeon Kim
- Department of Radiation Oncology, Proton Therapy Center, National Cancer Center, 323, Ilsan-ro, Ilsandong-gu, Goyang-si, Gyeonggi-do, 10408, South Korea
| | - Chankyu Kim
- Department of Radiation Oncology, Proton Therapy Center, National Cancer Center, 323, Ilsan-ro, Ilsandong-gu, Goyang-si, Gyeonggi-do, 10408, South Korea
| | - Youngmoon Goh
- Department of Radiation Oncology, Asan Medical Center, Seoul, South Korea
| | - Shinhaeng Cho
- Department of Radiation Oncology, Chonnam National University Medical School, Gwangju, South Korea
| | - Se Byeong Lee
- Department of Radiation Oncology, Proton Therapy Center, National Cancer Center, 323, Ilsan-ro, Ilsandong-gu, Goyang-si, Gyeonggi-do, 10408, South Korea
| | - Young Kyung Lim
- Department of Radiation Oncology, Proton Therapy Center, National Cancer Center, 323, Ilsan-ro, Ilsandong-gu, Goyang-si, Gyeonggi-do, 10408, South Korea
| | - Haksoo Kim
- Department of Radiation Oncology, Proton Therapy Center, National Cancer Center, 323, Ilsan-ro, Ilsandong-gu, Goyang-si, Gyeonggi-do, 10408, South Korea
| | - Dongho Shin
- Department of Radiation Oncology, Proton Therapy Center, National Cancer Center, 323, Ilsan-ro, Ilsandong-gu, Goyang-si, Gyeonggi-do, 10408, South Korea
| | - Taeyoon Kim
- Department of Radiation Oncology, Proton Therapy Center, National Cancer Center, 323, Ilsan-ro, Ilsandong-gu, Goyang-si, Gyeonggi-do, 10408, South Korea
| | - Tae Hyun Kim
- Department of Radiation Oncology, Proton Therapy Center, National Cancer Center, 323, Ilsan-ro, Ilsandong-gu, Goyang-si, Gyeonggi-do, 10408, South Korea
| | - Sang Hee Youn
- Department of Radiation Oncology, Proton Therapy Center, National Cancer Center, 323, Ilsan-ro, Ilsandong-gu, Goyang-si, Gyeonggi-do, 10408, South Korea
| | - Eun Sang Oh
- Department of Radiation Oncology, Proton Therapy Center, National Cancer Center, 323, Ilsan-ro, Ilsandong-gu, Goyang-si, Gyeonggi-do, 10408, South Korea
| | - Jong Hwi Jeong
- Department of Radiation Oncology, Proton Therapy Center, National Cancer Center, 323, Ilsan-ro, Ilsandong-gu, Goyang-si, Gyeonggi-do, 10408, South Korea.
| |
Collapse
|
39
|
Dong X, Lei Y, Tian S, Wang T, Patel P, Curran WJ, Jani AB, Liu T, Yang X. Synthetic MRI-aided multi-organ segmentation on male pelvic CT using cycle consistent deep attention network. Radiother Oncol 2019; 141:192-199. [PMID: 31630868 DOI: 10.1016/j.radonc.2019.09.028] [Citation(s) in RCA: 72] [Impact Index Per Article: 14.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2019] [Revised: 09/24/2019] [Accepted: 09/29/2019] [Indexed: 11/17/2022]
Abstract
BACKGROUND AND PURPOSE Manual contouring is labor intensive and subject to variations in operator knowledge, experience, and technique. This work aims to develop an automated computed tomography (CT) multi-organ segmentation method for prostate cancer treatment planning. METHODS AND MATERIALS The proposed method exploits the superior soft-tissue information provided by synthetic MRI (sMRI) to aid multi-organ segmentation on pelvic CT images. A cycle generative adversarial network (CycleGAN) was used to estimate sMRIs from CT images. A deep attention U-Net (DAUnet) was trained on sMRI and the corresponding multi-organ contours for auto-segmentation. The deep attention strategy was introduced to identify the features most relevant for differentiating the organs. Deep supervision was incorporated into the DAUnet to enhance the features' discriminative ability. Segmented contours for a patient were obtained by feeding the CT image into the trained CycleGAN to generate the sMRI, which was then fed to the trained DAUnet to generate the organ contours. We trained and evaluated our model with 140 datasets from prostate patients. RESULTS The Dice similarity coefficients and mean surface distances between our segmented contours and the manual contours were 0.95 ± 0.03 and 0.52 ± 0.22 mm for the bladder, 0.87 ± 0.04 and 0.93 ± 0.51 mm for the prostate, and 0.89 ± 0.04 and 0.92 ± 1.03 mm for the rectum. CONCLUSION We proposed an sMRI-aided multi-organ automatic segmentation method for pelvic CT images. By integrating deep attention and deep supervision strategies, the proposed network provides accurate and consistent prostate, bladder, and rectum segmentation and has the potential to facilitate routine prostate-cancer radiotherapy treatment planning.
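As an illustration of the deep attention idea, here is a minimal sketch of an additive attention gate in the style popularized by Attention U-Net, assuming 3D feature maps whose spatial sizes already match (the gating signal would otherwise be upsampled first); the channel arguments are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class AttentionGate3D(nn.Module):
    """Re-weights skip-connection features by an attention map from a gating signal."""
    def __init__(self, ch_skip, ch_gate, ch_inter):
        super().__init__()
        self.w_x = nn.Conv3d(ch_skip, ch_inter, kernel_size=1)  # project skip features
        self.w_g = nn.Conv3d(ch_gate, ch_inter, kernel_size=1)  # project gating signal
        self.psi = nn.Conv3d(ch_inter, 1, kernel_size=1)        # one attention channel

    def forward(self, x, g):
        attn = torch.sigmoid(self.psi(torch.relu(self.w_x(x) + self.w_g(g))))
        return x * attn  # suppress irrelevant regions in the skip features
```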
Collapse
Affiliation(s)
- Xue Dong
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
| | - Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
| | - Sibo Tian
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
| | - Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
| | - Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
| | - Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
| | - Ashesh B Jani
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
| | - Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
| | - Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States.
| |
Collapse
|
40
|
Tang H, Chen X, Liu Y, Lu Z, You J, Yang M, Yao S, Zhao G, Xu Y, Chen T, Liu Y, Xie X. Clinically applicable deep learning framework for organs at risk delineation in CT images. Nat Mach Intell 2019. [DOI: 10.1038/s42256-019-0099-z] [Citation(s) in RCA: 63] [Impact Index Per Article: 12.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/09/2023]
|
41
|
Ayyalusamy A, Vellaiyan S, Subramanian S, Ilamurugu A, Satpathy S, Nauman M, Katta G, Madineni A. Auto-segmentation of head and neck organs at risk in radiotherapy and its dependence on anatomic similarity. Radiat Oncol J 2019; 37:134-142. [PMID: 31266293 PMCID: PMC6610007 DOI: 10.3857/roj.2019.00038] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2019] [Accepted: 04/15/2019] [Indexed: 01/27/2023] Open
Abstract
Purpose The aim is to study the dependence of deformable-registration-based auto-segmentation of head and neck organs-at-risk (OARs) on anatomy matching in a single-atlas-based system and to generate an acceptable set of contours. Methods A sample of ten patients in neutral neck position and three atlas sets, each consisting of ten patients in different head and neck positions, were used to generate three scenarios representing poor, average, and perfect anatomy matching, respectively, and auto-segmentation was carried out for each scenario. The brainstem, larynx, mandible, cervical oesophagus, oral cavity, pharyngeal muscles, parotids, spinal cord, and trachea were the structures selected for the study. Automatic and oncologist reference contours were compared using the Dice similarity index (DSI), Hausdorff distance (HD), and variation in the centre of mass (COM). Results The mean DSI score for the brainstem was good irrespective of the anatomy-matching scenario. The scores for the mandible, oral cavity, larynx, parotids, spinal cord, and trachea were unacceptable with poor matching but improved with enhanced bony matching, whereas the cervical oesophagus and pharyngeal muscles had less than acceptable scores even for the perfect matching scenario. HD values and variation in COM decreased with better matching for all structures. Conclusion Improved anatomy matching resulted in better segmentation. At least a similar setup can help generate an acceptable set of automatic contours in systems employing a single-atlas method. Automatic contours from the average matching scenario were acceptable for most structures. Importance should be given to head and neck position during atlas generation for a single-atlas-based system.
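A short sketch of the centre-of-mass comparison used above, assuming binary masks on a common grid and voxel spacing in mm; the function name is ours.

```python
import numpy as np
from scipy.ndimage import center_of_mass

def com_shift_mm(auto_mask, ref_mask, spacing=(1.0, 1.0, 1.0)):
    """Euclidean displacement (mm) between the centres of mass of two masks."""
    com_auto = np.array(center_of_mass(auto_mask)) * np.asarray(spacing)
    com_ref = np.array(center_of_mass(ref_mask)) * np.asarray(spacing)
    return np.linalg.norm(com_auto - com_ref)
```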
Collapse
Affiliation(s)
- Anantharaman Ayyalusamy
- Department of Radiation Oncology, Yashoda Hospitals, Hyderabad, India; All India Institute of Medical Sciences, New Delhi, India
| | - Subramani Vellaiyan
- All India Institute of Medical Sciences, New Delhi, India; Department of Radiation Oncology, Research and Development Centre, Bharathiar University, Coimbatore, India
| | - Shanmuga Subramanian
- Department of Radiation Oncology, Yashoda Hospitals, Hyderabad, India; All India Institute of Medical Sciences, New Delhi, India
| | | | - Shyama Satpathy
- Department of Radiation Oncology, Yashoda Hospitals, Hyderabad, India
| | - Mohammed Nauman
- Department of Radiation Oncology, Yashoda Hospitals, Hyderabad, India
| | - Gowtham Katta
- Department of Radiation Oncology, Yashoda Hospitals, Hyderabad, India
| | - Aneesha Madineni
- Department of Radiation Oncology, Yashoda Hospitals, Hyderabad, India
| |
Collapse
|
42
|
Kosmin M, Ledsam J, Romera-Paredes B, Mendes R, Moinuddin S, de Souza D, Gunn L, Kelly C, Hughes C, Karthikesalingam A, Nutting C, Sharma R. Rapid advances in auto-segmentation of organs at risk and target volumes in head and neck cancer. Radiother Oncol 2019; 135:130-140. [DOI: 10.1016/j.radonc.2019.03.004] [Citation(s) in RCA: 52] [Impact Index Per Article: 10.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2018] [Revised: 02/10/2019] [Accepted: 03/04/2019] [Indexed: 11/25/2022]
|
43
|
Abstract
Manual image segmentation is a time-consuming task routinely performed in radiotherapy to identify each patient's targets and anatomical structures. The efficacy and safety of the radiotherapy plan require accurate segmentations, as these regions of interest are generally used to optimize and assess the quality of the plan. However, reports have shown that this process can be subject to significant inter- and intraobserver variability. Furthermore, the quality of the radiotherapy treatment, and of subsequent analyses (ie, radiomic, dosimetric), can depend on the accuracy of these manual segmentations. Automatic segmentation (or auto-segmentation) of targets and normal tissues is therefore preferable, as it would address these challenges. Previously, auto-segmentation techniques have been clustered into three generations of algorithms, with multiatlas-based and hybrid techniques (third generation) considered the state of the art. More recently, however, the field of medical image segmentation has seen accelerated growth driven by advances in computer vision, particularly through the application of deep learning algorithms, suggesting we have entered the fourth generation of auto-segmentation algorithm development. In this paper, the authors review traditional (non-deep-learning) algorithms particularly relevant for applications in radiotherapy. Concepts from deep learning are introduced, focusing on convolutional neural networks and fully convolutional networks, which are generally used for segmentation tasks. Furthermore, the authors provide a summary of deep learning auto-segmentation radiotherapy applications reported in the literature. Lastly, considerations for clinical deployment (commissioning and QA) of auto-segmentation software are provided.
Collapse
|
44
|
Gardner SJ, Mao W, Liu C, Aref I, Elshaikh M, Lee JK, Pradhan D, Movsas B, Chetty IJ, Siddiqui F. Improvements in CBCT Image Quality Using a Novel Iterative Reconstruction Algorithm: A Clinical Evaluation. Adv Radiat Oncol 2019; 4:390-400. [PMID: 31011685 PMCID: PMC6460237 DOI: 10.1016/j.adro.2018.12.003] [Citation(s) in RCA: 42] [Impact Index Per Article: 8.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2018] [Accepted: 12/31/2018] [Indexed: 11/03/2022] Open
Abstract
PURPOSE This study aimed to evaluate the clinical utility of a novel iterative cone beam computed tomography (CBCT) reconstruction algorithm for prostate and head and neck (HN) cancer. METHODS AND MATERIALS A total of 10 patients with HN cancer and 10 patients with prostate cancer were analyzed. For each patient, raw CBCT acquisition data were used to reconstruct images with a currently available algorithm (FDK_CBCT) and the novel iterative algorithm (Iterative_CBCT). Quantitative contouring variation analysis was performed using structures delineated by several radiation oncologists. For prostate, observers contoured the prostate, proximal 2 cm of the seminal vesicles, bladder, and rectum. For HN, observers contoured the brain stem, spinal canal, right and left parotid glands, and right and left submandibular glands. Observer contours were combined to form a reference consensus contour using the simultaneous truth and performance level estimation (STAPLE) method. All observer contours were then compared with the reference contour to calculate the Dice coefficient, Hausdorff distance, and mean contour distance (prostate contour only). Qualitative image quality analysis was performed using a 5-point scale ranging from 1 (much superior image quality for Iterative_CBCT) to 5 (much inferior image quality for Iterative_CBCT). RESULTS The Iterative_CBCT data sets resulted in a prostate contour Dice coefficient improvement of approximately 2.4% (P = .029). The average prostate contour Dice coefficient for the Iterative_CBCT data sets was improved for all patients, with improvements up to approximately 10% for 1 patient. The mean contour distance results indicate an approximately 15% reduction in mean contouring error for all prostate regions. For the parotid contours, Iterative_CBCT data sets resulted in a Hausdorff distance improvement of approximately 2 mm (P < .01) and an approximately 2% improvement in Dice coefficient (P = .03). The Iterative_CBCT data sets were scored as equivalent or better in image quality for 97.3% (prostate) and 90.0% (HN) of the patient data sets. CONCLUSIONS Observers noted an improvement in image uniformity, noise level, and overall image quality for the Iterative_CBCT data sets. In addition, expert observers displayed an improved ability to consistently delineate soft tissue structures, such as the prostate and parotid glands. Thus, the novel iterative reconstruction algorithm analyzed in this study is capable of improving visualization for prostate and HN cancer image-guided radiation therapy.
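The consensus contour above is built with STAPLE, which weights each observer by an estimated performance level. As a simpler, hedged stand-in, the sketch below forms a plain majority-vote consensus from binary observer masks; it illustrates the fusion step but is not the STAPLE algorithm itself.

```python
import numpy as np

def majority_vote(masks):
    """Consensus mask: voxels labelled foreground by more than half of the observers."""
    stack = np.stack([m.astype(bool) for m in masks])
    return stack.mean(axis=0) > 0.5
```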
Collapse
Affiliation(s)
- Stephen J. Gardner
- Department of Radiation Oncology, Josephine Ford Cancer Institute, Henry Ford Health System, Detroit, Michigan
| | | | | | | | | | | | | | | | | | | |
Collapse
|
45
|
Giraud P, Giraud P, Gasnier A, El Ayachy R, Kreps S, Foy JP, Durdux C, Huguet F, Burgun A, Bibault JE. Radiomics and Machine Learning for Radiotherapy in Head and Neck Cancers. Front Oncol 2019; 9:174. [PMID: 30972291 PMCID: PMC6445892 DOI: 10.3389/fonc.2019.00174] [Citation(s) in RCA: 60] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2018] [Accepted: 02/28/2019] [Indexed: 12/13/2022] Open
Abstract
Introduction: An increasing number of parameters can be considered when making decisions in oncology. Tumor characteristics can also be extracted from imaging through the use of radiomics and add to this wealth of clinical data. Machine learning can encompass these parameters and thus enhance clinical decision-making as well as the radiotherapy workflow. Methods: We described machine learning applications at each step of radiotherapy treatment in head and neck cancers. We then performed a systematic review of radiomics and machine learning outcome prediction models in head and neck cancers. Results: Machine learning has several promising applications in treatment planning, with improvements in automatic organ-at-risk delineation and adaptive radiotherapy workflow automation. It may also provide new approaches for Normal Tissue Complication Probability models. Radiomics may provide additional tumor data for improved machine-learning-powered predictive models, not only for survival but also for the risk of distant metastasis, in-field recurrence, HPV status, and extranodal spread. However, most studies provide preliminary data requiring further validation. Conclusion: Promising perspectives arise from machine learning applications and radiomics-based models, yet further data are necessary for their implementation in daily care.
Collapse
Affiliation(s)
- Paul Giraud
- Radiation Oncology Department, Georges Pompidou European Hospital, Assistance Publique-Hôpitaux de Paris, Paris Descartes University, Paris Sorbonne Cité, Paris, France; Cancer Research and Personalized Medicine-Integrated Cancer Research Center (SIRIC), Georges Pompidou European Hospital, Assistance Publique-Hôpitaux de Paris, Paris Descartes University, Paris Sorbonne Cité, Paris, France
| | - Philippe Giraud
- Radiation Oncology Department, Georges Pompidou European Hospital, Assistance Publique-Hôpitaux de Paris, Paris Descartes University, Paris Sorbonne Cité, Paris, France; Cancer Research and Personalized Medicine-Integrated Cancer Research Center (SIRIC), Georges Pompidou European Hospital, Assistance Publique-Hôpitaux de Paris, Paris Descartes University, Paris Sorbonne Cité, Paris, France
| | - Anne Gasnier
- Radiation Oncology Department, Georges Pompidou European Hospital, Assistance Publique-Hôpitaux de Paris, Paris Descartes University, Paris Sorbonne Cité, Paris, France; Cancer Research and Personalized Medicine-Integrated Cancer Research Center (SIRIC), Georges Pompidou European Hospital, Assistance Publique-Hôpitaux de Paris, Paris Descartes University, Paris Sorbonne Cité, Paris, France
| | - Radouane El Ayachy
- Radiation Oncology Department, Georges Pompidou European Hospital, Assistance Publique-Hôpitaux de Paris, Paris Descartes University, Paris Sorbonne Cité, Paris, France; Cancer Research and Personalized Medicine-Integrated Cancer Research Center (SIRIC), Georges Pompidou European Hospital, Assistance Publique-Hôpitaux de Paris, Paris Descartes University, Paris Sorbonne Cité, Paris, France
| | - Sarah Kreps
- Radiation Oncology Department, Georges Pompidou European Hospital, Assistance Publique-Hôpitaux de Paris, Paris Descartes University, Paris Sorbonne Cité, Paris, France; Cancer Research and Personalized Medicine-Integrated Cancer Research Center (SIRIC), Georges Pompidou European Hospital, Assistance Publique-Hôpitaux de Paris, Paris Descartes University, Paris Sorbonne Cité, Paris, France
| | - Jean-Philippe Foy
- Department of Oral and Maxillo-Facial Surgery, Sorbonne University, Pitié-Salpêtrière Hospital, Paris, France; Univ Lyon, Université Claude Bernard Lyon 1, INSERM 1052, CNRS 5286, Centre Léon Bérard, Centre de Recherche en Cancérologie de Lyon, Lyon, France
| | - Catherine Durdux
- Radiation Oncology Department, Georges Pompidou European Hospital, Assistance Publique-Hôpitaux de Paris, Paris Descartes University, Paris Sorbonne Cité, Paris, France; Cancer Research and Personalized Medicine-Integrated Cancer Research Center (SIRIC), Georges Pompidou European Hospital, Assistance Publique-Hôpitaux de Paris, Paris Descartes University, Paris Sorbonne Cité, Paris, France
| | - Florence Huguet
- Department of Radiation Oncology, Tenon University Hospital, Hôpitaux Universitaires Est Parisien, Sorbonne University Medical Faculty, Paris, France
| | - Anita Burgun
- Cancer Research and Personalized Medicine-Integrated Cancer Research Center (SIRIC), Georges Pompidou European Hospital, Assistance Publique-Hôpitaux de Paris, Paris Descartes University, Paris Sorbonne Cité, Paris, France; INSERM UMR 1138 Team 22: Information Sciences to support Personalized Medicine, Paris Descartes University, Sorbonne Paris Cité, Paris, France
| | - Jean-Emmanuel Bibault
- Radiation Oncology Department, Georges Pompidou European Hospital, Assistance Publique-Hôpitaux de Paris, Paris Descartes University, Paris Sorbonne Cité, Paris, France; Cancer Research and Personalized Medicine-Integrated Cancer Research Center (SIRIC), Georges Pompidou European Hospital, Assistance Publique-Hôpitaux de Paris, Paris Descartes University, Paris Sorbonne Cité, Paris, France; INSERM UMR 1138 Team 22: Information Sciences to support Personalized Medicine, Paris Descartes University, Sorbonne Paris Cité, Paris, France
| |
Collapse
|
46
|
Dong X, Lei Y, Wang T, Thomas M, Tang L, Curran WJ, Liu T, Yang X. Automatic multiorgan segmentation in thorax CT images using U-net-GAN. Med Phys 2019; 46:2157-2168. [PMID: 30810231 DOI: 10.1002/mp.13458] [Citation(s) in RCA: 155] [Impact Index Per Article: 31.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2018] [Revised: 02/18/2019] [Accepted: 02/18/2019] [Indexed: 12/19/2022] Open
Abstract
PURPOSE Accurate and timely organs-at-risk (OARs) segmentation is key to efficient and high-quality radiation therapy planning. The purpose of this work is to develop a deep learning-based method to automatically segment multiple thoracic OARs on chest computed tomography (CT) for radiotherapy treatment planning. METHODS We propose an adversarial training strategy to train deep neural networks for the segmentation of multiple organs on thoracic CT images. The proposed design of adversarial networks, called U-Net-generative adversarial network (U-Net-GAN), jointly trains a set of U-Nets as generators and fully convolutional networks (FCNs) as discriminators. Specifically, the generator, composed of a U-Net, produces a segmentation map of multiple organs through an end-to-end mapping learned from CT images to the multi-organ segmentation of OARs. The discriminator, structured as an FCN, discriminates between the ground truth and the segmented OARs produced by the generator. The generator and discriminator compete against each other in an adversarial learning process to produce the optimal segmentation map of multiple organs. Our segmentation results were compared with manually segmented OARs (ground truth) for quantitative evaluation of geometric difference, as well as dosimetric performance by investigating the dose-volume histograms of 20 stereotactic body radiation therapy (SBRT) lung plans. RESULTS This segmentation technique was applied to delineate the left and right lungs, spinal cord, esophagus, and heart using 35 patients' chest CTs. The average Dice similarity coefficients for the above five OARs are 0.97, 0.97, 0.90, 0.75, and 0.87, respectively. The mean surface distance of the five OARs obtained with the proposed method ranges between 0.4 and 1.5 mm on average among all 35 patients. The mean dose differences on the 20 SBRT lung plans are -0.001 to 0.155 Gy for the five OARs. CONCLUSION We have investigated a novel deep learning-based approach with a GAN strategy to segment multiple OARs in the thorax using chest CT images and demonstrated its feasibility and reliability. This is a potentially valuable method for improving the efficiency of chest radiotherapy treatment planning.
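Below is a condensed sketch of one adversarial training step of the kind described above, with `generator` (a U-Net producing segmentation logits) and `discriminator` (an FCN assumed to score concatenated image/mask pairs) as placeholder networks; the loss weighting `lam` is our assumption, not the paper's setting.

```python
import torch
import torch.nn.functional as F

def train_step(generator, discriminator, opt_g, opt_d, image, gt_mask, lam=0.1):
    pred = torch.sigmoid(generator(image))

    # Discriminator: real (image, ground truth) vs. fake (image, prediction).
    d_real = discriminator(torch.cat([image, gt_mask], dim=1))
    d_fake = discriminator(torch.cat([image, pred.detach()], dim=1))
    loss_d = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: segmentation loss plus an adversarial term rewarding
    # predictions the discriminator accepts as real.
    d_fake = discriminator(torch.cat([image, pred], dim=1))
    loss_g = (F.binary_cross_entropy(pred, gt_mask)
              + lam * F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake)))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_g.item(), loss_d.item()
```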
Collapse
Affiliation(s)
- Xue Dong
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Matthew Thomas
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Leonardo Tang
- Department of Undeclared Engineering, University of California, Berkeley, CA, 94720, USA
| | - Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| |
Collapse
|
47
|
Chen H, Lu W, Chen M, Zhou L, Timmerman R, Tu D, Nedzi L, Wardak Z, Jiang S, Zhen X, Gu X. A recursive ensemble organ segmentation (REOS) framework: application in brain radiotherapy. Phys Med Biol 2019; 64:025015. [DOI: 10.1088/1361-6560/aaf83c] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
|
48
|
Zhu W, Huang Y, Zeng L, Chen X, Liu Y, Qian Z, Du N, Fan W, Xie X. AnatomyNet: Deep learning for fast and fully automated whole-volume segmentation of head and neck anatomy. Med Phys 2018; 46:576-589. [PMID: 30480818 DOI: 10.1002/mp.13300] [Citation(s) in RCA: 222] [Impact Index Per Article: 37.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2018] [Revised: 11/06/2018] [Accepted: 11/07/2018] [Indexed: 12/20/2022] Open
Abstract
PURPOSE Radiation therapy (RT) is a common treatment option for head and neck (HaN) cancer. An important step involved in RT planning is the delineation of organs-at-risks (OARs) based on HaN computed tomography (CT). However, manually delineating OARs is time-consuming as each slice of CT images needs to be individually examined and a typical CT consists of hundreds of slices. Automating OARs segmentation has the benefit of both reducing the time and improving the quality of RT planning. Existing anatomy autosegmentation algorithms use primarily atlas-based methods, which require sophisticated atlas creation and cannot adequately account for anatomy variations among patients. In this work, we propose an end-to-end, atlas-free three-dimensional (3D) convolutional deep learning framework for fast and fully automated whole-volume HaN anatomy segmentation. METHODS Our deep learning model, called AnatomyNet, segments OARs from head and neck CT images in an end-to-end fashion, receiving whole-volume HaN CT images as input and generating masks of all OARs of interest in one shot. AnatomyNet is built upon the popular 3D U-net architecture, but extends it in three important ways: (a) a new encoding scheme to allow autosegmentation on whole-volume CT images instead of local patches or subsets of slices, (b) incorporating 3D squeeze-and-excitation residual blocks in encoding layers for better feature representation, and (c) a new loss function combining Dice scores and focal loss to facilitate the training of the neural model. These features are designed to address two main challenges in deep learning-based HaN segmentation: (a) segmenting small anatomies (i.e., optic chiasm and optic nerves) occupying only a few slices, and (b) training with inconsistent data annotations with missing ground truth for some anatomical structures. RESULTS We collected 261 HaN CT images to train AnatomyNet and used MICCAI Head and Neck Auto Segmentation Challenge 2015 as a benchmark dataset to evaluate the performance of AnatomyNet. The objective is to segment nine anatomies: brain stem, chiasm, mandible, optic nerve left, optic nerve right, parotid gland left, parotid gland right, submandibular gland left, and submandibular gland right. Compared to previous state-of-the-art results from the MICCAI 2015 competition, AnatomyNet increases Dice similarity coefficient by 3.3% on average. AnatomyNet takes about 0.12 s to fully segment a head and neck CT image of dimension 178 × 302 × 225, significantly faster than previous methods. In addition, the model is able to process whole-volume CT images and delineate all OARs in one pass, requiring little pre- or postprocessing. CONCLUSION Deep learning models offer a feasible solution to the problem of delineating OARs from CT images. We demonstrate that our proposed model can improve segmentation accuracy and simplify the autosegmentation pipeline. With this method, it is possible to delineate OARs of a head and neck CT within a fraction of a second.
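Here is a minimal sketch of a combined Dice-plus-focal segmentation loss of the kind AnatomyNet is described as using, assuming single-channel sigmoid logits; the equal weighting and gamma = 2 are common defaults, not the paper's exact settings.

```python
import torch

def dice_focal_loss(logits, target, gamma=2.0, eps=1e-6):
    prob = torch.sigmoid(logits)
    # Soft Dice term over the whole volume.
    inter = (prob * target).sum()
    dice = 1.0 - (2.0 * inter + eps) / (prob.sum() + target.sum() + eps)
    # Focal term down-weights easy voxels, which helps small structures
    # such as the optic chiasm and optic nerves.
    pt = torch.where(target > 0.5, prob, 1.0 - prob)
    focal = (-(1.0 - pt) ** gamma * torch.log(pt.clamp_min(eps))).mean()
    return dice + focal
```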
Collapse
Affiliation(s)
- Wentao Zhu
- Department of Computer Science, University of California, Irvine, CA, USA
| | | | | | - Xuming Chen
- Department of Radiation Oncology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Yong Liu
- Department of Radiation Oncology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Zhen Qian
- Tencent Medical AI Lab, Palo Alto, CA, USA
| | - Nan Du
- Tencent Medical AI Lab, Palo Alto, CA, USA
| | - Wei Fan
- Tencent Medical AI Lab, Palo Alto, CA, USA
| | - Xiaohui Xie
- Department of Computer Science, University of California, Irvine, CA, USA
| |
Collapse
|
49
|
Men K, Geng H, Cheng C, Zhong H, Huang M, Fan Y, Plastaras JP, Lin A, Xiao Y. Technical Note: More accurate and efficient segmentation of organs-at-risk in radiotherapy with convolutional neural networks cascades. Med Phys 2018; 46:286-292. [PMID: 30450825 DOI: 10.1002/mp.13296] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2018] [Revised: 11/12/2018] [Accepted: 11/13/2018] [Indexed: 01/17/2023] Open
Abstract
PURPOSE Manual delineation of organs-at-risk (OARs) in radiotherapy is both time-consuming and subjective. Automated and more accurate segmentation is of the utmost importance in clinical application. The purpose of this study is to further improve segmentation accuracy and efficiency with a novel network named convolutional neural network (CNN) Cascades. METHODS CNN Cascades was a two-step, coarse-to-fine approach consisting of a simple region detector (SRD) and a fine segmentation unit (FSU). The SRD first used a relatively shallow network to define the region of interest (ROI) where the organ was located; the FSU then took the smaller ROI as input and adopted a deep network for fine segmentation. The imaging data (14,651 slices) of 100 head-and-neck patients with segmentations were used for this study. The performance was compared with the state-of-the-art single CNN in terms of accuracy, using the Dice similarity coefficient (DSC) and Hausdorff distance (HD) as metrics. RESULTS The proposed CNN Cascades outperformed the single CNN in accuracy for each OAR. Similarly, for the average over all OARs, it was also the best, with a mean DSC of 0.90 (SRD: 0.86, FSU: 0.87, and U-Net: 0.85) and a mean HD of 3.0 mm (SRD: 4.0, FSU: 3.6, and U-Net: 4.4). Meanwhile, CNN Cascades reduced the mean segmentation time per patient by 48% and 5% compared with the FSU and U-Net, respectively. CONCLUSIONS The proposed two-step network demonstrated superior performance by reducing the input region. It can potentially serve as an effective segmentation method that provides accurate and consistent delineation with reduced clinician intervention, both for clinical applications and for the quality assurance of multicenter clinical trials.
Collapse
Affiliation(s)
- Kuo Men
- University of Pennsylvania, Philadelphia, PA, 19104, USA
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
| | - Huaizhi Geng
- University of Pennsylvania, Philadelphia, PA, 19104, USA
| | - Chingyun Cheng
- University of Pennsylvania, Philadelphia, PA, 19104, USA
| | - Haoyu Zhong
- University of Pennsylvania, Philadelphia, PA, 19104, USA
| | - Mi Huang
- University of Pennsylvania, Philadelphia, PA, 19104, USA
| | - Yong Fan
- University of Pennsylvania, Philadelphia, PA, 19104, USA
| | | | - Alexander Lin
- University of Pennsylvania, Philadelphia, PA, 19104, USA
| | - Ying Xiao
- University of Pennsylvania, Philadelphia, PA, 19104, USA
| |
Collapse
|
50
|
Liang S, Tang F, Huang X, Yang K, Zhong T, Hu R, Liu S, Yuan X, Zhang Y. Deep-learning-based detection and segmentation of organs at risk in nasopharyngeal carcinoma computed tomographic images for radiotherapy planning. Eur Radiol 2018; 29:1961-1967. [PMID: 30302589 DOI: 10.1007/s00330-018-5748-9] [Citation(s) in RCA: 76] [Impact Index Per Article: 12.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2018] [Revised: 08/16/2018] [Accepted: 09/10/2018] [Indexed: 12/13/2022]
Abstract
OBJECTIVE Accurate detection and segmentation of organs at risk (OARs) in CT images is a key step in efficient radiation therapy planning for nasopharyngeal carcinoma (NPC) treatment. We developed a fully automated deep-learning-based method (termed the organs-at-risk detection and segmentation network, ODS net) for CT images and investigated its performance in automated detection and segmentation of OARs. METHODS The ODS net consists of two convolutional neural networks (CNNs). The first CNN proposes organ bounding boxes along with their scores, and a second CNN then uses the proposed bounding boxes to predict segmentation masks for each organ. A total of 185 subjects were included in this study for statistical comparison. Sensitivity and specificity were calculated to determine detection performance, and the Dice coefficient was used to quantitatively measure the overlap between automated and manual segmentation. Paired-samples t tests and analysis of variance were employed for statistical analysis. RESULTS ODS net provides accurate detection results, with a sensitivity of 0.997 to 1 for most organs and a specificity of 0.983 to 0.999. Furthermore, segmentation results from ODS net correlated strongly with manual segmentation, with a Dice coefficient of more than 0.85 for most organs. A significantly higher Dice coefficient for all organs together (p = 0.0003 < 0.01) was obtained with ODS net (0.861 ± 0.07) than with a fully convolutional neural network (FCN) (0.8 ± 0.07). The Dice coefficients of each OAR did not differ significantly between patients of different T stages. CONCLUSION The ODS net yielded accurate automated detection and segmentation of OARs in CT images and thereby may improve and facilitate radiotherapy planning for NPC. KEY POINTS • A fully automated deep-learning method (ODS net) is developed to detect and segment OARs in clinical CT images. • This deep-learning-based framework produces reliable detection and segmentation results and thus can be useful in delineating OARs in NPC radiotherapy planning. • This deep-learning-based framework requires approximately 30 s to delineate a single image, which is suitable for clinical workflows.
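A minimal sketch of the detection evaluation, assuming boolean arrays recording, per case and organ, whether the organ was present and whether the network detected it; this illustrates the standard definitions rather than the authors' analysis code.

```python
import numpy as np

def sensitivity_specificity(detected, present):
    detected = np.asarray(detected, dtype=bool)
    present = np.asarray(present, dtype=bool)
    tp = np.sum(detected & present)    # organs present and found
    tn = np.sum(~detected & ~present)  # organs absent and not reported
    sensitivity = tp / present.sum()
    specificity = tn / np.sum(~present)
    return sensitivity, specificity
```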
Collapse
Affiliation(s)
- Shujun Liang
- Guangdong Provincial Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, No. 1838 Guangzhou Northern Avenue, Baiyun District, Guangzhou, 510515, Guangdong, China
| | - Fan Tang
- Guangdong Provincial Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, No. 1838 Guangzhou Northern Avenue, Baiyun District, Guangzhou, 510515, Guangdong, China; Department of Radiation Oncology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, Guangdong, China
| | - Xia Huang
- Guangdong Provincial Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, No. 1838 Guangzhou Northern Avenue, Baiyun District, Guangzhou, 510515, Guangdong, China
| | - Kaifan Yang
- Department of Medical Imaging Center, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, Guangdong, China
| | - Tao Zhong
- Guangdong Provincial Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, No. 1838 Guangzhou Northern Avenue, Baiyun District, Guangzhou, 510515, Guangdong, China
| | - Runyue Hu
- Guangdong Provincial Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, No. 1838 Guangzhou Northern Avenue, Baiyun District, Guangzhou, 510515, Guangdong, China
| | - Shangqing Liu
- Guangdong Provincial Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, No. 1838 Guangzhou Northern Avenue, Baiyun District, Guangzhou, 510515, Guangdong, China
| | - Xinrui Yuan
- Guangdong Provincial Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, No. 1838 Guangzhou Northern Avenue, Baiyun District, Guangzhou, 510515, Guangdong, China
| | - Yu Zhang
- Guangdong Provincial Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, No. 1838 Guangzhou Northern Avenue, Baiyun District, Guangzhou, 510515, Guangdong, China.
| |
Collapse
|