101
Tian X, Li C, Liu H, Li P, He J, Gao W. Applications of artificial intelligence in radiophysics. J Cancer Res Ther 2021;17:1603-1607. [DOI: 10.4103/jcrt.jcrt_1438_21]
102
Lei Y, Tian Z, Wang T, Higgins K, Bradley JD, Curran WJ, Liu T, Yang X. Deep learning-based real-time volumetric imaging for lung stereotactic body radiation therapy: a proof of concept study. Phys Med Biol 2020;65:235003. [PMID: 33080578] [DOI: 10.1088/1361-6560/abc303]
Abstract
Because respiratory motion varies both between and within treatment fractions, real-time volumetric imaging during treatment delivery of lung stereotactic body radiation therapy (SBRT) is highly desirable for accurate and active motion management. In this proof-of-concept study, we propose a novel generative adversarial network integrated with perceptual supervision to derive instantaneous volumetric images from a single 2D projection. Our proposed network, named TransNet, consists of three modules: encoding, transformation, and decoding. Rather than supervising the network only with an image distance loss between the generated 3D images and the ground-truth 3D CT images, a perceptual loss in feature space is added to the loss function to force TransNet to yield accurate lung boundaries. Adversarial supervision is also used to improve the realism of the generated 3D images. We conducted a simulation study on 20 patients who had received lung SBRT at our institution and undergone 4D-CT simulation, and evaluated the efficacy and robustness of our method for four projection angles: 0°, 30°, 60° and 90°. For each 3D CT image set of a breathing phase, we simulated its 2D projections at these angles. For each projection angle, a patient's 3D CT images of nine phases and the corresponding 2D projection data were used to train the network for that specific patient, with the remaining phase used for testing. The mean absolute error of the 3D images obtained by our method is 99.3 ± 14.1 HU. The peak signal-to-noise ratio and structural similarity index within the tumor region of interest are 15.4 ± 2.5 dB and 0.839 ± 0.090, respectively. The center-of-mass distance between the manual tumor contours on our generated 3D images and those on the corresponding 3D phase CT images is within 2.6 mm, with a mean value of 1.26 mm over all cases.
Our method has also been validated in a simulated challenging scenario with increased respiratory motion amplitude and tumor shrinkage, and achieved acceptable results. Our experimental results demonstrate the feasibility and efficacy of our 2D-to-3D method for lung cancer patients, which provides a potential solution for in-treatment real-time on-board volumetric imaging for tumor tracking and dose delivery verification to ensure the effectiveness of lung SBRT treatment.
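The image-quality figures quoted above (mean absolute error in HU, peak signal-to-noise ratio in dB) follow the standard definitions. As an illustrative sketch only, with function names and toy values of our own rather than the paper's data:

```python
import math

def mean_absolute_error(pred, ref):
    """Mean absolute voxel difference, e.g. in Hounsfield units (HU)."""
    return sum(abs(p - r) for p, r in zip(pred, ref)) / len(pred)

def psnr(pred, ref, data_range):
    """Peak signal-to-noise ratio in dB for a given intensity range."""
    mse = sum((p - r) ** 2 for p, r in zip(pred, ref)) / len(pred)
    return 10 * math.log10(data_range ** 2 / mse)

# Toy 1D "images" standing in for generated vs. ground-truth CT voxels.
generated = [40, 60, 45, 55]
truth = [50, 50, 50, 50]
print(mean_absolute_error(generated, truth))  # 7.5 HU on this toy data
print(psnr(generated, truth, 1000))
```

In practice these are computed over full 3D voxel arrays; the list form above only shows the arithmetic.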
Affiliation(s)
- Yang Lei (co-first author), Zhen Tian (co-first author), Tonghe Wang, Kristin Higgins, Jeffrey D Bradley, Walter J Curran, Tian Liu, Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
103
Diniz JOB, Ferreira JL, Diniz PHB, Silva AC, de Paiva AC. Esophagus segmentation from planning CT images using an atlas-based deep learning approach. Comput Methods Programs Biomed 2020;197:105685. [PMID: 32798976] [DOI: 10.1016/j.cmpb.2020.105685]
Abstract
BACKGROUND AND OBJECTIVE One of the main steps in radiotherapy (RT) planning is the segmentation of organs at risk (OARs) in computed tomography (CT). The esophagus is one of the most difficult OARs to segment: its boundaries with the surrounding tissues are poorly defined, and it appears across many slices of the CT. Manually segmenting the esophagus therefore requires considerable experience and time, and this difficulty, combined with fatigue from the number of slices to segment, can lead to human error. To address these challenges, computational solutions for analyzing medical images and proposing automated segmentations have been developed and explored in recent years. In this work, we propose a fully automatic method for esophagus segmentation in CT for better radiotherapy planning. METHODS The proposed method consists of 5 main steps: (a) image acquisition; (b) VOI segmentation; (c) preprocessing; (d) esophagus segmentation; and (e) segmentation refinement. RESULTS The method was applied to a database of 36 CT scans acquired from 3 different institutes. It achieved the best results in the literature so far: a Dice coefficient of 82.15%, Jaccard index of 70.21%, accuracy of 99.69%, sensitivity of 90.61%, specificity of 99.76%, and Hausdorff distance of 6.1030 mm. CONCLUSIONS These results show how promising the method is; applying it in large medical centers, where esophagus segmentation remains an arduous and challenging task, can be of great help to specialists.
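For readers unfamiliar with the overlap metrics this entry reports (Dice coefficient and Jaccard index), a minimal sketch of their definitions on voxel index sets; the toy masks below are our own illustration, not the paper's data:

```python
def dice(a, b):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

def jaccard(a, b):
    """Jaccard index: |A∩B| / |A∪B|."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Toy voxel-coordinate sets standing in for automatic vs. manual contours.
auto = {(0, 0), (0, 1), (1, 0), (1, 1)}
manual = {(0, 1), (1, 0), (1, 1), (2, 1)}
print(dice(auto, manual))     # 0.75: 3 shared voxels, 4 + 4 total
print(jaccard(auto, manual))  # 0.6: 3 shared of 5 in the union
```

Note the two metrics are monotonically related (J = D / (2 - D)), which is why papers often report both only for comparability with prior work.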
Affiliation(s)
- Jonnison Lima Ferreira: Federal University of Maranhão, Brazil; Federal Institute of Amazonas - IFAM, Manaus, AM, Brazil
104
Wang Z, Chang Y, Peng Z, Lv Y, Shi W, Wang F, Pei X, Xu XG. Evaluation of deep learning-based auto-segmentation algorithms for delineating clinical target volume and organs at risk involving data for 125 cervical cancer patients. J Appl Clin Med Phys 2020;21:272-279. [PMID: 33238060] [PMCID: PMC7769393] [DOI: 10.1002/acm2.13097]
Abstract
Objective To compare the accuracy of a deep learning-based auto-segmentation model with that of manual contouring by a medical resident, where both tried to mimic the delineation "habits" of the same senior physician. Methods This study included 125 cervical cancer patients whose clinical target volumes (CTVs) and organs at risk (OARs) were delineated by the same senior physician. Of these 125 cases, 100 were used for model training and the remaining 25 for model testing. In addition, a medical resident instructed by the senior physician for approximately 8 months delineated the CTVs and OARs for the testing cases. The Dice similarity coefficient (DSC) and the Hausdorff distance (HD) were used to evaluate delineation accuracy for the CTV, bladder, rectum, small intestine, and left and right femoral heads. Results The DSC values of the auto-segmentation model and the resident's manual contouring were, respectively, 0.86 and 0.83 for the CTV (P < 0.05), 0.91 and 0.91 for the bladder (P > 0.05), 0.88 and 0.84 for the right femoral head (P < 0.05), 0.88 and 0.84 for the left femoral head (P < 0.05), 0.86 and 0.81 for the small intestine (P < 0.05), and 0.81 and 0.84 for the rectum (P > 0.05). The HD (mm) values were, respectively, 14.84 and 18.37 for the CTV (P < 0.05), 7.82 and 7.63 for the bladder (P > 0.05), 6.18 and 6.75 for the right femoral head (P > 0.05), 6.17 and 6.31 for the left femoral head (P > 0.05), 22.21 and 26.70 for the small intestine (P > 0.05), and 7.04 and 6.13 for the rectum (P > 0.05). The auto-segmentation model took approximately 2 min to delineate the CTV and OARs, while the resident took approximately 90 min for the same task. Conclusion The auto-segmentation model was as accurate as the medical resident but far more efficient. Furthermore, the auto-segmentation approach offers the additional perceivable advantages of being consistent and ever-improving compared with manual approaches.
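The Hausdorff distance (HD) used in this entry as the second accuracy metric is the largest of all nearest-neighbour distances between two contours. A brute-force sketch under our own naming (production implementations operate on dense surface point clouds and often report the HD95 percentile variant instead):

```python
import math

def directed_hausdorff(points_a, points_b):
    """Largest distance from any point in A to its nearest point in B."""
    return max(min(math.dist(a, b) for b in points_b) for a in points_a)

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance: max of the two directed distances."""
    return max(directed_hausdorff(points_a, points_b),
               directed_hausdorff(points_b, points_a))

# Toy 2D contour points (e.g. coordinates in mm).
contour_auto = [(0.0, 0.0), (1.0, 0.0)]
contour_manual = [(0.0, 0.0), (0.0, 2.0)]
print(hausdorff(contour_auto, contour_manual))  # 2.0
```

Because it is a maximum, a single stray point can dominate the HD even when the contours overlap almost perfectly, which is why it complements the overlap-based DSC.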
Affiliation(s)
- Zhi Wang: Center of Radiological Medical Physics, University of Science and Technology of China, Hefei, China; Department of Radiation Oncology, First Affiliated Hospital of Anhui Medical University, Hefei, China
- Yankui Chang: Center of Radiological Medical Physics, University of Science and Technology of China, Hefei, China
- Zhao Peng: Center of Radiological Medical Physics, University of Science and Technology of China, Hefei, China
- Yin Lv: Department of Radiation Oncology, First Affiliated Hospital of Anhui Medical University, Hefei, China
- Weijiong Shi: Department of Radiation Oncology, First Affiliated Hospital of Anhui Medical University, Hefei, China
- Fan Wang: Department of Radiation Oncology, First Affiliated Hospital of Anhui Medical University, Hefei, China
- Xi Pei: Center of Radiological Medical Physics, University of Science and Technology of China, Hefei, China; Anhui Wisdom Technology Co., Ltd., Hefei, Anhui, China
- X George Xu: Center of Radiological Medical Physics, University of Science and Technology of China, Hefei, China
105
Kiser KJ, Ahmed S, Stieb S, Mohamed ASR, Elhalawani H, Park PYS, Doyle NS, Wang BJ, Barman A, Li Z, Zheng WJ, Fuller CD, Giancardo L. PleThora: Pleural effusion and thoracic cavity segmentations in diseased lungs for benchmarking chest CT processing pipelines. Med Phys 2020;47:5941-5952. [PMID: 32749075] [PMCID: PMC7722027] [DOI: 10.1002/mp.14424]
Abstract
This manuscript describes a dataset of thoracic cavity segmentations and discrete pleural effusion segmentations we have annotated on 402 computed tomography (CT) scans acquired from patients with non-small cell lung cancer. The segmentation of these anatomic regions precedes fundamental tasks in image analysis pipelines such as lung structure segmentation, lesion detection, and radiomics feature extraction. Bilateral thoracic cavity volumes and pleural effusion volumes were manually segmented on CT scans acquired from The Cancer Imaging Archive "NSCLC Radiomics" data collection. Four hundred and two thoracic segmentations were first generated automatically by a U-Net based algorithm trained on chest CTs without cancer, manually corrected by a medical student to include the complete thoracic cavity (normal, pathologic, and atelectatic lung parenchyma, lung hilum, pleural effusion, fibrosis, nodules, tumor, and other anatomic anomalies), and revised by a radiation oncologist or a radiologist. Seventy-eight pleural effusions were manually segmented by a medical student and revised by a radiologist or radiation oncologist. Interobserver agreement between the radiation oncologist and radiologist corrections was acceptable. All expert-vetted segmentations are publicly available in NIfTI format through The Cancer Imaging Archive at https://doi.org/10.7937/tcia.2020.6c7y-gq39. Tabular data detailing clinical and technical metadata linked to segmentation cases are also available. Thoracic cavity segmentations will be valuable for developing image analysis pipelines on pathologic lungs - where current automated algorithms struggle most. In conjunction with gross tumor volume segmentations already available from "NSCLC Radiomics," pleural effusion segmentations may be valuable for investigating radiomics profile differences between effusion and primary tumor or training algorithms to discriminate between them.
Affiliation(s)
- Kendall J. Kiser: John P. and Kathrine G. McGovern Medical School, Houston, TX, USA; Center for Precision Health, UTHealth School of Biomedical Informatics, Houston, TX, USA; Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Sara Ahmed: Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Sonja Stieb: Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Abdallah S. R. Mohamed: Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, USA; MD Anderson Cancer Center-UTHealth Graduate School of Biomedical Sciences, Houston, TX, USA
- Hesham Elhalawani: Department of Radiation Oncology, Cleveland Clinic Taussig Cancer Center, Cleveland, OH, USA
- Peter Y. S. Park: Department of Diagnostic and Interventional Imaging, John P. and Kathrine G. McGovern Medical School, Houston, TX, USA
- Nathan S. Doyle: Department of Diagnostic and Interventional Imaging, John P. and Kathrine G. McGovern Medical School, Houston, TX, USA
- Brandon J. Wang: Department of Diagnostic and Interventional Imaging, John P. and Kathrine G. McGovern Medical School, Houston, TX, USA
- Arko Barman: Center for Precision Health, UTHealth School of Biomedical Informatics, Houston, TX, USA
- Zhao Li: Center for Precision Health, UTHealth School of Biomedical Informatics, Houston, TX, USA
- W. Jim Zheng: Center for Precision Health, UTHealth School of Biomedical Informatics, Houston, TX, USA
- Clifton D. Fuller: Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, USA; MD Anderson Cancer Center-UTHealth Graduate School of Biomedical Sciences, Houston, TX, USA
- Luca Giancardo: Center for Precision Health, UTHealth School of Biomedical Informatics, Houston, TX, USA; Department of Radiation Oncology, Cleveland Clinic Taussig Cancer Center, Cleveland, OH, USA
106
Multi-Atlas Based Adaptive Active Contour Model with Application to Organs at Risk Segmentation in Brain MR Images. Ing Rech Biomed 2020. [DOI: 10.1016/j.irbm.2020.10.007]
107
Brouwer CL, Boukerroui D, Oliveira J, Looney P, Steenbakkers RJ, Langendijk JA, Both S, Gooding MJ. Assessment of manual adjustment performed in clinical practice following deep learning contouring for head and neck organs at risk in radiotherapy. Phys Imaging Radiat Oncol 2020;16:54-60. [PMID: 33458344] [PMCID: PMC7807591] [DOI: 10.1016/j.phro.2020.10.001]
Abstract
BACKGROUND AND PURPOSE Auto-contouring performance has been widely studied in development and commissioning studies in radiotherapy, and its impact on clinical workflow assessed in that context. This study aimed to evaluate the manual adjustment of auto-contouring in routine clinical practice and to identify improvements regarding the auto-contouring model and clinical user interaction, to improve the efficiency of auto-contouring. MATERIALS AND METHODS A total of 103 clinical head and neck cancer cases, contoured using a commercial deep-learning contouring system and subsequently checked and edited for clinical use were retrospectively taken from clinical data over a twelve-month period (April 2019-April 2020). The amount of adjustment performed was calculated, and all cases were registered to a common reference frame for assessment purposes. The median, 10th and 90th percentile of adjustment were calculated and displayed using 3D renderings of structures to visually assess systematic and random adjustment. Results were also compared to inter-observer variation reported previously. Assessment was performed for both the whole structures and for regional sub-structures, and according to the radiation therapy technologist (RTT) who edited the contour. RESULTS The median amount of adjustment was low for all structures (<2 mm), although large local adjustment was observed for some structures. The median was systematically greater or equal to zero, indicating that the auto-contouring tends to under-segment the desired contour. CONCLUSION Auto-contouring performance assessment in routine clinical practice has identified systematic improvements required technically, but also highlighted the need for continued RTT training to ensure adherence to guidelines.
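The median, 10th, and 90th percentile adjustments summarized in this entry can be computed from per-point contour edit distances with a simple linear-interpolation percentile. A sketch with hypothetical adjustment distances in mm (the function and values are ours, not the study's):

```python
def percentile(values, p):
    """p-th percentile with linear interpolation between sorted samples."""
    s = sorted(values)
    k = (len(s) - 1) * p / 100
    lo = int(k)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

# Hypothetical per-point contour adjustment distances (mm) for one structure.
adjustments = [0.0, 0.2, 0.5, 1.0, 1.5, 2.0, 3.5, 5.0, 8.0, 12.0]
print(percentile(adjustments, 50))  # median adjustment
print(percentile(adjustments, 90))  # tail dominated by large local edits
```

A low median with a large 90th percentile, as in this toy data, matches the study's finding of small typical adjustment but large local edits on some structures.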
Affiliation(s)
- Charlotte L. Brouwer: University of Groningen, University Medical Center Groningen, Department of Radiation Oncology, Groningen, The Netherlands
- Roel J.H.M. Steenbakkers: University of Groningen, University Medical Center Groningen, Department of Radiation Oncology, Groningen, The Netherlands
- Johannes A. Langendijk: University of Groningen, University Medical Center Groningen, Department of Radiation Oncology, Groningen, The Netherlands
- Stefan Both: University of Groningen, University Medical Center Groningen, Department of Radiation Oncology, Groningen, The Netherlands
108
Zhu J, Chen X, Yang B, Bi N, Zhang T, Men K, Dai J. Evaluation of Automatic Segmentation Model With Dosimetric Metrics for Radiotherapy of Esophageal Cancer. Front Oncol 2020;10:564737. [PMID: 33117694] [PMCID: PMC7550908] [DOI: 10.3389/fonc.2020.564737]
Abstract
Background and Purpose: Automatic segmentation models have proven efficient for delineating organs at risk (OARs) in radiotherapy; their performance is usually evaluated by the geometric differences between automatic and manual delineations. In the clinic, however, dosimetric differences are of greater interest than geometric ones. This study therefore evaluated the performance of automatic segmentation with dosimetric metrics for volumetric modulated arc therapy of esophageal cancer patients. Methods: Nineteen esophageal cancer cases were included. Clinicians manually delineated the target volumes and OARs for each case; another set of OARs was generated automatically using convolutional neural network models. Radiotherapy plans were optimized with the manually delineated targets and the automatically delineated OARs separately. Segmentation accuracy was evaluated by the Dice similarity coefficient (DSC) and mean distance to agreement (MDA). Dosimetric metrics of the manually and automatically delineated OARs were obtained and compared, with the clinically acceptable dose and volume differences between the two delineations set at 1 Gy and 1%, respectively. Results: Average DSC values were greater than 0.92 except for the spinal cord (0.82), and average MDA values were below 0.90 mm except for the heart (1.74 mm). Eleven of the 20 dosimetric metrics of the OARs showed no significant difference (P > 0.05). Although there were significant differences (P < 0.05) for the spinal cord (D2%), left lung (V10, V20, V30, and mean dose), and bilateral lung (V10, V20, V30, and mean dose), the absolute differences were small and clinically acceptable. The maximum dosimetric differences between manual and automatic delineations were ΔD2% = 0.35 Gy for the spinal cord and ΔV30 = 0.4% for the bilateral lung, both within the clinical criteria of this study. Conclusion: Dosimetric metrics were proposed to evaluate automatic delineation in radiotherapy planning of esophageal cancer. Based on this dosimetric evaluation, automatic delineation could substitute for manual delineation in esophageal cancer radiotherapy planning.
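The 1 Gy / 1% acceptance criteria used in this entry amount to a simple tolerance check on paired dose-volume metrics. A sketch with hypothetical values mirroring the reported ΔD2% and ΔV30 differences (the helper name and numbers are ours):

```python
def clinically_acceptable(dose_manual_gy, dose_auto_gy,
                          vol_manual_pct, vol_auto_pct,
                          dose_tol_gy=1.0, vol_tol_pct=1.0):
    """True when both the dose and volume differences between manual and
    automatic delineations fall within the stated clinical tolerances."""
    return (abs(dose_manual_gy - dose_auto_gy) <= dose_tol_gy and
            abs(vol_manual_pct - vol_auto_pct) <= vol_tol_pct)

# Hypothetical spinal-cord D2% (Gy) and bilateral-lung V30 (%) pairs.
print(clinically_acceptable(44.0, 44.35, 30.0, 30.4))  # within 1 Gy and 1%
print(clinically_acceptable(44.0, 45.5, 30.0, 30.4))   # 1.5 Gy exceeds 1 Gy
```

Per-structure tolerances would differ in a real protocol; the point is only that dosimetric acceptance reduces to thresholding paired differences.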
Affiliation(s)
- Ji Zhu, Xinyuan Chen, Bining Yang, Nan Bi, Tao Zhang, Kuo Men, Jianrong Dai: National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
109
Hofmanninger J, Prayer F, Pan J, Röhrich S, Prosch H, Langs G. Automatic lung segmentation in routine imaging is primarily a data diversity problem, not a methodology problem. Eur Radiol Exp 2020;4:50. [PMID: 32814998] [PMCID: PMC7438418] [DOI: 10.1186/s41747-020-00173-2]
Abstract
BACKGROUND Automated segmentation of anatomical structures is a crucial step in image analysis. For lung segmentation in computed tomography, a variety of approaches exists, involving sophisticated pipelines trained and validated on different datasets. However, the clinical applicability of these approaches across diseases remains limited. METHODS We compared four generic deep learning approaches trained on various datasets and two readily available lung segmentation algorithms, evaluating them on routine imaging data covering more than six disease patterns and on three published datasets. RESULTS Across the different deep learning approaches, mean Dice similarity coefficients (DSCs) on the test datasets varied by no more than 0.02. When trained on a diverse routine dataset (n = 36), a standard approach (U-net) yielded a higher DSC (0.97 ± 0.05) than when trained on public datasets such as the Lung Tissue Research Consortium (0.94 ± 0.13, p = 0.024) or Anatomy 3 (0.92 ± 0.15, p = 0.001). Trained on routine data (n = 231) covering multiple diseases, U-net yielded a DSC of 0.98 ± 0.03 versus 0.94 ± 0.12 for the reference methods (p = 0.024). CONCLUSIONS The accuracy and reliability of lung segmentation algorithms on demanding cases depend primarily on the diversity of the training data rather than on the choice of model. Efforts in developing new datasets and providing trained models to the public are critical. By releasing the trained model under the General Public License 3.0, we aim to foster research on lung diseases by providing a readily available tool for segmentation of pathological lungs.
Affiliation(s)
- Johannes Hofmanninger, Florian Prayer, Jeanny Pan, Sebastian Röhrich, Helmut Prosch, Georg Langs: Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Waehringer Guertel 18-20, Vienna, Austria
110
Yang WC, Hsu FM, Yang PC. Precision radiotherapy for non-small cell lung cancer. J Biomed Sci 2020;27:82. [PMID: 32693792] [PMCID: PMC7374898] [DOI: 10.1186/s12929-020-00676-5]
Abstract
Precision medicine is becoming the standard of care in anti-cancer treatment. Personalized precision management of cancer patients relies heavily on advances in next-generation sequencing and high-throughput processing of biological and radiographic big data. Systemic precision cancer therapy has been in development for years, but the role of precision medicine in radiotherapy has not yet been fully realized. Emerging evidence shows that precision radiotherapy for cancer patients is possible with recent advances in radiotherapy technologies, panomics, radiomics, and dosiomics. This review focuses on the role of precision radiotherapy in non-small cell lung cancer and outlines the current landscape.
Affiliation(s)
- Wen-Chi Yang: Division of Radiation Oncology, Department of Oncology, National Taiwan University Hospital, No. 7, Chung-Shan South Rd, Taipei, Taiwan; Graduate Institute of Oncology, National Taiwan University College of Medicine, Taipei, Taiwan
- Feng-Ming Hsu: Division of Radiation Oncology, Department of Oncology, National Taiwan University Hospital, No. 7, Chung-Shan South Rd, Taipei, Taiwan; Graduate Institute of Oncology, National Taiwan University College of Medicine, Taipei, Taiwan
- Pan-Chyr Yang: Graduate Institute of Oncology, National Taiwan University College of Medicine, Taipei, Taiwan; Department of Internal Medicine, National Taiwan University Hospital, No. 1 Sec 1, Jen-Ai Rd, Taipei 100, Taiwan
111
Chen W, Li Y, Dyer BA, Feng X, Rao S, Benedict SH, Chen Q, Rong Y. Deep learning vs. atlas-based models for fast auto-segmentation of the masticatory muscles on head and neck CT images. Radiat Oncol 2020;15:176. [PMID: 32690103] [PMCID: PMC7372849] [DOI: 10.1186/s13014-020-01617-0]
Abstract
BACKGROUND Impaired function of the masticatory muscles leads to trismus. Routine delineation of these muscles during planning may improve dose tracking and facilitate dose reduction, decreasing radiation-related trismus. This study compared a deep learning model with a commercial atlas-based model for fast auto-segmentation of the masticatory muscles on head and neck computed tomography (CT) images. MATERIAL AND METHODS Paired masseter (M), temporalis (T), and medial and lateral pterygoid (MP, LP) muscles were manually segmented on 56 CT images, randomly divided into training (n = 27) and validation (n = 29) cohorts. Two methods were used for automatic delineation of the masticatory muscles (MMs): deep learning auto-segmentation (DLAS) and atlas-based auto-segmentation (ABAS). The automatic algorithms were evaluated using the Dice similarity coefficient (DSC), recall, precision, Hausdorff distance (HD), HD95, and mean surface distance (MSD). A consolidated score was calculated by normalizing the metrics against interobserver variability and averaging over all patients. Differences in dose (∆Dose) to the MMs between DLAS and ABAS segmentations were assessed, and a paired t-test was used to compare the geometric and dosimetric differences between the two methods. RESULTS DLAS outperformed ABAS in delineating all MMs (p < 0.05). The DLAS mean DSC for M, T, MP, and LP ranged from 0.83 ± 0.03 to 0.89 ± 0.02; the ABAS mean DSC ranged from 0.79 ± 0.05 to 0.85 ± 0.04. Mean recall, HD, HD95, and MSD also improved with DLAS. Interobserver variation was highest in DSC and MSD for both T and MP, and both automatic algorithms achieved their highest scores for T. With few exceptions, the mean ∆D98%, ∆D95%, ∆D50%, and ∆D2% for all structures were below 10% for both DLAS and ABAS, with no detectable statistical difference (P > 0.05). DLAS-based contours matched the dose endpoints of the manually segmented contours more closely than ABAS-based contours did. CONCLUSIONS DLAS auto-segmentation of the masticatory muscles for head and neck radiotherapy was more accurate than ABAS, with no qualitative difference in dosimetric endpoints compared to manually segmented contours.
Affiliation(s)
- Wen Chen: Department of Radiation Oncology, Xiangya Hospital, Central South University, Changsha, China; Department of Radiation Oncology, University of California Davis Medical Center, 4501 X Street, Suite 0152, Sacramento, CA 95817, USA
- Yimin Li: Department of Radiation Oncology, Xiamen Cancer Center, The First Affiliated Hospital of Xiamen University, Xiamen, Fujian, China
- Brandon A Dyer: Department of Radiation Oncology, University of California Davis Medical Center, 4501 X Street, Suite 0152, Sacramento, CA 95817, USA; Department of Radiation Oncology, University of Washington, Seattle, WA, USA
- Xue Feng: Carina Medical LLC, 145 Graham Ave, A168, Lexington, KY 40536, USA
- Shyam Rao: Department of Radiation Oncology, University of California Davis Medical Center, 4501 X Street, Suite 0152, Sacramento, CA 95817, USA
- Stanley H Benedict: Department of Radiation Oncology, University of California Davis Medical Center, 4501 X Street, Suite 0152, Sacramento, CA 95817, USA
- Quan Chen: Carina Medical LLC, 145 Graham Ave, A168, Lexington, KY 40536, USA; Department of Radiation Oncology, Markey Cancer Center, University of Kentucky, RM CC063, 800 Rose St, Lexington, KY 40536, USA
- Yi Rong: Department of Radiation Oncology, University of California Davis Medical Center, 4501 X Street, Suite 0152, Sacramento, CA 95817, USA
112
|
Men K, Geng H, Biswas T, Liao Z, Xiao Y. Automated Quality Assurance of OAR Contouring for Lung Cancer Based on Segmentation With Deep Active Learning. Front Oncol 2020; 10:986. [PMID: 32719742] [PMCID: PMC7350536] [DOI: 10.3389/fonc.2020.00986]
Abstract
Purpose: Ensuring high-quality data for clinical trials in radiotherapy requires the generation of contours that comply with protocol definitions. The current workflow includes a manual review of the submitted contours, which is time-consuming and subjective. In this study, we developed an automated quality assurance (QA) system for lung cancer based on a segmentation model trained with deep active learning. Methods: The data included a gold atlas with 36 cases and 110 cases from the "NRG Oncology/RTOG 1308 Trial". The first 70 cases enrolled in RTOG 1308 formed the candidate set, and the remaining 40 cases were randomly assigned to validation and test sets (each with 20 cases). The organs-at-risk included the heart, esophagus, spinal cord, and lungs. A preliminary convolutional neural network segmentation model was trained with the gold standard atlas. To compensate for the limited training data, quality images selected from the candidate set were added to the training set for fine-tuning of the model with deep active learning. The resulting robust segmentation models were used for QA purposes. The segmentation evaluation metrics derived from the validation set, including the Dice coefficient and Hausdorff distance, were used to develop the criteria for QA decision making. The performance of the strategy was assessed using the test set. Results: The QA method achieved promising contouring-error detection, with the following metrics for the heart, esophagus, spinal cord, left lung, and right lung: balanced accuracy, 0.96, 0.95, 0.96, 0.97, and 0.97, respectively; sensitivity, 0.95, 0.98, 0.96, 1.0, and 1.0, respectively; specificity, 0.98, 0.92, 0.97, 0.94, and 0.94, respectively; and area under the receiver operating characteristic curve, 0.96, 0.95, 0.96, 0.97, and 0.94, respectively. Conclusions: The proposed system automatically detected contour errors for QA.
It could provide consistent and objective evaluations with much reduced investigator intervention in multicenter clinical trials.
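The QA decision metrics reported above come from a standard binary confusion matrix over flagged contours. A small illustrative computation; the labels below are invented, not trial data:

```python
def qa_metrics(y_true, y_pred):
    """Sensitivity, specificity and balanced accuracy for a binary
    contour-error classifier (1 = contour flagged as erroneous)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sens = tp / (tp + fn)                 # true-positive rate
    spec = tn / (tn + fp)                 # true-negative rate
    return sens, spec, 0.5 * (sens + spec)  # balanced accuracy

# hypothetical ground-truth error labels vs. automated QA decisions
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]
sens, spec, bal = qa_metrics(y_true, y_pred)
print(sens, spec, bal)  # 0.75 0.75 0.75
```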
Affiliation(s)
- Kuo Men
- University of Pennsylvania, Philadelphia, PA, United States
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Huaizhi Geng
- University of Pennsylvania, Philadelphia, PA, United States
- Tithi Biswas
- UH Cleveland Medical Center, Cleveland, OH, United States
- Zhongxing Liao
- MD Anderson Cancer Center, The University of Texas, Houston, TX, United States
- Ying Xiao
- University of Pennsylvania, Philadelphia, PA, United States

113
Brunenberg EJ, Steinseifer IK, van den Bosch S, Kaanders JH, Brouwer CL, Gooding MJ, van Elmpt W, Monshouwer R. External validation of deep learning-based contouring of head and neck organs at risk. Phys Imaging Radiat Oncol 2020; 15:8-15. [PMID: 33458320] [PMCID: PMC7807543] [DOI: 10.1016/j.phro.2020.06.006]
Abstract
BACKGROUND AND PURPOSE Head and neck (HN) radiotherapy can benefit from automatic delineation of tumor and surrounding organs because of the complex anatomy and the regular need for adaptation. The aim of this study was to assess the performance of a commercially available deep learning contouring (DLC) model on an external validation set. MATERIALS AND METHODS The CT-based DLC model, trained at the University Medical Center Groningen (UMCG), was applied to an independent set of 58 patients from the Radboud University Medical Center (RUMC). DLC results were compared to the RUMC manual reference using the Dice similarity coefficient (DSC) and 95th percentile of Hausdorff distance (HD95). Craniocaudal spatial information was added by calculating binned measures. In addition, a qualitative evaluation compared the acceptance of manual and DLC contours in both groups of observers. RESULTS Good correspondence was shown for the mandible (DSC 0.90; HD95 3.6 mm). Performance was reasonable for the glandular OARs, brainstem and oral cavity (DSC 0.78-0.85, HD95 3.7-7.3 mm). The other aerodigestive tract OARs showed only moderate agreement (DSC 0.53-0.65, HD95 around 9 mm). The binned measures displayed the largest deviations caudally and/or cranially. CONCLUSIONS This study demonstrates that the DLC model can provide a reasonable starting point for delineation when applied to an independent patient cohort. The qualitative evaluation did not reveal large differences in the interpretation of contouring guidelines between RUMC and UMCG observers.
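HD95 as used above is the 95th percentile of the symmetric nearest-neighbour surface distances. A brute-force sketch on point clouds, not the validation code itself:

```python
import numpy as np

def hd95(pts_a, pts_b):
    """95th-percentile symmetric Hausdorff distance between two point sets
    (e.g. contour surface voxels), in the units of the coordinates."""
    a, b = np.asarray(pts_a, float), np.asarray(pts_b, float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # all pairwise distances
    # nearest-neighbour distances in both directions, pooled before the percentile
    return np.percentile(np.concatenate([d.min(axis=1), d.min(axis=0)]), 95)

# two parallel toy "contours" 3 mm apart: every nearest-neighbour distance is 3
a = [[x, 0.0] for x in range(5)]
b = [[x, 3.0] for x in range(5)]
print(hd95(a, b))  # 3.0
```

The brute-force pairwise matrix is fine for contour-sized point sets; production implementations typically use a distance transform or a KD-tree instead.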
Affiliation(s)
- Ellen J.L. Brunenberg
- Department of Radiation Oncology, Radboud University Medical Center, Nijmegen, The Netherlands
- Isabell K. Steinseifer
- Department of Radiation Oncology, Radboud University Medical Center, Nijmegen, The Netherlands
- Sven van den Bosch
- Department of Radiation Oncology, Radboud University Medical Center, Nijmegen, The Netherlands
- Charlotte L. Brouwer
- Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
- Wouter van Elmpt
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, The Netherlands
- René Monshouwer
- Department of Radiation Oncology, Radboud University Medical Center, Nijmegen, The Netherlands

114
Peng Z, Fang X, Yan P, Shan H, Liu T, Pei X, Wang G, Liu B, Kalra MK, Xu XG. A method of rapid quantification of patient-specific organ doses for CT using deep-learning-based multi-organ segmentation and GPU-accelerated Monte Carlo dose computing. Med Phys 2020; 47:2526-2536. [PMID: 32155670] [DOI: 10.1002/mp.14131]
Abstract
PURPOSE One technical barrier to patient-specific computed tomography (CT) dosimetry has been the lack of computational tools for the automatic patient-specific multi-organ segmentation of CT images and rapid organ dose quantification. When previous CT images are available for the same body region of the patient, the ability to obtain patient-specific organ doses for CT - in a similar manner as radiation therapy treatment planning - will open the door to personalized and prospective CT scan protocols. This study aims to demonstrate the feasibility of combining deep-learning algorithms for automatic segmentation of multiple radiosensitive organs from CT images with the GPU-based Monte Carlo rapid organ dose calculation. METHODS A deep convolutional neural network (CNN) based on the U-Net for organ segmentation is developed and trained to automatically delineate multiple radiosensitive organs from CT images. Two databases are used: The lung CT segmentation challenge 2017 (LCTSC) dataset that contains 60 thoracic CT scan patients, each consisting of five segmented organs, and the Pancreas-CT (PCT) dataset, which contains 43 abdominal CT scan patients each consisting of eight segmented organs. A fivefold cross-validation method is performed on both sets of data. Dice similarity coefficients (DSCs) are used to evaluate the segmentation performance against the ground truth. A GPU-based Monte Carlo dose code, ARCHER, is used to calculate patient-specific CT organ doses. The proposed method is evaluated in terms of relative dose errors (RDEs). To demonstrate the potential improvement of the new method, organ dose results are compared against those obtained for population-average patient phantoms used in an off-line dose reporting software, VirtualDose, at Massachusetts General Hospital. 
RESULTS The median DSCs are 0.97 (right lung), 0.96 (left lung), 0.92 (heart), 0.86 (spinal cord), and 0.76 (esophagus) for the LCTSC dataset, along with 0.96 (spleen), 0.96 (liver), 0.95 (left kidney), 0.90 (stomach), 0.87 (gall bladder), 0.80 (pancreas), 0.75 (esophagus), and 0.61 (duodenum) for the PCT dataset. Compared with organ dose results from population-averaged phantoms, the new patient-specific method achieved smaller absolute RDEs (mean ± standard deviation) for all organs: 1.8% ± 1.4% (vs 16.0% ± 11.8%) for the lung, 0.8% ± 0.7% (vs 34.0% ± 31.1%) for the heart, 1.6% ± 1.7% (vs 45.7% ± 29.3%) for the esophagus, 0.6% ± 1.2% (vs 15.8% ± 12.7%) for the spleen, 1.2% ± 1.0% (vs 18.1% ± 15.7%) for the pancreas, 0.9% ± 0.6% (vs 20.0% ± 15.2%) for the left kidney, 1.7% ± 3.1% (vs 19.1% ± 9.8%) for the gallbladder, 0.3% ± 0.3% (vs 24.2% ± 18.7%) for the liver, and 1.6% ± 1.7% (vs 19.3% ± 13.6%) for the stomach. The trained automatic segmentation tool takes <5 s per patient for all 103 patients in the dataset. The Monte Carlo dose calculations, performed in parallel with the segmentation process using the GPU-accelerated ARCHER code, take <4 s per patient to achieve <0.5% statistical uncertainty in all organ doses for all 103 patients in the database. CONCLUSION This work shows the feasibility of performing combined automatic patient-specific multi-organ segmentation of CT images and rapid GPU-based Monte Carlo dose quantification with clinically acceptable accuracy and efficiency.
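The relative dose error (RDE) figures above compare each estimated organ dose against a reference; in its simplest form it is a percentage deviation. A sketch using the absolute value (the paper's exact sign convention is not stated here, so this is an assumption), with invented dose numbers:

```python
def relative_dose_error(d_est: float, d_ref: float) -> float:
    """Absolute relative dose error in percent: |D_est - D_ref| / D_ref * 100."""
    return 100.0 * abs(d_est - d_ref) / d_ref

# hypothetical lung doses (mGy): patient-specific estimate vs. reference calculation
print(round(relative_dose_error(10.2, 10.0), 2))  # 2.0
```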
Affiliation(s)
- Zhao Peng
- Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, Anhui, 230026, China
- Xi Fang
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, 12180, USA
- Pingkun Yan
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, 12180, USA
- Hongming Shan
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, 12180, USA
- Tianyu Liu
- Department of Mechanical, Aerospace and Nuclear Engineering, Rensselaer Polytechnic Institute, Troy, NY, 12180, USA
- Xi Pei
- Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, Anhui, 230026, China; Anhui Wisdom Technology Company Limited, Hefei, Anhui, 238000, China
- Ge Wang
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, 12180, USA
- Bob Liu
- Department of Radiology, Massachusetts General Hospital, Boston, MA, 02114, USA
- Mannudeep K Kalra
- Department of Radiology, Massachusetts General Hospital, Boston, MA, 02114, USA
- X George Xu
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, 12180, USA; Department of Mechanical, Aerospace and Nuclear Engineering, Rensselaer Polytechnic Institute, Troy, NY, 12180, USA

115
Feng X, Bernard ME, Hunter T, Chen Q. Improving accuracy and robustness of deep convolutional neural network based thoracic OAR segmentation. Phys Med Biol 2020; 65:07NT01. [PMID: 32079002] [DOI: 10.1088/1361-6560/ab7877]
Abstract
Deep convolutional neural networks (DCNNs) have shown great success in various medical image segmentation tasks, including organ-at-risk (OAR) segmentation from computed tomography (CT) images. However, most studies use data from the same source(s) for training and testing, so neither the ability of a trained DCNN to generalize to a different dataset nor strategies to address the resulting performance drop are well studied. In this study we investigated the performance on a local dataset of a DCNN model well trained on a public dataset for thoracic OAR segmentation, and explored the systematic differences between the datasets. We observed that a subtle shift of organs inside the patient body, caused by the abdominal compression technique used during image acquisition, led to significantly worse performance on the local dataset. Furthermore, we developed an optimal strategy, incorporating different numbers of new cases from the local institution and using transfer learning, to improve the accuracy and robustness of the trained DCNN model. We found that by adding as few as 10 cases from the local institution, the performance could reach the same level as on the original dataset. With transfer learning, the training time could be significantly shortened, with slightly worse performance for heart segmentation.
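The transfer-learning strategy described, reusing a model trained on the public dataset and fine-tuning it on a handful of local cases, amounts to freezing pretrained layers and re-optimizing the rest. A generic PyTorch sketch; the tiny stand-in network, the layer split, and the hyperparameters are illustrative assumptions, not the authors' setup:

```python
import torch
import torch.nn as nn

# a stand-in for a pretrained segmentation DCNN (the real model is U-Net-like)
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),  # "encoder" layers: keep pretrained weights
    nn.Conv2d(8, 2, 1),                        # output head: adapt to the local dataset
)

# transfer learning: freeze the early layers, fine-tune only the head
for p in model[0].parameters():
    p.requires_grad = False

trainable = [p for p in model.parameters() if p.requires_grad]
opt = torch.optim.Adam(trainable, lr=1e-4)  # fine-tune on the few local cases
```

Freezing most weights is what shortens the training time; only the head's gradients are computed and updated.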
Affiliation(s)
- Xue Feng
- Department of Biomedical Engineering, University of Virginia, Charlottesville, VA 22903, United States of America; Carina Medical LLC, 145 Graham Ave, A168, Lexington, KY 40536, United States of America

116
Yang J, Veeraraghavan H, van Elmpt W, Dekker A, Gooding M, Sharp G. CT images with expert manual contours of thoracic cancer for benchmarking auto-segmentation accuracy. Med Phys 2020; 47:3250-3255. [PMID: 32128809] [DOI: 10.1002/mp.14107]
Abstract
PURPOSE Automatic segmentation offers many benefits for radiotherapy treatment planning; however, the lack of publicly available benchmark datasets limits the clinical use of automatic segmentation. In this work, we present a well-curated computed tomography (CT) dataset of high-quality manually drawn contours from patients with thoracic cancer that can be used to evaluate the accuracy of thoracic normal tissue auto-segmentation systems. ACQUISITION AND VALIDATION METHODS Computed tomography scans of 60 patients undergoing treatment simulation for thoracic radiotherapy were acquired from three institutions: MD Anderson Cancer Center, Memorial Sloan Kettering Cancer Center, and the MAASTRO clinic. Each institution provided CT scans from 20 patients, including mean intensity projection four-dimensional CT (4D CT), exhale phase (4D CT), or free-breathing CT scans depending on their clinical practice. All CT scans covered the entire thoracic region with a 50-cm field of view and slice spacing of 1, 2.5, or 3 mm. Manual contours of left/right lungs, esophagus, heart, and spinal cord were retrieved from the clinical treatment plans. These contours were checked for quality and edited if necessary to ensure adherence to RTOG 1106 contouring guidelines. DATA FORMAT AND USAGE NOTES The CT images and RTSTRUCT files are available in DICOM format. The regions of interest were named according to the nomenclature recommended by American Association of Physicists in Medicine Task Group 263 as Lung_L, Lung_R, Esophagus, Heart, and SpinalCord. This dataset is available on The Cancer Imaging Archive (funded by the National Cancer Institute) under Lung CT Segmentation Challenge 2017 (http://doi.org/10.7937/K9/TCIA.2017.3r3fvz08). POTENTIAL APPLICATIONS This dataset provides CT scans with well-delineated manually drawn contours from patients with thoracic cancer that can be used to evaluate auto-segmentation systems. 
Additional anatomies could be supplied in the future to enhance the existing library of contours.
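The fixed TG-263 names listed above make automated nomenclature checks straightforward. A sketch; `nonconforming` is a hypothetical helper, not part of the dataset's tooling:

```python
# TG-263 names used in the Lung CT Segmentation Challenge 2017 dataset
TG263_ROIS = {"Lung_L", "Lung_R", "Esophagus", "Heart", "SpinalCord"}

def nonconforming(roi_names):
    """Return ROI names that do not match the dataset's TG-263 nomenclature."""
    return sorted(set(roi_names) - TG263_ROIS)

print(nonconforming(["Lung_L", "heart", "SpinalCord", "Esophagus "]))
# ['Esophagus ', 'heart']  -- case and whitespace deviations are flagged
```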
Affiliation(s)
- Jinzhong Yang
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Harini Veeraraghavan
- Department of Medical Physics, Memorial Sloan Kettering Cancer Centre, New York, NY, USA
- Wouter van Elmpt
- Department of Radiation Oncology (MAASTRO), GROW - School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht, The Netherlands
- Andre Dekker
- Department of Radiation Oncology (MAASTRO), GROW - School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht, The Netherlands
- Greg Sharp
- Department of Radiation Oncology, Massachusetts General Hospital, Boston, MA, USA

117
Nemoto T, Futakami N, Yagi M, Kumabe A, Takeda A, Kunieda E, Shigematsu N. Efficacy evaluation of 2D, 3D U-Net semantic segmentation and atlas-based segmentation of normal lungs excluding the trachea and main bronchi. J Radiat Res 2020; 61:257-264. [PMID: 32043528] [PMCID: PMC7246058] [DOI: 10.1093/jrr/rrz086]
Abstract
This study aimed to examine the efficacy of semantic segmentation implemented by deep learning and to confirm whether this method is more effective than a commercially dominant auto-segmentation tool for delineating normal lung excluding the trachea and main bronchi. A total of 232 non-small-cell lung cancer cases were examined. The computed tomography (CT) images of these cases were converted from Digital Imaging and Communications in Medicine (DICOM) Radiation Therapy (RT) formats to arrays of 32 × 128 × 128 voxels and input into both 2D and 3D U-Net, deep learning networks for semantic segmentation. The training, validation and test sets comprised 160, 40 and 32 cases, respectively. Dice similarity coefficients (DSCs) on the test set were evaluated for Smart Segmentation® Knowledge Based Contouring (an atlas-based segmentation tool), as well as for the 2D and 3D U-Nets. The mean DSCs of the test set were 0.964 [95% confidence interval (CI), 0.960-0.968], 0.990 (95% CI, 0.989-0.992) and 0.990 (95% CI, 0.989-0.991) with Smart Segmentation, 2D and 3D U-Net, respectively. Compared with Smart Segmentation, both U-Nets achieved significantly higher DSCs by the Wilcoxon signed-rank test (P < 0.01). There was no difference in mean DSC between the 2D and 3D U-Net systems. The newly devised 2D and 3D U-Net approaches were found to be more effective than the commercial auto-segmentation tool. Even the relatively shallow 2D U-Net, which does not require high-performance computational resources, was effective enough for lung segmentation. Semantic segmentation using deep learning was useful in radiation treatment planning for lung cancers.
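The significance test above is a Wilcoxon signed-rank test on paired per-case DSCs; with `scipy` that is a single call. The numbers below are invented stand-ins in the reported range, not the study's data:

```python
import numpy as np
from scipy.stats import wilcoxon

# paired per-case DSCs: U-Net vs. atlas-based tool (hypothetical values)
dsc_unet = np.array([0.991, 0.989, 0.992, 0.990, 0.988,
                     0.991, 0.993, 0.990, 0.989, 0.992])
dsc_atlas = dsc_unet - np.round(np.linspace(0.021, 0.030, 10), 3)  # consistently lower

stat, p = wilcoxon(dsc_unet, dsc_atlas)  # two-sided test on the paired differences
print(p < 0.01)  # True: every paired difference favours the U-Net
```

The signed-rank test is appropriate here because the per-case DSC differences are paired and not assumed normal.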
Affiliation(s)
- Takafumi Nemoto
- Division of Radiation Oncology, Saiseikai Yokohamashi Tobu-Hospital, Shimosueyoshi 3-6-1, Tsurumi-ku, Yokohama-shi, Kanagawa, 230-8765, Japan
- Department of Radiology, Keio University School of Medicine, Shinanomachi 35, Shinjyuku-ku, Tokyo, 160-8582, Japan
- Natsumi Futakami
- Department of Radiation Oncology, Tokai University School of Medicine, Shimokasuya 143, Isehara-shi, Kanagawa, 259-1143, Japan
- Masamichi Yagi
- HPC&AI Business Dept., Platform Technical Engineer Div., System Platform Solution Unit, Fujitsu Limited, World Trade Center Building, 4-1, Hamamatsucho 2-chome, Minato-ku, Tokyo, 105-6125, Japan
- Atsuhiro Kumabe
- Department of Radiology, Keio University School of Medicine, Shinanomachi 35, Shinjyuku-ku, Tokyo, 160-8582, Japan
- Atsuya Takeda
- Radiation Oncology Center, Ofuna Chuo Hospital, Kamakura, 247-0056, Japan
- Etsuo Kunieda
- Department of Radiation Oncology, Tokai University School of Medicine, Shimokasuya 143, Isehara-shi, Kanagawa, 259-1143, Japan
- Naoyuki Shigematsu
- Department of Radiology, Keio University School of Medicine, Shinanomachi 35, Shinjyuku-ku, Tokyo, 160-8582, Japan

118
Armato SG, Farahani K, Zaidi H. Biomedical image analysis challenges should be considered as an academic exercise, not an instrument that will move the field forward in a real, practical way. Med Phys 2020; 47:2325-2328. [PMID: 32040865] [DOI: 10.1002/mp.14081]
Affiliation(s)
- Samuel G Armato
- Department of Radiology, The University of Chicago, 5841 S. Maryland Ave., Chicago, IL, 60637, USA
- Keyvan Farahani
- Center for Biomedical Imaging and Information Technology, National Cancer Institute, Bethesda, Maryland, USA

119
El Naqa I, Haider MA, Giger ML, Ten Haken RK. Artificial Intelligence: reshaping the practice of radiological sciences in the 21st century. Br J Radiol 2020; 93:20190855. [PMID: 31965813] [PMCID: PMC7055429] [DOI: 10.1259/bjr.20190855]
Abstract
Advances in computing hardware and software platforms have led to the recent resurgence in artificial intelligence (AI) touching almost every aspect of our daily lives by its capability for automating complex tasks or providing superior predictive analytics. AI applications are currently spanning many diverse fields from economics to entertainment, to manufacturing, as well as medicine. Since modern AI's inception decades ago, practitioners in radiological sciences have been pioneering its development and implementation in medicine, particularly in areas related to diagnostic imaging and therapy. In this anniversary article, we embark on a journey to reflect on the learned lessons from past AI's chequered history. We further summarize the current status of AI in radiological sciences, highlighting, with examples, its impressive achievements and effect on re-shaping the practice of medical imaging and radiotherapy in the areas of computer-aided detection, diagnosis, prognosis, and decision support. Moving beyond the commercial hype of AI into reality, we discuss the current challenges to overcome, for AI to achieve its promised hope of providing better precision healthcare for each patient while reducing cost burden on their families and the society at large.
Affiliation(s)
- Issam El Naqa
- Department of Radiation Oncology, University of Michigan, Ann Arbor, MI, USA
- Masoom A Haider
- Department of Medical Imaging and Lunenfeld-Tanenbaum Research Institute, University of Toronto, Toronto, ON, Canada
- Randall K Ten Haken
- Department of Radiation Oncology, University of Michigan, Ann Arbor, MI, USA

120
Hagan M, Kapoor R, Michalski J, Sandler H, Movsas B, Chetty I, Lally B, Rengan R, Robinson C, Rimner A, Simone C, Timmerman R, Zelefsky M, DeMarco J, Hamstra D, Lawton C, Potters L, Valicenti R, Mutic S, Bosch W, Abraham C, Caruthers D, Brame R, Palta JR, Sleeman W, Nalluri J. VA-Radiation Oncology Quality Surveillance Program. Int J Radiat Oncol Biol Phys 2020; 106:639-647. [PMID: 31983560] [DOI: 10.1016/j.ijrobp.2019.08.064]
Abstract
PURPOSE We sought to develop a quality surveillance program for the approximately 15,000 US veterans treated each year at the 40 radiation oncology facilities of the Veterans Affairs (VA) hospitals. METHODS AND MATERIALS State-of-the-art technologies were used with the goal of improving clinical outcomes while providing the best possible care to veterans. To measure the quality of care and service rendered to veterans, the Veterans Health Administration established the VA Radiation Oncology Quality Surveillance program. The program carries forward the American College of Radiology Quality Research in Radiation Oncology project methodology of assessing the wide variation in practice patterns and quality of care in radiation therapy by developing clinical quality measures (QM) used as quality indices. These QM data provide feedback to physicians by identifying areas for improvement in the process of care and tracking the adoption of evidence-based recommendations for radiation therapy. RESULTS Disease-site expert panels organized by the American Society for Radiation Oncology (ASTRO) defined quality measures and established scoring criteria for prostate cancer (intermediate and high risk), non-small cell lung cancer (stage IIIA/B), and small cell lung cancer (limited stage) case presentations. Data elements for 1567 patients from the 40 VA radiation oncology practices were abstracted from the electronic medical records and treatment management and planning systems. Overall, the 1567 assessed cases passed 82.4% of all QM. Pass rates for the 773 lung and 794 prostate cases were 78.0% and 87.2%, respectively. Marked variations, however, were noted in the pass rates when tumor site, clinical pathway, or performing centers were examined separately.
CONCLUSIONS The peer-review-protected VA Radiation Oncology Quality Surveillance program, based on clinical quality measures, allows providers to compare their clinical practice with that of their peers and to make meaningful adjustments in their personal patterns of care unobtrusively.
Affiliation(s)
- Michael Hagan
- VHA National Radiation Oncology Program Office, Richmond, Virginia
- Rishabh Kapoor
- VHA National Radiation Oncology Program Office, Richmond, Virginia
- Jeff Michalski
- Washington University in Saint Louis, Saint Louis, Missouri
- Sasa Mutic
- Washington University in Saint Louis, Saint Louis, Missouri
- Walter Bosch
- Washington University in Saint Louis, Saint Louis, Missouri
- Ryan Brame
- Washington University in Saint Louis, Saint Louis, Missouri
- Jatinder R Palta
- VHA National Radiation Oncology Program Office, Richmond, Virginia
- William Sleeman
- Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia
- Joseph Nalluri
- Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia

121
Liu Z, Liu X, Xiao B, Wang S, Miao Z, Sun Y, Zhang F. Segmentation of organs-at-risk in cervical cancer CT images with a convolutional neural network. Phys Med 2020; 69:184-191. [PMID: 31918371] [DOI: 10.1016/j.ejmp.2019.12.008]
Abstract
PURPOSE We introduced and evaluated an end-to-end organs-at-risk (OARs) segmentation model that can provide accurate and consistent OARs segmentation results in much less time. METHODS We collected Computed Tomography (CT) scans of 105 patients diagnosed with locally advanced cervical cancer and treated with radiotherapy in one hospital. Seven organs, including the bladder, bone marrow, left femoral head, right femoral head, rectum, small intestine and spinal cord, were defined as OARs. The OAR contours delineated manually by each patient's radiation oncologist before radiotherapy, and confirmed by a professional committee of eight experienced oncologists, were used as the ground truth masks. A multi-class segmentation model based on U-Net was designed to fulfil the OARs segmentation task. The Dice Similarity Coefficient (DSC) and 95th-percentile Hausdorff Distance (HD) were used as quantitative evaluation metrics. RESULTS The mean DSC values of the proposed method are 0.924, 0.854, 0.906, 0.900, 0.791, 0.833 and 0.827 for the bladder, bone marrow, left femoral head, right femoral head, rectum, small intestine, and spinal cord, respectively. The mean HD values are 5.098, 1.993, 1.390, 1.435, 5.949, 5.281 and 3.269 for the above OARs, respectively. CONCLUSIONS Our proposed method can help reduce the inter-observer and intra-observer variability of manual OARs delineation and lessen oncologists' efforts. The experimental results demonstrate that our model outperforms the benchmark U-Net model, and the oncologists' evaluations show that the segmentation results are highly acceptable for use in radiation therapy planning.
Affiliation(s)
- Zhikai Liu
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China
- Xia Liu
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China
- Bin Xiao
- MedMind Technology Co., Ltd., Beijing 100080, China
- Shaobin Wang
- MedMind Technology Co., Ltd., Beijing 100080, China
- Zheng Miao
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China
- Yuliang Sun
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China
- Fuquan Zhang
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China

122
Ibtehaz N, Rahman MS. MultiResUNet: Rethinking the U-Net architecture for multimodal biomedical image segmentation. Neural Netw 2020; 121:74-87. [DOI: 10.1016/j.neunet.2019.08.025]
123
Vaassen F, Hazelaar C, Vaniqui A, Gooding M, van der Heyden B, Canters R, van Elmpt W. Evaluation of measures for assessing time-saving of automatic organ-at-risk segmentation in radiotherapy. Phys Imaging Radiat Oncol 2019; 13:1-6. [PMID: 33458300] [PMCID: PMC7807544] [DOI: 10.1016/j.phro.2019.12.001]
Abstract
- Automatic delineation software shows promising results in terms of time-saving.
- Standard geometric measures do not correlate strongly with delineation time.
- New evaluation measures were introduced: added path length (APL) and surface DSC.
- APL showed the highest correlation with time recordings, making it the most representative measure of clinical usefulness.
Background and purpose In radiotherapy, automatic organ-at-risk segmentation algorithms allow faster delineation times, but clinically relevant contour evaluation remains challenging. Commonly used measures to assess automatic contours, such as volumetric Dice Similarity Coefficient (DSC) or Hausdorff distance, have shown to be good measures for geometric similarity, but do not always correlate with clinical applicability of the contours, or time needed to adjust them. This study aimed to evaluate the correlation of new and commonly used evaluation measures with time-saving during contouring. Materials and methods Twenty lung cancer patients were used to compare user-adjustments after atlas-based and deep-learning contouring with manual contouring. The absolute time needed (s) of adjusting the auto-contour compared to manual contouring was recorded, from this relative time-saving (%) was calculated. New evaluation measures (surface DSC and added path length, APL) and conventional evaluation measures (volumetric DSC and Hausdorff distance) were correlated with time-recordings and time-savings, quantified with the Pearson correlation coefficient, R. Results The highest correlation (R = 0.87) was found between APL and absolute adaption time. Lower correlations were found for APL with relative time-saving (R = −0.38), for surface DSC with absolute adaption time (R = −0.69) and relative time-saving (R = 0.57). Volumetric DSC and Hausdorff distance also showed lower correlation coefficients for absolute adaptation time (R = −0.32 and 0.64, respectively) and relative time-saving (R = 0.44 and −0.64, respectively). Conclusion Surface DSC and APL are better indicators for contour adaptation time and time-saving when using auto-segmentation and provide more clinically relevant and better quantitative measures for automatically-generated contour quality, compared to commonly-used geometry-based measures.
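The added path length measure described in this abstract can be illustrated with a simplified 2D, pixel-count sketch: count the reference-contour pixels that are not shared with the automatic contour, i.e. the part of the contour an observer would have to redraw. The function name, boundary definition, and toy masks below are illustrative assumptions, not the authors' implementation (which works in mm across all slices).

```python
import numpy as np

def added_path_length(auto_mask, ref_mask):
    """Simplified APL: number of reference-contour boundary pixels
    that do not coincide with the automatic-contour boundary."""
    def boundary(mask):
        # a pixel is on the boundary if it is foreground and at least
        # one of its 4-neighbours is background
        padded = np.pad(mask, 1)
        all_nb = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                  padded[1:-1, :-2] & padded[1:-1, 2:])
        return mask & ~all_nb
    return int(np.sum(boundary(ref_mask) & ~boundary(auto_mask)))

# toy example: the auto contour misses the rightmost column of the reference
ref = np.zeros((6, 6), dtype=bool); ref[1:5, 1:5] = True
auto = np.zeros((6, 6), dtype=bool); auto[1:5, 1:4] = True
apl = added_path_length(auto, ref)  # 4 boundary pixels would need redrawing
```

A real APL additionally converts pixel counts to physical length and sums over slices.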
Affiliations:
- Femke Vaassen, Colien Hazelaar, Ana Vaniqui, Brent van der Heyden, Richard Canters, Wouter van Elmpt: Department of Radiation Oncology (MAASTRO), GROW - School for Oncology and Developmental Biology, Maastricht University Medical Centre, Maastricht, The Netherlands
124. Nomura Y, Xu Q, Peng H, Takao S, Shimizu S, Xing L, Shirato H. Modified fast adaptive scatter kernel superposition (mfASKS) correction and its dosimetric impact on CBCT-based proton therapy dose calculation. Med Phys 2019;47:190-200. [DOI: 10.1002/mp.13878]
Affiliations:
- Yusuke Nomura: Department of Radiation Oncology, Graduate School of Medicine, Hokkaido University, Sapporo 060-8638, Japan
- Qiong Xu: Global Station for Quantum Medical Science and Engineering, Global Institution for Collaborative Research and Education (GI-CoRE), Hokkaido University, Sapporo 060-8648, Japan
- Hao Peng: GI-CoRE, Hokkaido University; Department of Radiation Oncology, Stanford University, Stanford, CA 94305-5847, USA
- Seishin Takao: GI-CoRE, Hokkaido University; Department of Radiation Oncology, Hokkaido University Hospital, Sapporo 060-8648, Japan
- Shinichi Shimizu: GI-CoRE, Hokkaido University; Department of Radiation Medical Science and Engineering, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, Sapporo 060-8638, Japan
- Lei Xing: GI-CoRE, Hokkaido University; Department of Radiation Oncology, Stanford University, Stanford, CA 94305-5847, USA
- Hiroki Shirato: GI-CoRE, Hokkaido University; Department of Proton Beam Therapy, Research Center for Cooperative Projects, Faculty of Medicine, Hokkaido University, Sapporo 060-8638, Japan
125. Bi N, Wang J, Zhang T, Chen X, Xia W, Miao J, Xu K, Wu L, Fan Q, Wang L, Li Y, Zhou Z, Dai J. Deep Learning Improved Clinical Target Volume Contouring Quality and Efficiency for Postoperative Radiation Therapy in Non-small Cell Lung Cancer. Front Oncol 2019;9:1192. [PMID: 31799181] [PMCID: PMC6863957] [DOI: 10.3389/fonc.2019.01192]
Abstract
Purpose: To investigate whether a deep learning-assisted contour (DLAC) could provide greater accuracy, inter-observer consistency, and efficiency than a manual contour (MC) of the clinical target volume (CTV) for non-small cell lung cancer (NSCLC) patients receiving postoperative radiotherapy (PORT).

Materials and methods: A deep dilated residual network was used to achieve effective automatic contouring of the CTV. Eleven junior physicians contoured CTVs on 19 patients using both the MC and DLAC methods independently. Contour accuracy was evaluated against the ground truth using the Dice coefficient and mean distance to agreement (MDTA). The coefficient of variation (CV) and standard distance deviation (SDD) were used to measure inter-observer variability or consistency. The time consumed by each of the two contouring methods was also compared.

Results: A total of 418 CTV sets were generated. DLAC improved contour accuracy compared with MC, with a larger Dice coefficient (mean ± SD: 0.75 ± 0.06 vs. 0.72 ± 0.07, p < 0.001) and smaller MDTA (mean ± SD: 2.97 ± 0.91 mm vs. 3.07 ± 0.98 mm, p < 0.001). DLAC was also associated with decreased inter-observer variability, with a smaller CV (mean ± SD: 0.129 ± 0.040 vs. 0.183 ± 0.043, p < 0.001) and SDD (mean ± SD: 0.47 ± 0.22 mm vs. 0.72 ± 0.41 mm, p < 0.001). In addition, DLAC reduced contouring time by 35% (median: 14.81 min for MC vs. 9.59 min for DLAC, p < 0.001).

Conclusions: Compared with MC, DLAC is a promising strategy for obtaining superior accuracy, consistency, and efficiency for the PORT-CTV in NSCLC.
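The Dice coefficient used in this and several of the following abstracts is a standard overlap measure between two binary masks. A minimal sketch (the masks below are toy examples, not study data):

```python
import numpy as np

def dice(a, b):
    """Volumetric Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    # two empty masks are conventionally treated as a perfect match
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

a = np.zeros((4, 4), dtype=bool); a[0:2, :] = True   # 8 voxels
b = np.zeros((4, 4), dtype=bool); b[1:3, :] = True   # 8 voxels, 4 overlap
d = dice(a, b)                                       # 2*4 / (8+8) = 0.5
```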
Affiliations:
- Nan Bi, Jingbo Wang, Tao Zhang, Xinyuan Chen, Wenlong Xia, Junjie Miao, Kunpeng Xu, Linfang Wu, Quanrong Fan, Yexiong Li, Zongmei Zhou, Jianrong Dai: Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Luhua Wang: Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China; National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital and Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, China
126. Zhu J, Liu Y, Zhang J, Wang Y, Chen L. Preliminary Clinical Study of the Differences Between Interobserver Evaluation and Deep Convolutional Neural Network-Based Segmentation of Multiple Organs at Risk in CT Images of Lung Cancer. Front Oncol 2019;9:627. [PMID: 31334129] [PMCID: PMC6624788] [DOI: 10.3389/fonc.2019.00627]
Abstract
Background: In this study, publicly available datasets with organ-at-risk (OAR) structures were used as reference data to compare the differences among several observers. Convolutional neural network (CNN)-based auto-contouring was also included in the analysis. We evaluated the variation among observers and the effect of CNN-based auto-contouring in clinical applications.

Materials and methods: A total of 60 publicly available lung cancer CT datasets with structures were used; 48 cases were used for training and the remaining 12 for testing. The structures in the datasets served as reference data. Three observers and a CNN-based program contoured the 12 testing cases, and the 3D Dice similarity coefficient (DSC) and mean surface distance (MSD) were used to evaluate differences from the reference data. The three observers then edited the CNN-based contours, and the results were compared with those of manual contouring. P < 0.05 was considered statistically significant.

Results: Compared with the reference data, no statistically significant differences were observed in the DSCs and MSDs among the manual contours of the three observers at the same institution for the heart, esophagus, spinal cord, and left and right lungs. The 95% confidence intervals (CIs) and P-values of the CNN-based auto-contouring results compared with the manual results for these five organs were as follows. DSCs: CNN vs. A: 0.914–0.939 (P = 0.004), 0.746–0.808 (P = 0.002), 0.866–0.887 (P = 0.136), 0.952–0.966 (P = 0.158), and 0.960–0.972 (P = 0.136); CNN vs. B: 0.913–0.936 (P = 0.002), 0.745–0.807 (P = 0.005), 0.864–0.894 (P = 0.239), 0.952–0.964 (P = 0.308), and 0.959–0.971 (P = 0.272); CNN vs. C: 0.912–0.933 (P = 0.004), 0.748–0.804 (P = 0.002), 0.867–0.890 (P = 0.530), 0.952–0.964 (P = 0.308), and 0.958–0.970 (P = 0.480), respectively. The P-values for the MSDs were similar to those for the DSCs; for the heart and esophagus they were smaller than 0.05. No significant differences were found between the edited CNN-based auto-contouring results and the manual results.

Conclusion: For the spinal cord and both lungs, no statistically significant differences were found between CNN-based auto-contouring and manual contouring. Further modification of the heart and esophagus contours remains necessary. Overall, editing based on CNN auto-contours can effectively shorten contouring time without affecting the results, and CNNs have considerable potential for automatic contouring applications.
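The mean surface distance (MSD) used alongside the DSC above can be sketched as a symmetric average of nearest-neighbour distances between two sets of boundary points. This brute-force version and its toy square contours are simplifying assumptions (practical implementations sample the full contour surface and use spatial indexing):

```python
import numpy as np

def mean_surface_distance(pts_a, pts_b):
    """Symmetric mean surface distance between two boundary point sets:
    average each set's nearest-neighbour distance to the other set."""
    pts_a = np.asarray(pts_a, dtype=float)
    pts_b = np.asarray(pts_b, dtype=float)
    # pairwise distance matrix, shape (len(a), len(b))
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# two unit squares offset by 1 mm along x (corner samples only)
sq = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
msd = mean_surface_distance(sq, sq + [1.0, 0.0])  # 0.5 mm
```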
Affiliations:
- Lixin Chen: State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
127. Kumar A, Fulham M, Feng D, Kim J. Co-Learning Feature Fusion Maps from PET-CT Images of Lung Cancer. IEEE Trans Med Imaging 2019;39:204-217. [PMID: 31217099] [DOI: 10.1109/tmi.2019.2923601]
Abstract
The analysis of multi-modality positron emission tomography and computed tomography (PET-CT) images for computer-aided diagnosis applications (e.g., detection and segmentation) requires combining the sensitivity of PET for detecting abnormal regions with the anatomical localization provided by CT. Current methods for PET-CT image analysis either process the modalities separately or fuse information from each modality based on knowledge about the image analysis task. These methods generally do not consider the spatially varying visual characteristics that encode different information across the modalities, which have different priorities at different locations. For example, high abnormal PET uptake in the lungs is more meaningful for tumor detection than physiological PET uptake in the heart. Our aim is to improve the fusion of the complementary information in multi-modality PET-CT with a new supervised convolutional neural network (CNN) that learns to fuse complementary information for multi-modality medical image analysis. Our CNN first encodes modality-specific features and then uses them to derive a spatially varying fusion map that quantifies the relative importance of each modality's features across different spatial locations. These fusion maps are then multiplied with the modality-specific feature maps to obtain a representation of the complementary multi-modality information at different locations, which can then be used for image analysis. We evaluated the ability of our CNN to detect and segment multiple regions (lungs, mediastinum, tumors) with different fusion requirements using a dataset of PET-CT images of lung cancer. We compared our method with baseline techniques for multi-modality image fusion (fused-input (FS), multi-branch (MB), and multi-channel (MC) techniques) and segmentation. Our findings show that our CNN had a significantly higher foreground detection accuracy (99.29%, p < 0.05) than the fusion baselines (FS: 99.00%, MB: 99.08%, MC: 98.92%) and a significantly higher Dice score (63.85%) than recent PET-CT tumor segmentation methods.
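The spatially varying fusion map described above can be illustrated with a minimal numpy sketch: per-location weights are normalized and used to scale each modality's feature map before combination. The softmax normalization and the hand-set weight maps here are simplifying assumptions; in the paper the maps are produced by the CNN itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse(pet_feat, ct_feat, w_pet, w_ct):
    """Combine two modality feature maps with a spatially varying,
    softmax-normalized fusion map (simplified stand-in for co-learning)."""
    logits = np.stack([w_pet, w_ct])            # (2, H, W)
    e = np.exp(logits - logits.max(axis=0))     # numerically stable softmax
    fusion = e / e.sum(axis=0)                  # weights sum to 1 per pixel
    return fusion[0] * pet_feat + fusion[1] * ct_feat

pet = rng.random((8, 8))
ct = rng.random((8, 8))
# hypothetical weight maps favouring PET everywhere
fused = fuse(pet, ct, w_pet=np.full((8, 8), 2.0), w_ct=np.zeros((8, 8)))
```

Because the weights form a per-pixel convex combination, each fused value lies between the two modality values at that location.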
128. Nomura Y, Xu Q, Shirato H, Shimizu S, Xing L. Projection-domain scatter correction for cone beam computed tomography using a residual convolutional neural network. Med Phys 2019;46:3142-3155. [PMID: 31077390] [DOI: 10.1002/mp.13583]
Abstract
Purpose: Scatter is a major factor degrading the image quality of cone beam computed tomography (CBCT). Conventional scatter correction strategies require handcrafted analytical models with ad hoc assumptions, which often leads to less accurate scatter removal. This study aims to develop an effective scatter correction method using a residual convolutional neural network (CNN).

Methods: A U-net-based 25-layer CNN was constructed for CBCT scatter correction. The establishment of the model consists of three steps: model training, validation, and testing. For model training, a total of 1800 pairs of x-ray projections and the corresponding scatter-only distributions in nonanthropomorphic phantoms, taken in full-fan scans, were generated using Monte Carlo simulation of a CBCT scanner installed with a proton therapy system. End-to-end CNN training was implemented with two major loss functions for 100 epochs with a mini-batch size of 10. Image rotations and flips were randomly applied to augment the training datasets. For validation, 200 projections of a digital head phantom were collected. The proposed CNN-based method was compared with a conventional projection-domain scatter correction method, the fast adaptive scatter kernel superposition (fASKS) method, using 360 projections of an anthropomorphic head phantom. Two different loss functions were applied to the same CNN to evaluate their impact on the final results. Furthermore, the CNN model trained with full-fan projections was fine-tuned for scatter correction in half-fan scans by transfer learning with 360 additional half-fan projection pairs of nonanthropomorphic phantoms. The tuned CNN model for half-fan scans was compared with the fASKS method, as well as with the CNN-based method without fine-tuning, using additional lung phantom projections.

Results: The CNN-based method provides projections with significantly reduced scatter and CBCT images with more accurate Hounsfield units (HUs) than the fASKS-based method. The root mean squared error of the CNN-corrected projections improved to 0.0862, compared with 0.278 for uncorrected projections and 0.117 for fASKS-corrected projections. The CNN-corrected reconstruction provided better HU quantification, especially in regions near air or bone interfaces. All four image quality measures, mean absolute error (MAE), mean squared error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM), indicated that the CNN-corrected images were significantly better than the fASKS-corrected images. Moreover, the proposed transfer learning technique made it possible for the CNN model trained with full-fan projections to remove scatter from half-fan projections after fine-tuning with only a small number of additional half-fan training datasets. The SSIM value of the tuned-CNN-corrected images was 0.9993, compared with 0.9984 for the non-tuned-CNN-corrected images and 0.9990 for the fASKS-corrected images. Finally, the CNN-based method is computationally efficient: correcting the 360 projections took less than 5 s in the reported experiments on a PC (4.20 GHz Intel Core i7 CPU) with a single NVIDIA GTX 1070 GPU.

Conclusions: The proposed deep learning-based method provides an effective tool for CBCT scatter correction and holds significant value for quantitative imaging and image-guided radiation therapy.
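The projection-domain correction step itself is simple once a scatter-only distribution is available: subtract it from the measured projection to recover an estimate of the primary signal. In the sketch below the network's role is stood in for by a given scatter array; the function name and clipping at zero are illustrative assumptions.

```python
import numpy as np

def correct_projection(measured, scatter_estimate):
    """Projection-domain scatter correction: subtract the (CNN-predicted)
    scatter-only distribution from the measured projection, clipping
    negative values that would be unphysical for a primary signal."""
    return np.clip(measured - scatter_estimate, 0.0, None)

primary = np.array([[1.0, 2.0], [3.0, 4.0]])
scatter = np.full((2, 2), 0.5)            # stand-in for a CNN prediction
measured = primary + scatter              # simulated scatter-contaminated data
corrected = correct_projection(measured, scatter)
```

With a perfect scatter estimate, `corrected` equals the primary signal; in practice the estimate's residual error propagates into the reconstruction, which is why the paper evaluates projection RMSE and reconstructed HU accuracy.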
Affiliations:
- Yusuke Nomura: Department of Radiation Oncology, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, Sapporo 060-8638, Japan
- Qiong Xu: Global Station for Quantum Medical Science and Engineering, Global Institution for Collaborative Research and Education (GI-CoRE), Hokkaido University, Sapporo 060-8648, Japan
- Hiroki Shirato: GI-CoRE, Hokkaido University; Department of Radiation Medicine, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, Sapporo 060-8638, Japan
- Shinichi Shimizu: GI-CoRE, Hokkaido University; Department of Radiation Medical Science and Engineering, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, Sapporo 060-8638, Japan
- Lei Xing: GI-CoRE, Hokkaido University; Department of Radiation Oncology, Stanford University, Stanford, CA, USA
129.

Abstract
Manual image segmentation is a time-consuming task routinely performed in radiotherapy to identify each patient's targets and anatomical structures. The efficacy and safety of the radiotherapy plan requires accurate segmentations as these regions of interest are generally used to optimize and assess the quality of the plan. However, reports have shown that this process can be subject to significant inter- and intraobserver variability. Furthermore, the quality of the radiotherapy treatment, and subsequent analyses (ie, radiomics, dosimetric), can be subject to the accuracy of these manual segmentations. Automatic segmentation (or auto-segmentation) of targets and normal tissues is, therefore, preferable as it would address these challenges. Previously, auto-segmentation techniques have been clustered into 3 generations of algorithms, with multiatlas based and hybrid techniques (third generation) being considered the state-of-the-art. More recently, however, the field of medical image segmentation has seen accelerated growth driven by advances in computer vision, particularly through the application of deep learning algorithms, suggesting we have entered the fourth generation of auto-segmentation algorithm development. In this paper, the authors review traditional (nondeep learning) algorithms particularly relevant for applications in radiotherapy. Concepts from deep learning are introduced focusing on convolutional neural networks and fully-convolutional networks which are generally used for segmentation tasks. Furthermore, the authors provide a summary of deep learning auto-segmentation radiotherapy applications reported in the literature. Lastly, considerations for clinical deployment (commissioning and QA) of auto-segmentation software are provided.
130. Yang J, Zhang Y, Zhang Z, Zhang L, Balter P, Court L. Technical Note: Density correction to improve CT number mapping in thoracic deformable image registration. Med Phys 2019;46:2330-2336. [PMID: 30896047] [DOI: 10.1002/mp.13502]
Abstract
Purpose: To improve the accuracy of computed tomography (CT) number mapping inside the lung in deformable image registration with large differences in lung volume, for applications in vertical CT imaging and adaptive radiotherapy.

Methods: The deep inspiration breath hold (DIBH) CT image and the end-of-exhalation (EE) phase image of four-dimensional CT from 14 thoracic cancer patients were used in this study. Lung volumes were manually delineated. A Demons-based deformable registration was first applied to register the EE CT to the DIBH CT for each patient, and the resulting deformation vector field deformed the EE CT image to the DIBH CT space. Given that the mass of the lung remains the same during respiration, we created a mass-preserving model to correlate lung density variations with volumetric changes, which were characterized by the Jacobian derived from the deformation field. The Jacobian determinant was used to correct the lung CT numbers transferred from the EE CT image. The absolute intensity differences created by subtracting the deformed EE CT from the DIBH CT, with and without density correction, were compared.

Results: The ratio of DIBH CT to EE CT lung volume was 1.6 on average. The deformable registration registered the lung shape well, but the voxel intensities inside the lung differed, demonstrating the need for density correction. Without density correction, the mean and standard deviation of the absolute intensity difference between the deformed EE CT and the DIBH CT inside the lung were 54.5 ± 45.5 across all cases. After density correction, these numbers decreased to 18.1 ± 34.9, demonstrating greater accuracy. The cumulative histogram of the intensity difference also showed that density correction greatly improved CT number mapping.

Conclusion: Density correction improves CT number mapping inside the lung in deformable image registration for difficult cases with large lung volume differences.
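The mass-preserving idea can be sketched numerically: if a voxel's volume is scaled by the Jacobian determinant J of the deformation, its density scales by 1/J. Treating (HU + 1000) as roughly proportional to physical density gives one common correction form; the paper's exact HU-to-density mapping may differ, so this is an illustrative assumption.

```python
import numpy as np

def density_correct(hu_deformed, jacobian):
    """Simplified mass-preserving density correction: scale the
    density-proportional quantity (HU + 1000) by 1/J, where J is the
    local Jacobian determinant of the deformation field."""
    return (hu_deformed + 1000.0) / jacobian - 1000.0

# a lung voxel at -800 HU whose volume doubles on inhale (J = 2)
# should become less dense: (200 / 2) - 1000 = -900 HU
hu = density_correct(np.array([-800.0]), np.array([2.0]))
```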
Affiliations:
- Jinzhong Yang, Lifei Zhang, Peter Balter, Laurence Court: Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Yongbin Zhang: Proton Therapy Center, University of Cincinnati Medical Center, Liberty Township, OH, USA
- Zijian Zhang: Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA; Xiangya Hospital, Central South University, Changsha, Hunan, China
131. Dong X, Lei Y, Wang T, Thomas M, Tang L, Curran WJ, Liu T, Yang X. Automatic multiorgan segmentation in thorax CT images using U-net-GAN. Med Phys 2019;46:2157-2168. [PMID: 30810231] [DOI: 10.1002/mp.13458]
Abstract
Purpose: Accurate and timely organs-at-risk (OAR) segmentation is key to efficient and high-quality radiation therapy planning. The purpose of this work is to develop a deep learning-based method to automatically segment multiple thoracic OARs on chest computed tomography (CT) for radiotherapy treatment planning.

Methods: We propose an adversarial training strategy to train deep neural networks for the segmentation of multiple organs on thoracic CT images. The proposed design of adversarial networks, called U-Net-generative adversarial network (U-Net-GAN), jointly trains a set of U-Nets as generators and fully convolutional networks (FCNs) as discriminators. Specifically, the generator, composed of a U-Net, produces a segmentation map of multiple organs through an end-to-end mapping learned from CT images to multiorgan-segmented OARs. The discriminator, structured as an FCN, discriminates between the ground truth and the OARs segmented by the generator. The generator and discriminator compete against each other in an adversarial learning process to produce the optimal segmentation map of multiple organs. Our segmentation results were compared with manually segmented OARs (ground truth) for quantitative evaluation of geometric difference, as well as dosimetric performance, by investigating the dose-volume histograms of 20 stereotactic body radiation therapy (SBRT) lung plans.

Results: This segmentation technique was applied to delineate the left and right lungs, spinal cord, esophagus, and heart using chest CTs of 35 patients. The average Dice similarity coefficients for these five OARs were 0.97, 0.97, 0.90, 0.75, and 0.87, respectively. The mean surface distance of the five OARs obtained with the proposed method ranged between 0.4 and 1.5 mm on average over all 35 patients. The mean dose differences in the 20 SBRT lung plans were -0.001 to 0.155 Gy for the five OARs.

Conclusion: We have investigated a novel deep learning-based approach with a GAN strategy to segment multiple OARs in the thorax using chest CT images and demonstrated its feasibility and reliability. This is a potentially valuable method for improving the efficiency of chest radiotherapy treatment planning.
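The adversarial objective behind a generator/discriminator pair can be sketched with a generic binary cross-entropy loss pair. This is a textbook GAN formulation, not the paper's exact loss (which combines segmentation and adversarial terms); the function and scores below are illustrative assumptions.

```python
import numpy as np

def adversarial_losses(d_real, d_fake):
    """Generic GAN losses: the discriminator should score ground-truth
    segmentations (d_real) near 1 and generator output (d_fake) near 0,
    while the generator is rewarded when d_fake approaches 1."""
    eps = 1e-7  # avoid log(0)
    d_real = np.clip(d_real, eps, 1 - eps)
    d_fake = np.clip(d_fake, eps, 1 - eps)
    d_loss = -(np.log(d_real).mean() + np.log(1 - d_fake).mean())
    g_loss = -np.log(d_fake).mean()
    return d_loss, g_loss

# discriminator scores: fairly confident on real, unconvinced by fake
d_loss, g_loss = adversarial_losses(np.array([0.9]), np.array([0.2]))
```

Training alternates: the discriminator descends on `d_loss`, the generator on `g_loss`, so each improves against the other.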
Affiliations:
- Xue Dong, Yang Lei, Tonghe Wang, Matthew Thomas, Walter J Curran, Tian Liu, Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Leonardo Tang: Department of Undeclared Engineering, University of California, Berkeley, CA 94720, USA
132. Feng X, Qing K, Tustison NJ, Meyer CH, Chen Q. Deep convolutional neural network for segmentation of thoracic organs-at-risk using cropped 3D images. Med Phys 2019;46:2169-2180. [PMID: 30830685] [DOI: 10.1002/mp.13466]
Abstract
PURPOSE Automatic segmentation of organs-at-risk (OARs) is a key step in radiation treatment planning to reduce human efforts and bias. Deep convolutional neural networks (DCNN) have shown great success in many medical image segmentation applications but there are still challenges in dealing with large 3D images for optimal results. The purpose of this study is to develop a novel DCNN method for thoracic OARs segmentation using cropped 3D images. METHODS To segment the five organs (left and right lungs, heart, esophagus and spinal cord) from the thoracic CT scans, preprocessing to unify the voxel spacing and intensity was first performed, a 3D U-Net was then trained on resampled thoracic images to localize each organ, then the original images were cropped to only contain one organ and served as the input to each individual organ segmentation network. The segmentation maps were then merged to get the final results. The network structures were optimized for each step, as well as the training and testing strategies. A novel testing augmentation with multiple iterations of image cropping was used. The networks were trained on 36 thoracic CT scans with expert annotations provided by the organizers of the 2017 AAPM Thoracic Auto-segmentation Challenge and tested on the challenge testing dataset as well as a private dataset. RESULTS The proposed method earned second place in the live phase of the challenge and first place in the subsequent ongoing phase using a newly developed testing augmentation approach. 
On average it performed better than human experts in terms of Dice scores (spinal cord: 0.893 ± 0.044, right lung: 0.972 ± 0.021, left lung: 0.979 ± 0.008, heart: 0.925 ± 0.015, esophagus: 0.726 ± 0.094), mean surface distance (spinal cord: 0.662 ± 0.248 mm, right lung: 0.933 ± 0.574 mm, left lung: 0.586 ± 0.285 mm, heart: 2.297 ± 0.492 mm, esophagus: 2.341 ± 2.380 mm) and 95% Hausdorff distance (spinal cord: 1.893 ± 0.627 mm, right lung: 3.958 ± 2.845 mm, left lung: 2.103 ± 0.938 mm, heart: 6.570 ± 1.501 mm, esophagus: 8.714 ± 10.588 mm). It also achieved good performance on the private dataset and reduced the editing time after automatic segmentation to 7.5 min per patient. CONCLUSIONS The proposed DCNN method demonstrated good performance in automatic OAR segmentation from thoracic CT scans. With its improved accuracy and reduced cost for OAR segmentation, it has the potential to support eventual clinical adoption of deep learning in radiation treatment planning.
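The two-step strategy described in this abstract (coarse localization on a resampled volume, then a dedicated per-organ network applied to a crop of the original image) can be sketched as below. This is a minimal illustration under assumed interfaces: `localizer`, `organ_nets` and the helper names are placeholders, not the authors' code.

```python
import numpy as np

def bbox_from_mask(mask, margin=8):
    """Bounding box of a binary localization mask, padded by a voxel margin."""
    coords = np.argwhere(mask)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + margin + 1, mask.shape)
    return tuple(slice(l, h) for l, h in zip(lo, hi))

def two_step_segment(image, localizer, organ_nets, margin=8):
    """Step 1: coarse multi-organ localization on the full volume.
    Step 2: run each organ's dedicated network on a crop around that organ,
    then paste the per-organ masks back into one labeled volume."""
    coarse = localizer(image)                      # dict: organ name -> coarse binary mask
    merged = np.zeros(image.shape, dtype=np.uint8)
    for label, (organ, net) in enumerate(organ_nets.items(), start=1):
        box = bbox_from_mask(coarse[organ], margin)
        fine = net(image[box])                     # binary mask on the cropped region
        merged[box][fine > 0] = label              # paste back into the full volume
    return merged
```

Cropping keeps the per-organ networks small enough to train on full-resolution 3D patches, which is the main point of the two-step design.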
Affiliation(s)
- Xue Feng
- Department of Biomedical Engineering, University of Virginia, Charlottesville, VA, 22903, USA
- Kun Qing
- Department of Radiology and Medical Imaging, University of Virginia, Charlottesville, VA, 22903, USA
- Nicholas J Tustison
- Department of Radiology and Medical Imaging, University of Virginia, Charlottesville, VA, 22903, USA
- Craig H Meyer
- Department of Biomedical Engineering, University of Virginia, Charlottesville, VA, 22903, USA; Department of Radiology and Medical Imaging, University of Virginia, Charlottesville, VA, 22903, USA
- Quan Chen
- Department of Radiation Medicine, University of Kentucky, Lexington, KY, 40536, USA
133
Dual-energy CT for automatic organs-at-risk segmentation in brain-tumor patients using a multi-atlas and deep-learning approach. Sci Rep 2019; 9:4126. [PMID: 30858409 PMCID: PMC6411877 DOI: 10.1038/s41598-019-40584-9]
Abstract
In radiotherapy, computed tomography (CT) datasets are mostly used for radiation treatment planning, with the goal of achieving highly conformal tumor coverage while optimally sparing the healthy tissue surrounding the tumor, referred to as organs-at-risk (OARs). Based on CT and/or magnetic resonance images, OARs have to be manually delineated by clinicians, which is one of the most time-consuming tasks in the clinical workflow. Recent multi-atlas (MA) and deep-learning (DL) based methods aim to improve the clinical routine by segmenting OARs automatically on a CT dataset. However, no study has so far investigated the performance of these MA or DL methods on dual-energy CT (DECT) datasets, which have been shown to improve image quality compared with conventional 120 kVp single-energy CT. In this study, the performance of an in-house developed MA method and a DL method (a two-step three-dimensional U-Net) was quantitatively and qualitatively evaluated on DECT-derived pseudo-monoenergetic CT datasets ranging from 40 keV to 170 keV. At lower energies, the MA method produced more accurate OAR segmentations; overall, however, both the qualitative and the quantitative analyses showed that the DL approach often performed better than the MA method.
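The evaluation set-up implied by this abstract amounts to scoring each automatic method against a manual reference at every pseudo-monoenergetic level. A minimal sketch, assuming a Dice similarity coefficient as the quantitative metric and a `keV -> mask` dictionary layout (both are assumptions, not the paper's code):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

def compare_methods(reference, ma_masks, dl_masks):
    """Per-energy Dice for multi-atlas vs deep-learning segmentations.
    ma_masks / dl_masks: dict mapping keV level -> predicted binary mask."""
    return {kev: (dice(reference, ma_masks[kev]), dice(reference, dl_masks[kev]))
            for kev in ma_masks}
```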
134
Zhu J, Zhang J, Qiu B, Liu Y, Liu X, Chen L. Comparison of the automatic segmentation of multiple organs at risk in CT images of lung cancer between deep convolutional neural network-based and atlas-based techniques. Acta Oncol 2019; 58:257-264. [PMID: 30398090 DOI: 10.1080/0284186x.2018.1529421]
Abstract
BACKGROUND In this study, a deep convolutional neural network (CNN)-based automatic segmentation technique was applied to multiple organs at risk (OARs) depicted in computed tomography (CT) images of lung cancer patients, and the results were compared with those generated through atlas-based automatic segmentation. MATERIALS AND METHODS An encoder-decoder U-Net neural network was built. The trained deep CNN performed the automatic segmentation of CT images for 36 cases of lung cancer. The Dice similarity coefficient (DSC), the mean surface distance (MSD) and the 95% Hausdorff distance (95% HD) were calculated, with manual segmentation results used as the standard, and were compared with the results obtained through atlas-based segmentation. RESULTS For the heart, lungs and liver, both the deep CNN-based and atlas-based techniques performed satisfactorily (average values: 0.87 < DSC < 0.95, 1.8 mm < MSD < 3.8 mm, 7.9 mm < 95% HD < 11 mm). For the spinal cord and the oesophagus, the two methods showed statistically significant differences. For the atlas-based technique, the average values were 0.54 < DSC < 0.71, 2.6 mm < MSD < 3.1 mm and 9.4 mm < 95% HD < 12 mm. For the deep CNN-based technique, the average values were 0.71 < DSC < 0.79, 1.2 mm < MSD < 2.2 mm and 4.0 mm < 95% HD < 7.9 mm. CONCLUSION Our results showed that automatic segmentation based on a deep convolutional neural network enabled us to complete automatic segmentation tasks rapidly. Deep convolutional neural networks can be satisfactorily adapted to segment OARs during radiation treatment planning for lung cancer patients.
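The two surface metrics reported here (MSD and 95% HD) can be computed from binary masks with standard distance transforms. A sketch using SciPy; the surface-extraction convention (mask minus its erosion) is a common choice, not necessarily the authors' exact implementation:

```python
import numpy as np
from scipy import ndimage

def surface_distances(pred, ref, spacing=(1.0, 1.0, 1.0)):
    """Distances (mm) from each predicted-surface voxel to the reference surface."""
    ref_border = ref ^ ndimage.binary_erosion(ref)     # surface = mask minus erosion
    pred_border = pred ^ ndimage.binary_erosion(pred)
    # Distance from every voxel to the nearest reference-surface voxel
    dist_to_ref = ndimage.distance_transform_edt(~ref_border, sampling=spacing)
    return dist_to_ref[pred_border]

def msd_and_hd95(pred, ref, spacing=(1.0, 1.0, 1.0)):
    """Symmetric mean surface distance and 95th-percentile Hausdorff distance."""
    d_pr = surface_distances(pred, ref, spacing)
    d_rp = surface_distances(ref, pred, spacing)
    all_d = np.concatenate([d_pr, d_rp])
    return all_d.mean(), np.percentile(all_d, 95)
```

Passing the voxel spacing to `distance_transform_edt` makes the distances come out in millimeters, matching the units reported in the abstract.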
Affiliation(s)
- Jinhan Zhu
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Jun Zhang
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Bo Qiu
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Yimei Liu
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Xiaowei Liu
- School of Physics, Sun Yat-sen University, Guangzhou, China
- Lixin Chen
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
135
Trullo R, Petitjean C, Dubray B, Ruan S. Multiorgan segmentation using distance-aware adversarial networks. J Med Imaging (Bellingham) 2019; 6:014001. [PMID: 30662925 PMCID: PMC6328005 DOI: 10.1117/1.jmi.6.1.014001]
Abstract
Segmentation of organs at risk (OARs) in computed tomography (CT) is of vital importance in radiotherapy treatment. The task is time consuming and, for some organs, very challenging due to low intensity contrast in CT. We propose a framework for the automatic segmentation of multiple OARs: esophagus, heart, trachea, and aorta. Unlike previous work using deep learning techniques, we make use of global localization information based on an original distance map that yields not only the localization of each organ but also the spatial relationships between them. Instead of segmenting the organs directly, we first generate the localization map by minimizing a reconstruction error within an adversarial framework. This map, which includes localization information for all organs, is then used to guide the segmentation task in a fully convolutional setting. Experimental results show encouraging performance on CT scans of 60 patients (11,084 slices in total) in comparison with other state-of-the-art methods.
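The organ-localization map can be approximated with a per-organ Euclidean distance transform. A minimal sketch that omits the adversarial reconstruction step; the exponential decay parameterization is an assumption for illustration, not the paper's exact map:

```python
import numpy as np
from scipy import ndimage

def localization_map(organ_masks, decay=10.0):
    """Stack one proximity channel per organ: 1 on/inside the organ,
    decaying with Euclidean distance outside it, so the stacked map encodes
    both each organ's location and the spatial layout between organs."""
    channels = []
    for mask in organ_masks:
        # Distance of every background voxel to the nearest organ voxel
        dist = ndimage.distance_transform_edt(~mask.astype(bool))
        channels.append(np.exp(-dist / decay))   # 1 at the organ, -> 0 far away
    return np.stack(channels, axis=0)
```

Such a map can then be concatenated with the CT image as extra input channels to guide a fully convolutional segmentation network.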
Affiliation(s)
- Roger Trullo
- Normandie University, Institut National des Sciences Appliquées Rouen, LITIS, Rouen, France
- Caroline Petitjean
- Normandie University, Institut National des Sciences Appliquées Rouen, LITIS, Rouen, France
- Su Ruan
- Normandie University, Institut National des Sciences Appliquées Rouen, LITIS, Rouen, France