1. Huang Y, Song R, Qin T, Yang M, Liu Z. Clinical evaluation of the convolutional neural network-based automatic delineation tool in determining the clinical target volume and organs at risk in rectal cancer radiotherapy. Oncol Lett 2024; 28:539. PMID: 39310024; PMCID: PMC11413726; DOI: 10.3892/ol.2024.14672.
Abstract
Delineating the clinical target volume (CTV) and organs at risk (OARs) is crucial in rectal cancer radiotherapy. However, the accuracy of manual delineation (MD) is variable and the process is time-consuming. Automatic delineation (AD) may be a solution for producing quicker and more accurate contours. In the present study, a convolutional neural network (CNN)-based AD tool was clinically evaluated to analyze its accuracy and efficiency in rectal cancer. CT images were collected from 148 patients scanned in the supine position, without differentiation by tumor stage or type of surgery. The rectal cancer contours consisted of the CTV and OARs, where the OARs included the bladder, left and right femoral heads, left and right kidneys, spinal cord and bowel bag. MD contours reviewed and modified together by a senior radiation oncologist committee were set as the reference values. The Dice similarity coefficient (DSC), Jaccard coefficient (JAC) and Hausdorff distance (HD) were used to evaluate AD accuracy. The correlation between CT slice number and AD accuracy was analyzed, and AD accuracy was compared across different contour numbers. The times recorded in the present study included the MD time, the AD time for different CT slice and contour numbers, and the editing time for AD contours. The Pearson correlation coefficient, paired-sample t-test and unpaired-sample t-test were used for statistical analyses. The DSC, JAC and HD for the CTV using AD were 0.80±0.06, 0.67±0.08 and 6.96±2.45 mm, respectively. Among the OARs, the highest DSC and JAC were found for the right and left kidneys (DSC, 0.91±0.06 and 0.93±0.04; JAC, 0.84±0.09 and 0.88±0.07, respectively), and the HD was lowest for the spinal cord (2.26±0.82 mm). The lowest accuracy was found for the bowel bag. A higher CT slice number was associated with higher AD accuracy for the spinal cord, whereas the contour number had no effect on AD accuracy. To obtain qualified contours, the AD time plus editing time was 662.97±195.57 sec, whereas the MD time was 3294.29±824.70 sec. In conclusion, AD can significantly improve efficiency, although higher numbers of CT slices and contours increase the AD time; the AD tool provides acceptable CTV and OAR contours for rectal cancer and improves delineation efficiency.
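For readers unfamiliar with these overlap metrics, the following is a minimal NumPy/SciPy sketch (illustrative only, not the tool evaluated in this study) of how DSC and JAC are computed from binary masks and the symmetric HD from surface point sets. Note that JAC = DSC / (2 - DSC), consistent with the reported CTV values of 0.80 and 0.67.

```python
import numpy as np
from scipy.spatial.distance import cdist

def dice_jaccard(auto_mask: np.ndarray, ref_mask: np.ndarray):
    """Volumetric overlap between two binary masks (any shape)."""
    a, b = auto_mask.astype(bool), ref_mask.astype(bool)
    intersection = np.logical_and(a, b).sum()
    dsc = 2.0 * intersection / (a.sum() + b.sum())
    jac = intersection / np.logical_or(a, b).sum()
    return dsc, jac  # jac == dsc / (2 - dsc)

def hausdorff(surface_a: np.ndarray, surface_b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between (N, 3) surface points in mm."""
    d = cdist(surface_a, surface_b)  # all pairwise Euclidean distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```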
Affiliation(s)
- Yangyang Huang: Department of Radiation Oncology, The Second Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan 450014, P.R. China
- Rui Song: Department of Radiation Oncology, The Second Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan 450014, P.R. China
- Tingting Qin: Department of Radiation Oncology, The Second Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan 450014, P.R. China
- Menglin Yang: Department of Radiation Oncology, The Second Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan 450014, P.R. China
- Zongwen Liu: Department of Radiation Oncology, The Second Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan 450014, P.R. China

2. Bibault JE, Giraud P. Deep learning for automated segmentation in radiotherapy: a narrative review. Br J Radiol 2024; 97:13-20. PMID: 38263838; PMCID: PMC11027240; DOI: 10.1093/bjr/tqad018.
Abstract
The segmentation of organs and structures is a critical component of radiation therapy planning, and manual segmentation is a laborious and time-consuming task. Interobserver variability can also impact the outcomes of radiation therapy. Deep neural networks have recently gained attention for their ability to automate segmentation tasks, with convolutional neural networks (CNNs) being a popular approach. This article provides a descriptive review of the literature on deep learning (DL) techniques for segmentation in radiation therapy planning. The review focuses on five clinical sub-sites (brain, head and neck, lung, abdominal, and pelvic cancers) and finds that U-Net is the most commonly used CNN architecture. The majority of DL segmentation articles in radiation therapy planning have concentrated on normal tissue structures. N-fold cross-validation was commonly employed, without external validation. This research area is expanding quickly, and standardization of metrics and independent validation are critical to benchmarking and comparing proposed methods.
Affiliation(s)
- Jean-Emmanuel Bibault: Radiation Oncology Department, Georges Pompidou European Hospital, Assistance Publique-Hôpitaux de Paris, Université de Paris Cité, Paris, 75015, France; INSERM UMR 1138, Centre de Recherche des Cordeliers, Paris, 75006, France
- Paul Giraud: INSERM UMR 1138, Centre de Recherche des Cordeliers, Paris, 75006, France; Radiation Oncology Department, Pitié Salpêtrière Hospital, Assistance Publique-Hôpitaux de Paris, Paris Sorbonne Universités, Paris, 75013, France

3. Liu P, Sun Y, Zhao X, Yan Y. Deep learning algorithm performance in contouring head and neck organs at risk: a systematic review and single-arm meta-analysis. Biomed Eng Online 2023; 22:104. PMID: 37915046; PMCID: PMC10621161; DOI: 10.1186/s12938-023-01159-y.
Abstract
PURPOSE The contouring of organs at risk (OARs) in head and neck cancer radiation treatment planning is a crucial, yet repetitive and time-consuming process. Recent studies have applied deep learning (DL) algorithms to automatically contour head and neck OARs. This study conducts a systematic review and meta-analysis to summarize and analyze the performance of DL algorithms in contouring head and neck OARs, and to assess their advantages and limitations. METHODS A literature search of the PubMed, Embase and Cochrane Library databases was conducted to identify studies on DL contouring of head and neck OARs, and the Dice similarity coefficient (DSC) of four categories of OARs from each study was selected as the effect size for meta-analysis. A subgroup analysis of OARs by image modality and image type was also conducted. RESULTS 149 articles were retrieved, and 22 studies were included in the meta-analysis after removal of duplicates, primary screening and re-screening. The combined effect sizes of DSC for the brainstem, spinal cord, mandible, left eye, right eye, left optic nerve, right optic nerve, optic chiasm, left parotid, right parotid, left submandibular and right submandibular glands were 0.87, 0.83, 0.92, 0.90, 0.90, 0.71, 0.74, 0.62, 0.85, 0.85, 0.82 and 0.82, respectively. In the subgroup analysis, the combined effect sizes for segmentation of the brainstem, mandible, left optic nerve and left parotid gland using CT versus MRI images were 0.86/0.92, 0.92/0.90, 0.71/0.73 and 0.84/0.87, respectively, and the pooled effect sizes using 2D versus 3D images were 0.88/0.87, 0.92/0.92, 0.75/0.71 and 0.87/0.85. CONCLUSIONS Automated contouring technology based on DL algorithms is an essential tool for contouring head and neck OARs: it achieves high accuracy, reduces the workload of clinical radiation oncologists, and supports individualized, standardized and refined treatment plans for implementing "precision radiotherapy". Improving DL performance requires the construction of high-quality data sets and further algorithm optimization and innovation.
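The pooled (combined) effect sizes reported here are weighted averages of per-study DSC values. As a rough illustration of the mechanics, a minimal inverse-variance (fixed-effect) pooling sketch follows; the review itself may have used a random-effects model, and all numbers below are invented for illustration.

```python
import numpy as np

def pooled_effect(means, variances):
    """Inverse-variance weighted (fixed-effect) pooled mean and standard error."""
    w = 1.0 / np.asarray(variances)   # weight each study by its precision
    pooled = np.sum(w * np.asarray(means)) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return pooled, se

# Hypothetical per-study mean DSCs for one OAR (e.g., brainstem) and variances
pooled, se = pooled_effect([0.86, 0.88, 0.84, 0.90], [0.0004, 0.0009, 0.0006, 0.0010])
print(f"pooled DSC = {pooled:.2f} +/- {1.96 * se:.3f}")
```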
Affiliation(s)
- Peiru Liu: General Hospital of Northern Theater Command, Department of Radiation Oncology, Shenyang, China; Beifang Hospital of China Medical University, Shenyang, China
- Ying Sun: General Hospital of Northern Theater Command, Department of Radiation Oncology, Shenyang, China
- Xinzhuo Zhao: Shenyang University of Technology, School of Electrical Engineering, Shenyang, China
- Ying Yan: General Hospital of Northern Theater Command, Department of Radiation Oncology, Shenyang, China

4. Song Y, Hu J, Wang Q, Yu C, Su J, Chen L, Jiang X, Chen B, Zhang L, Yu Q, Li P, Wang F, Bai S, Luo Y, Yi Z. Young oncologists benefit more than experts from deep learning-based organs-at-risk contouring modeling in nasopharyngeal carcinoma radiotherapy: A multi-institution clinical study exploring working experience and institute group style factor. Clin Transl Radiat Oncol 2023; 41:100635. PMID: 37251619; PMCID: PMC10213188; DOI: 10.1016/j.ctro.2023.100635.
Abstract
Background To comprehensively investigate the behaviors of oncologists with different working experience and institute group styles in deep learning-based organs-at-risk (OAR) contouring. Methods A deep learning-based contouring system (DLCS) was modeled from 188 CT datasets of patients with nasopharyngeal carcinoma (NPC) at institute A. Three institute oncology groups, A, B and C, were included, each containing one beginner and one expert. For each of the 28 OARs, two trials were performed for ten test cases: manual contouring first, and editing of the DLCS output later. Contouring performance and group consistency were quantified by volumetric and surface Dice coefficients. A volume-based and a surface-based oncologist satisfaction rate (VOSR and SOSR) were defined to evaluate the oncologists' acceptance of the DLCS. Results With the DLCS, inconsistency related to experience was eliminated. Intra-institute inconsistency was eliminated for group C but persisted for groups A and B. Group C benefited most from the DLCS, with the highest number of improved OARs (8 for volumetric Dice and 10 for surface Dice), followed by group B. Beginners obtained more improved OARs than experts (7 vs. 4 for volumetric Dice and 5 vs. 4 for surface Dice). VOSR and SOSR varied across institute groups, but the rates of beginners were all significantly higher than those of experts for OARs with significant experience-group differences. A remarkable positive linear relationship was found between VOSR and post-DLCS-editing volumetric Dice, with a coefficient of 0.78. Conclusions The DLCS was effective across institutes, and beginners benefited more than experts.
Affiliation(s)
- Ying Song: Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu 610065, PR China; Machine Intelligence Laboratory, College of Computer Science, Sichuan University, No. 24, South Section 1 of the First Ring Road, Chengdu 610065, PR China
- Junjie Hu: Machine Intelligence Laboratory, College of Computer Science, Sichuan University, No. 24, South Section 1 of the First Ring Road, Chengdu 610065, PR China
- Qiang Wang: Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu 610065, PR China; Machine Intelligence Laboratory, College of Computer Science, Sichuan University, No. 24, South Section 1 of the First Ring Road, Chengdu 610065, PR China
- Chengrong Yu: Machine Intelligence Laboratory, College of Computer Science, Sichuan University, No. 24, South Section 1 of the First Ring Road, Chengdu 610065, PR China
- Jiachong Su: Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu 610065, PR China
- Lin Chen: Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu 610065, PR China
- Xiaorui Jiang: Department of Oncology, First People's Hospital of Chengdu, No. 18, Wanxiang North Road, High-tech Zone, Chengdu 610041, PR China
- Bo Chen: Department of Oncology, First People's Hospital of Chengdu, No. 18, Wanxiang North Road, High-tech Zone, Chengdu 610041, PR China
- Lei Zhang: Department of Oncology, Second People's Hospital of Chengdu, Chengdu, PR China
- Qian Yu: Department of Oncology, Second People's Hospital of Chengdu, Chengdu, PR China
- Ping Li: Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu 610065, PR China
- Feng Wang: Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu 610065, PR China
- Sen Bai: Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu 610065, PR China
- Yong Luo: Cancer Center, West China Hospital, Sichuan University, No. 37 Guo Xue Alley, Chengdu 610065, PR China
- Zhang Yi: Machine Intelligence Laboratory, College of Computer Science, Sichuan University, No. 24, South Section 1 of the First Ring Road, Chengdu 610065, PR China

5. Pan S, Chang CW, Wang T, Wynne J, Hu M, Lei Y, Liu T, Patel P, Roper J, Yang X. Abdomen CT multi-organ segmentation using token-based MLP-Mixer. Med Phys 2023; 50:3027-3038. PMID: 36463516; PMCID: PMC10175083; DOI: 10.1002/mp.16135.
Abstract
BACKGROUND Manual contouring is labor-intensive, time-consuming, and subject to intra- and inter-observer variability. An automated deep learning approach to fast and accurate contouring and segmentation is desirable during radiotherapy treatment planning. PURPOSE This work investigates an efficient deep-learning-based segmentation algorithm for abdominal computed tomography (CT) to facilitate radiation treatment planning. METHODS We propose a novel deep-learning model utilizing a U-shaped multi-layer perceptron mixer (MLP-Mixer) and convolutional neural network (CNN) for multi-organ segmentation in abdominal CT images. The proposed model has a structure similar to V-net, with each convolutional block replaced by a proposed MLP-Convolutional block. The MLP-Convolutional block consists of three components: an early convolutional block for local feature extraction and feature resampling, a token-based MLP-Mixer layer for capturing global features with high efficiency, and a token projector for pixel-level detail recovery. We evaluate the proposed network using: (1) an institutional dataset with 60 patient cases and (2) a public dataset (BTCV) with 30 patient cases. Network performance was quantitatively evaluated in three domains: (1) volume similarity between the ground-truth contours and the network predictions, using the Dice score coefficient (DSC), sensitivity, and precision; (2) surface similarity, using the Hausdorff distance (HD), mean surface distance (MSD) and residual mean square distance (RMS); and (3) computational complexity, reported as the number of network parameters, training time, and inference time. The performance of the proposed network was compared with other state-of-the-art networks. RESULTS On the institutional dataset, the proposed network achieved the following volume similarity measures when averaged over all organs: DSC = 0.912, sensitivity = 0.917, precision = 0.917; the average surface similarities were HD = 11.95 mm, MSD = 1.90 mm, RMS = 3.86 mm. The proposed network achieved DSC = 0.786 and HD = 9.04 mm on the public dataset. The network also showed statistically significant improvement, evaluated by a two-tailed Wilcoxon Mann-Whitney U test, on the right lung (MSD, maximum p-value 0.001), spinal cord (sensitivity, precision, HD and RMS, p-values from 0.001 to 0.039), and stomach (DSC, maximum p-value 0.01) over all competing networks. On the public dataset, the network showed statistically significant improvement on the pancreas (HD, maximum p-value 0.006) and the left (HD, maximum p-value 0.022) and right adrenal glands (DSC, maximum p-value 0.026). On both datasets, the proposed method generated contours in less than 5 s. Overall, the proposed MLP-Vnet demonstrates comparable or better performance than competing methods with much lower memory complexity and higher speed. CONCLUSIONS The proposed MLP-Vnet demonstrates superior segmentation performance, in terms of accuracy and efficiency, relative to state-of-the-art methods. This reliable and efficient method demonstrates potential to streamline clinical workflows in abdominal radiotherapy, which may be especially important for online adaptive treatments.
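To make the "token-based MLP-Mixer layer" concrete, here is a minimal PyTorch sketch of a standard Mixer block of the kind described; the layer sizes are illustrative assumptions, and this is not the authors' MLP-Vnet code.

```python
import torch
import torch.nn as nn

class MixerBlock(nn.Module):
    """One MLP-Mixer block: token-mixing MLP followed by channel-mixing MLP."""
    def __init__(self, num_tokens, dim, token_hidden=256, channel_hidden=512):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.token_mlp = nn.Sequential(
            nn.Linear(num_tokens, token_hidden), nn.GELU(),
            nn.Linear(token_hidden, num_tokens))
        self.norm2 = nn.LayerNorm(dim)
        self.channel_mlp = nn.Sequential(
            nn.Linear(dim, channel_hidden), nn.GELU(),
            nn.Linear(channel_hidden, dim))

    def forward(self, x):                          # x: (batch, tokens, channels)
        y = self.norm1(x).transpose(1, 2)          # (batch, channels, tokens)
        x = x + self.token_mlp(y).transpose(1, 2)  # mix information across tokens
        x = x + self.channel_mlp(self.norm2(x))    # mix information across channels
        return x

# e.g., 196 tokens from a patchified CT slice, 128 channels per token
out = MixerBlock(num_tokens=196, dim=128)(torch.randn(2, 196, 128))
```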
Affiliation(s)
- Shaoyan Pan: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA; Department of Biomedical Informatics, Emory University, Atlanta, GA 30322, USA
- Chih-Wei Chang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tonghe Wang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Jacob Wynne: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Mingzhe Hu: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA; Department of Biomedical Informatics, Emory University, Atlanta, GA 30322, USA
- Yang Lei: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tian Liu: Department of Radiation Oncology, Mount Sinai Medical Center, New York, NY 10029, USA
- Pretesh Patel: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Justin Roper: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA; Department of Biomedical Informatics, Emory University, Atlanta, GA 30322, USA

6. Lucido JJ, DeWees TA, Leavitt TR, Anand A, Beltran CJ, Brooke MD, Buroker JR, Foote RL, Foss OR, Gleason AM, Hodge TL, Hughes CO, Hunzeker AE, Laack NN, Lenz TK, Livne M, Morigami M, Moseley DJ, Undahl LM, Patel Y, Tryggestad EJ, Walker MZ, Zverovitch A, Patel SH. Validation of clinical acceptability of deep-learning-based automated segmentation of organs-at-risk for head-and-neck radiotherapy treatment planning. Front Oncol 2023; 13:1137803. PMID: 37091160; PMCID: PMC10115982; DOI: 10.3389/fonc.2023.1137803.
Abstract
Introduction Organ-at-risk segmentation for head and neck cancer radiation therapy is a complex and time-consuming process (requiring up to 42 individual structures) and may delay the start of treatment or even limit access to function-preserving care. The feasibility of using a deep learning (DL)-based autosegmentation model to reduce contouring time without compromising contour accuracy was assessed through a blinded randomized trial of radiation oncologists (ROs) using retrospective, de-identified patient data. Methods Two head and neck expert ROs used dedicated time to create gold standard (GS) contours on computed tomography (CT) images. 445 CTs were used to train a custom 3D U-Net DL model covering 42 organs-at-risk, with an additional 20 CTs held out for the randomized trial. For each held-out patient dataset, one of the eight participating ROs was randomly allocated to review and revise the contours produced by the DL model, while another reviewed contours produced by a medical dosimetry assistant (MDA), both blinded to their origin. The time required for MDAs and ROs to contour was recorded, and the unrevised DL contours, as well as the RO-revised contours from the MDAs and the DL model, were compared to the GS for that patient. Results Mean time for initial MDA contouring was 2.3 hours (range, 1.6-3.8 hours) and RO revision took 1.1 hours (range, 0.4-4.4 hours), compared to 0.7 hours (range, 0.1-2.0 hours) for RO revision of the DL contours. Total time was reduced by 76% (95% CI: 65%-88%) and RO revision time was reduced by 35% (95% CI: -39% to 91%). For all geometric and dosimetric metrics computed, agreement with the GS was equivalent or significantly greater (p<0.05) for RO-revised DL contours compared to the RO-revised MDA contours, including the volumetric Dice similarity coefficient (VDSC), surface DSC, added path length, and the 95% Hausdorff distance. 32 OARs (76%) had a mean VDSC greater than 0.8 for the RO-revised DL contours, compared to 20 (48%) for RO-revised MDA contours and 34 (81%) for the unrevised DL OARs. Conclusion DL autosegmentation demonstrated significant time savings for organ-at-risk contouring while improving agreement with the institutional GS, indicating comparable accuracy of the DL model. Integration into clinical practice with a prospective evaluation is currently underway.
Affiliation(s)
- J. John Lucido: Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Todd A. DeWees: Department of Health Sciences Research, Mayo Clinic, Phoenix, AZ, United States
- Todd R. Leavitt: Department of Health Sciences Research, Mayo Clinic, Phoenix, AZ, United States
- Aman Anand: Department of Radiation Oncology, Mayo Clinic, Phoenix, AZ, United States
- Chris J. Beltran: Department of Radiation Oncology, Mayo Clinic, Jacksonville, FL, United States
- Justine R. Buroker: Research Services, Comprehensive Cancer Center, Mayo Clinic, Rochester, MN, United States
- Robert L. Foote: Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Olivia R. Foss: Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, MN, United States
- Angela M. Gleason: Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, MN, United States
- Teresa L. Hodge: Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Ashley E. Hunzeker: Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Nadia N. Laack: Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Tamra K. Lenz: Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Douglas J. Moseley: Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Lisa M. Undahl: Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Yojan Patel: Google Health, Mountain View, CA, United States
- Erik J. Tryggestad: Department of Radiation Oncology, Mayo Clinic, Rochester, MN, United States
- Samir H. Patel: Department of Radiation Oncology, Mayo Clinic, Phoenix, AZ, United States
(Affiliations for the remaining co-authors are not listed in the source.)

7. Douglass M, Gorayski P, Patel S, Santos A. Synthetic cranial MRI from 3D optical surface scans using deep learning for radiation therapy treatment planning. Phys Eng Sci Med 2023; 46:367-375. PMID: 36752996; PMCID: PMC10030422; DOI: 10.1007/s13246-023-01229-4.
Abstract
BACKGROUND Optical scanning technologies are increasingly being utilised to supplement treatment workflows in radiation oncology, such as surface-guided radiotherapy or 3D printing of custom bolus. One limitation of optical scanning devices is the absence of internal anatomical information for the patient being scanned. As a result, conventional radiation therapy treatment planning using this imaging modality is not feasible. Deep learning is useful for automating various manual tasks in radiation oncology, most notably organ segmentation and treatment planning. Deep learning models have also been used to transform MRI datasets into synthetic CT datasets, facilitating the development of MRI-only radiation therapy planning. AIMS To train a pix2pix generative adversarial network to transform 3D optical scan data into estimated MRI datasets for a given patient, providing additional anatomical data for a select few radiation therapy treatment sites. The proposed network may provide useful anatomical information for treatment planning of surface mould brachytherapy, total body irradiation, and total skin electron therapy, for example, without delivering any imaging dose. METHODS A 2D pix2pix GAN was trained on 15,000 axial MRI slices of healthy adult brains paired with corresponding external mask slices. The model was validated on a further 5000 previously unseen external mask slices. The predictions were compared with the "ground-truth" MRI slices using the multi-scale structural similarity index (MSSI) metric. A certified neuro-radiologist was subsequently consulted to provide an independent review of the model's performance in terms of anatomical accuracy and consistency. The network was then applied to a 3D photogrammetry scan of a test subject to demonstrate the feasibility of this novel technique. RESULTS The trained pix2pix network predicted MRI slices with a mean MSSI of 0.831 ± 0.057 for the 5000 validation images, indicating that it is possible to estimate a significant proportion of a patient's gross cranial anatomy from the exterior contour. When independently reviewed by a certified neuro-radiologist, the model's performance was described as "quite amazing, but there are limitations in the regions where there is wide variation within the normal population." When the trained network was applied to a 3D model of a human subject acquired using optical photogrammetry, the network could estimate the corresponding MRI volume for that subject with good qualitative accuracy. However, a ground-truth MRI baseline was not available for quantitative comparison. CONCLUSIONS A deep learning model was developed to transform 3D optical scan data of a patient into an estimated MRI volume, potentially increasing the usefulness of optical scanning in radiation therapy planning. This work demonstrates that much of the human cranial anatomy can be predicted from the external shape of the head and may provide an additional source of valuable imaging data. Further research is required to investigate the feasibility of this approach in a clinical setting and to further improve the model's accuracy.
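The MSSI metric compares predicted and ground-truth slices for structural similarity at multiple scales. A simplified scikit-image sketch of that idea follows, averaging single-scale SSIM over a downsampling pyramid; the paper's exact MSSI implementation may weight the scales differently, so treat this as an assumption-laden illustration.

```python
import numpy as np
from skimage.metrics import structural_similarity
from skimage.transform import rescale

def ms_ssim_like(img_a: np.ndarray, img_b: np.ndarray, scales: int = 3) -> float:
    """Average SSIM over successively downsampled versions of two 2D images."""
    a, b = img_a.astype(np.float64), img_b.astype(np.float64)
    scores = []
    for _ in range(scales):
        scores.append(structural_similarity(a, b, data_range=a.max() - a.min()))
        a = rescale(a, 0.5, anti_aliasing=True)  # halve resolution for next scale
        b = rescale(b, 0.5, anti_aliasing=True)
    return float(np.mean(scores))
```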
Affiliation(s)
- Michael Douglass: Department of Radiation Oncology, Royal Adelaide Hospital, Adelaide, SA, 5000, Australia; Australian Bragg Centre for Proton Therapy and Research, SAHMRI, Adelaide, SA, 5000, Australia; School of Physical Sciences, University of Adelaide, Adelaide, SA, 5005, Australia
- Peter Gorayski: Department of Radiation Oncology, Royal Adelaide Hospital, Adelaide, SA, 5000, Australia; Australian Bragg Centre for Proton Therapy and Research, SAHMRI, Adelaide, SA, 5000, Australia; University of South Australia, Allied Health & Human Performance, Adelaide, SA, 5000, Australia
- Sandy Patel: Department of Radiology, Royal Adelaide Hospital, Adelaide, SA, 5000, Australia
- Alexandre Santos: Department of Radiation Oncology, Royal Adelaide Hospital, Adelaide, SA, 5000, Australia; Australian Bragg Centre for Proton Therapy and Research, SAHMRI, Adelaide, SA, 5000, Australia; School of Physical Sciences, University of Adelaide, Adelaide, SA, 5005, Australia

8. Hu M, Wang Z, Hu X, Wang Y, Wang G, Ding H, Bian M. High-resolution computed tomography diagnosis of pneumoconiosis complicated with pulmonary tuberculosis based on cascading deep supervision U-Net. Comput Methods Programs Biomed 2022; 226:107151. PMID: 36179657; DOI: 10.1016/j.cmpb.2022.107151.
Abstract
OBJECTIVE Pulmonary tuberculosis can promote the deterioration of pneumoconiosis, leading to higher mortality. This study explores the diagnostic value of a cascading deep supervision U-Net (CSNet) model in pneumoconiosis complicated with pulmonary tuberculosis. METHODS A total of 162 patients with pneumoconiosis treated in our hospital were enrolled. Patients were randomly divided into a training set (n = 113) and a test set (n = 49) in a 7:3 ratio. Based on high-resolution computed tomography (HRCT), traditional U-Net, supervision U-Net (SNet), and CSNet prediction models were constructed. Dice similarity coefficient, precision, recall, volumetric overlap error, and relative volume difference were used to evaluate the segmentation models. The area under the receiver operating characteristic curve (AUC) represents the prediction efficiency of each model. RESULTS There were no statistically significant differences in gender, age, number of positive patients, or dust exposure time between the training and test sets (P > 0.05). The segmentation results of CSNet were better than those of the traditional U-Net and SNet models. The AUC of the CSNet model was 0.947 (95% CI: 0.900-0.994), higher than that of the traditional U-Net model. CONCLUSION The CSNet based on chest HRCT proposed in this study is superior to the traditional U-Net segmentation method in segmenting pneumoconiosis complicated with pulmonary tuberculosis. It has good prediction efficiency and can provide additional clinical diagnostic value.
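For reference, an AUC such as the 0.947 reported here is computed from per-patient labels and model scores; a minimal scikit-learn sketch with invented numbers follows.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical per-patient outputs on a test set:
# y_true = 1 if pneumoconiosis is complicated with tuberculosis, else 0.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
# y_score = model's predicted probability for the positive class.
y_score = np.array([0.91, 0.12, 0.78, 0.66, 0.30, 0.05, 0.84, 0.41])

print(f"AUC = {roc_auc_score(y_true, y_score):.3f}")
```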
Affiliation(s)
- Maoneng Hu: Imaging Center, The Third Clinical College of Hefei of Anhui Medical University, The Third People's Hospital of Hefei, Hefei 230022, China
- Zichen Wang: Imaging Center, The Third Clinical College of Hefei of Anhui Medical University, The Third People's Hospital of Hefei, Hefei 230022, China
- Xinxin Hu: Imaging Center, The Third Clinical College of Hefei of Anhui Medical University, The Third People's Hospital of Hefei, Hefei 230022, China
- Yi Wang: Imaging Center, The Third Clinical College of Hefei of Anhui Medical University, The Third People's Hospital of Hefei, Hefei 230022, China
- Guoliang Wang: Imaging Center, The Third Clinical College of Hefei of Anhui Medical University, The Third People's Hospital of Hefei, Hefei 230022, China
- Huanhuan Ding: Imaging Center, The Third Clinical College of Hefei of Anhui Medical University, The Third People's Hospital of Hefei, Hefei 230022, China
- Mingmin Bian: Imaging Center, The Third Clinical College of Hefei of Anhui Medical University, The Third People's Hospital of Hefei, Hefei 230022, China

9. Artificial intelligence and machine learning in cancer imaging. Commun Med 2022; 2:133. PMID: 36310650; PMCID: PMC9613681; DOI: 10.1038/s43856-022-00199-0.
Abstract
An increasing array of tools is being developed using artificial intelligence (AI) and machine learning (ML) for cancer imaging. The development of an optimal tool requires multidisciplinary engagement to ensure that the appropriate use case is met, as well as to undertake robust development and testing prior to its adoption into healthcare systems. This multidisciplinary review highlights key developments in the field. We discuss the challenges and opportunities of AI and ML in cancer imaging; considerations for the development of algorithms into tools that can be widely used and disseminated; and the development of the ecosystem needed to promote growth of AI and ML in cancer imaging.

10. Analysis the Innovation Path on Psychological Ideological with Political Teaching in Universities by Big Data in New Era. Comput Math Methods Med 2022; 2022:4305886. PMID: 36110572; PMCID: PMC9470351; DOI: 10.1155/2022/4305886.
Abstract
Increasing life pressures and rapid economic development have created substantial mental health problems. Mental health is closely related to one's life values, and ideological and political education aims to foster optimistic attitudes and values. College students face enormous academic pressure, which can easily cause mental health problems. However, psychological and ideological education in universities still relies on traditional teaching methods, which reduces students' interest and learning efficiency, as well as their understanding of the teaching content. Big data methods can process complex research data and relationships and can help researchers discover factors relevant to psychological, ideological and political teaching. This study uses atrous (dilated) convolution and gated recurrent unit (GRU) techniques from big data analysis to model three characteristics in college psychological, ideological and political education: student behavior, mental health content, and ideological and political content. The results show that the atrous convolution and GRU methods can predict these three characteristics more accurately, helping educators identify more appropriate teaching methods.

11. Li J, Udupa JK, Odhner D, Tong Y, Torigian DA. SOMA: Subject-, object-, and modality-adapted precision atlas approach for automatic anatomy recognition and delineation in medical images. Med Phys 2021; 48:7806-7825. PMID: 34668207; PMCID: PMC8678400; DOI: 10.1002/mp.15308.
Abstract
PURPOSE In the multi-atlas segmentation (MAS) method, an atlas set large enough to cover the complete spectrum of population patterns of the target object will benefit segmentation quality. However, the difficulty of obtaining and generating such a large set of atlases and the computational burden required during segmentation make this approach impractical. In this paper, we propose a method called SOMA that selects subject-, object-, and modality-adapted precision atlases for automatic anatomy recognition in medical images with pathology, following the idea that different regions of the target object in a novel image can be recognized by different atlases with regionally best similarity, so that effective atlases need be neither globally similar to the target subject nor similar to the target object overall. METHODS The SOMA method consists of three main components: atlas building, object recognition, and object delineation. Considering the computational complexity, we utilize an all-to-template strategy to align all images to the same image space, belonging to the root image determined by a minimum spanning tree (MST) strategy among a subset of radiologically near-normal images. The object recognition process is composed of two stages: rough recognition and refined recognition. In rough recognition, subimage matching is conducted between the test image and each image of the whole atlas set, and only the atlas corresponding to the best-matched subimage contributes to the recognition map regionally. The frequency of best match for each atlas is recorded by a counter, and the atlases with the highest frequencies are selected as the precision atlases. In refined recognition, only the precision atlases are examined, and subimage matching is conducted with a nonlocal search to further increase the accuracy of boundary matching. Delineation is based on a U-Net-based deep learning network, where the original gray-scale image together with the fuzzy map from refined recognition composes a two-channel input to the network, and the output is a segmentation map of the target object. RESULTS Experiments were conducted on computed tomography (CT) images of different qualities in two body regions, head and neck (H&N) and thorax, from 298 subjects with nine objects and 241 subjects with six objects, respectively. Most objects achieved a localization error within two voxels after refined recognition, with marked improvement in localization accuracy from rough to refined recognition of 0.6-3 mm in H&N and 0.8-4.9 mm in thorax, and in delineation accuracy (Dice coefficient) from refined recognition to delineation of 0.01-0.11 in H&N and 0.01-0.18 in thorax. CONCLUSIONS The SOMA method shows high accuracy and robustness in anatomy recognition and delineation. The improvements from rough to refined recognition and further to delineation, as well as the immunity of recognition accuracy to varying image and object qualities, demonstrate the core principles of SOMA, whereby segmentation accuracy increases with precision atlases and gradually refined object matching.
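The all-to-template alignment step selects a root image via a minimum spanning tree over pairwise image dissimilarities. A minimal SciPy sketch of that selection follows; the dissimilarity measure and root criterion here are illustrative assumptions, not the authors' exact choices.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

# Hypothetical pairwise dissimilarities between n candidate images
# (e.g., 1 - normalized cross-correlation after rough alignment).
rng = np.random.default_rng(0)
d = rng.random((6, 6))
dist = (d + d.T) / 2.0          # symmetrize
np.fill_diagonal(dist, 0.0)     # no self-loops

mst = minimum_spanning_tree(dist).toarray()
mst = mst + mst.T               # symmetrize for per-node edge-weight sums

# One plausible root criterion: the image whose MST edges are cheapest overall.
root = int(np.argmin(mst.sum(axis=1)))
print("root image index:", root)
```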
Affiliation(s)
- Jieyu Li: Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, Shanghai, China; Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Jayaram K. Udupa: Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Dewey Odhner: Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Yubing Tong: Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Drew A. Torigian: Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA

12. Sun R, Deutsch E, Fournier L. [Artificial intelligence and medical imaging]. Bull Cancer 2021; 109:83-88. PMID: 34782120; DOI: 10.1016/j.bulcan.2021.09.009.
Abstract
The use of artificial intelligence methods for image recognition is one of the most developed branches of the AI field and these technologies are now commonly used in our daily lives. In the field of medical imaging, approaches based on artificial intelligence are particularly promising, with numerous applications and a strong interest in the search for new biomarkers. Here, we will present the general methods used in these approaches as well as the potential areas of application.
Affiliation(s)
- Roger Sun: Gustave Roussy Cancer Campus, université Paris-Saclay, département de Radiothérapie, Inserm U1030, 94805 Villejuif, France
- Eric Deutsch: Gustave Roussy Cancer Campus, université Paris-Saclay, département de Radiothérapie, Inserm U1030, 94805 Villejuif, France
- Laure Fournier: Hôpital Européen Georges-Pompidou, département de radiologie, 20, rue Leblanc, 75015 Paris, France

13. Liu Z, Liu F, Chen W, Tao Y, Liu X, Zhang F, Shen J, Guan H, Zhen H, Wang S, Chen Q, Chen Y, Hou X. Automatic Segmentation of Clinical Target Volume and Organs-at-Risk for Breast Conservative Radiotherapy Using a Convolutional Neural Network. Cancer Manag Res 2021; 13:8209-8217. PMID: 34754241; PMCID: PMC8572021; DOI: 10.2147/cmar.s330249.
Abstract
Objective Delineation of the clinical target volume (CTV) and organs at risk (OARs) is important for radiotherapy but is time-consuming. We trained and evaluated a U-ResNet model to provide fast and consistent auto-segmentation. Methods We collected CT scans from 160 patients with breast cancer who underwent breast-conserving surgery (BCS) and were treated with radiotherapy. The CTV and OARs were delineated manually and used for model training. The Dice similarity coefficient (DSC) and 95th percentile Hausdorff distance (95HD) were used to assess the performance of our model. Manually delineated CTVs and OARs served as ground truth (GT) masks, and artificial intelligence (AI) masks were generated by the proposed model. Two clinicians scored randomly presented CTV contours; the score differences and the consistency between the two clinicians were tested. The time cost for auto-delineation was evaluated. Results The mean DSC values of the proposed method were 0.94, 0.95, 0.94, 0.96, 0.96 and 0.93 for the breast CTV, contralateral breast, heart, right lung, left lung and spinal cord, respectively. The mean 95HD values were 4.31 mm, 3.59 mm, 4.86 mm, 3.18 mm, 2.79 mm and 4.37 mm for the same structures. The average CTV scores for AI and GT were 2.89 versus 2.92 when evaluated by oncologist A (P=0.612), and 2.75 versus 2.83 by oncologist B (P=0.213), with no statistically significant differences. The consistency between the two clinicians was poor (kappa=0.282). The time for auto-segmentation of the CTV and OARs was 10.03 s. Conclusion Our proposed model (U-ResNet) can improve the efficiency and accuracy of delineation compared with U-Net, performing comparably to segmentations generated by oncologists.
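A minimal sketch of the 95th percentile Hausdorff distance (95HD) over two surface point sets follows; definitions vary slightly across tools (percentile of the pooled directed distances, as here, versus the maximum of per-direction percentiles), and this is not the authors' implementation.

```python
import numpy as np
from scipy.spatial.distance import cdist

def hd95(surface_a: np.ndarray, surface_b: np.ndarray) -> float:
    """95th percentile Hausdorff distance between two (N, 3) point sets in mm."""
    d = cdist(surface_a, surface_b)   # all pairwise distances
    a_to_b = d.min(axis=1)            # nearest-surface distance for each point of A
    b_to_a = d.min(axis=0)            # and for each point of B
    return float(np.percentile(np.concatenate([a_to_b, b_to_a]), 95))
```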
Affiliation(s)
- Zhikai Liu: Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, People's Republic of China
- Fangjie Liu: Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, 510060, People's Republic of China
- Wanqi Chen: Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, People's Republic of China
- Yinjie Tao: Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, People's Republic of China
- Xia Liu: Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, People's Republic of China
- Fuquan Zhang: Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, People's Republic of China
- Jing Shen: Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, People's Republic of China
- Hui Guan: Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, People's Republic of China
- Hongnan Zhen: Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, People's Republic of China
- Shaobin Wang: MedMind Technology Co., Ltd., Beijing, 100055, People's Republic of China
- Qi Chen: MedMind Technology Co., Ltd., Beijing, 100055, People's Republic of China
- Yu Chen: MedMind Technology Co., Ltd., Beijing, 100055, People's Republic of China
- Xiaorong Hou: Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, People's Republic of China

14. Chen M, Wu S, Zhao W, Zhou Y, Zhou Y, Wang G. Application of deep learning to auto-delineation of target volumes and organs at risk in radiotherapy. Cancer Radiother 2021; 26:494-501. PMID: 34711488; DOI: 10.1016/j.canrad.2021.08.020.
Abstract
Technological advances have heralded the arrival of precision radiotherapy (RT), increasing the therapeutic ratio and decreasing the side effects of treatment. Contouring of target volumes (TVs) and organs at risk (OARs) in RT is a complicated process. In recent years, automatic contouring of TVs and OARs has developed rapidly owing to advances in deep learning (DL). This technology has the potential to save time and to reduce intra- and inter-observer variability. In this paper, the authors provide an overview of RT, introduce the concept of DL, summarize the data characteristics of the included literature, outline possible future challenges for DL, and discuss possible research directions.
Affiliation(s)
- M Chen: Department of Radiation Oncology, First Affiliated Hospital, Bengbu Medical College, Bengbu, Anhui 233004, China
- S Wu: Department of Radiation Oncology, First Affiliated Hospital, Bengbu Medical College, Bengbu, Anhui 233004, China
- W Zhao: Bengbu Medical College, Bengbu, Anhui 233030, China
- Y Zhou: Department of Radiation Oncology, First Affiliated Hospital, Bengbu Medical College, Bengbu, Anhui 233004, China
- Y Zhou: Department of Radiation Oncology, First Affiliated Hospital, Bengbu Medical College, Bengbu, Anhui 233004, China
- G Wang: Department of Radiation Oncology, First Affiliated Hospital, Bengbu Medical College, Bengbu, Anhui 233004, China

15. Douglass MJJ, Keal JA. DeepWL: Robust EPID based Winston-Lutz analysis using deep learning, synthetic image generation and optical path-tracing. Phys Med 2021; 89:306-316. PMID: 34492498; DOI: 10.1016/j.ejmp.2021.08.012.
Abstract
Radiation therapy requires clinical linear accelerators to be mechanically and dosimetrically calibrated to a high standard. One important quality assurance test is the Winston-Lutz test, which localises the radiation isocentre of the linac. In the current work, we demonstrate a novel method of analysing EPID-based Winston-Lutz QA images using a deep learning model trained only on synthetic image data. In addition, we propose a novel method of generating the synthetic WL images and associated 'ground-truth' masks using an optical path-tracing engine to 'fake' mega-voltage EPID images. The model, called DeepWL, was trained on 1500 synthetic WL images using data augmentation techniques for 180 epochs. The model was built using Keras with a TensorFlow backend on an Intel Core i5-6500T CPU and trained in approximately 15 h. DeepWL produced ball bearing and multi-leaf collimator field segmentations with mean Dice coefficients of 0.964 and 0.994, respectively, on previously unseen synthetic testing data. When DeepWL was applied to WL data measured on an EPID, the predicted mean displacements were statistically similar to those of the Canny edge detection method. However, the DeepWL predictions for the ball bearing locations correlated better with manual annotations than the Canny edge detection algorithm. DeepWL was demonstrated to analyse Winston-Lutz images with an accuracy suitable for routine linac quality assurance, with some statistical evidence that it may outperform Canny edge detection methods in terms of segmentation robustness and the resultant displacement predictions.
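The segmentation quality here is reported as a Dice coefficient; a common Keras-style soft-Dice metric of the kind that could produce such numbers is sketched below. This is an assumption for illustration, not the DeepWL source code.

```python
import tensorflow as tf

def dice_coefficient(y_true, y_pred, smooth: float = 1.0):
    """Soft Dice between predicted and reference segmentation masks."""
    y_true_f = tf.reshape(y_true, [-1])   # flatten both masks
    y_pred_f = tf.reshape(y_pred, [-1])
    intersection = tf.reduce_sum(y_true_f * y_pred_f)
    return (2.0 * intersection + smooth) / (
        tf.reduce_sum(y_true_f) + tf.reduce_sum(y_pred_f) + smooth)

# Usage: model.compile(optimizer="adam", loss="binary_crossentropy",
#                      metrics=[dice_coefficient])
```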
Affiliation(s)
- Michael John James Douglass: School of Physical Sciences, University of Adelaide, Adelaide 5005, South Australia, Australia; Department of Medical Physics, Royal Adelaide Hospital, Adelaide 5000, South Australia, Australia
- James Alan Keal: School of Physical Sciences, University of Adelaide, Adelaide 5005, South Australia, Australia

16. Maleki F, Le WT, Sananmuang T, Kadoury S, Forghani R. Machine Learning Applications for Head and Neck Imaging. Neuroimaging Clin N Am 2021; 30:517-529. PMID: 33039001; DOI: 10.1016/j.nic.2020.08.003.
Abstract
The head and neck (HN) consists of a large number of vital anatomic structures within a compact area. Imaging plays a central role in the diagnosis and management of major disorders affecting the HN. This article reviews the recent applications of machine learning (ML) in HN imaging with a focus on deep learning approaches. It categorizes ML applications in HN imaging into deep learning and traditional ML applications and provides examples of each category. It also discusses the main challenges facing the successful deployment of ML-based applications in the clinical setting and provides suggestions for addressing these challenges.
Affiliation(s)
- Farhad Maleki: Augmented Intelligence & Precision Health Laboratory (AIPHL), Department of Radiology & Research Institute of the McGill University Health Centre, 5252 Boulevard de Maisonneuve Ouest, Montreal, Quebec H4A 3S5, Canada
- William Trung Le: Polytechnique Montreal, PO Box 6079, succ. Centre-ville, Montreal, Quebec H3C 3A7, Canada
- Thiparom Sananmuang: Department of Diagnostic and Therapeutic Radiology and Research, Faculty of Medicine Ramathibodi Hospital, Ratchathewi, Bangkok 10400, Thailand
- Samuel Kadoury: Polytechnique Montreal, PO Box 6079, succ. Centre-ville, Montreal, Quebec H3C 3A7, Canada; CHUM Research Center, 900 St Denis Street, Montreal, Quebec H2X 0A9, Canada
- Reza Forghani: Augmented Intelligence & Precision Health Laboratory (AIPHL), Department of Radiology & Research Institute of the McGill University Health Centre, 5252 Boulevard de Maisonneuve Ouest, Montreal, Quebec H4A 3S5, Canada; Department of Radiology, McGill University, 1650 Cedar Avenue, Montreal, Quebec H3G 1A4, Canada; Segal Cancer Centre, Lady Davis Institute for Medical Research, Jewish General Hospital, 3755 Cote Ste-Catherine Road, Montreal, Quebec H3T 1E2, Canada; Gerald Bronfman Department of Oncology, McGill University, Suite 720, 5100 Maisonneuve Boulevard West, Montreal, Quebec H4A 3T2, Canada; Department of Otolaryngology, Head and Neck Surgery, Royal Victoria Hospital, McGill University Health Centre, 1001 Decarie Boulevard, Montreal, Quebec H3A 3J1, Canada

17. Robert C, Munoz A, Moreau D, Mazurier J, Sidorski G, Gasnier A, Beldjoudi G, Grégoire V, Deutsch E, Meyer P, Simon L. Clinical implementation of deep-learning based auto-contouring tools - Experience of three French radiotherapy centers. Cancer Radiother 2021; 25:607-616. PMID: 34389243; DOI: 10.1016/j.canrad.2021.06.023.
Abstract
Deep-learning (DL)-based auto-contouring solutions have recently been proposed as a convincing alternative to decrease the workload of target volume and organ-at-risk (OAR) delineation in radiotherapy planning and to improve inter-observer consistency. However, there is minimal literature on the clinical implementation of such algorithms in routine practice. In this paper we first present an update on the state of the art of DL-based solutions. We then summarize recent recommendations proposed by the European Society for Radiotherapy and Oncology (ESTRO) to be followed before any clinical implementation of artificial intelligence-based solutions. The last section describes the methodology carried out by three French radiation oncology departments to deploy CE-marked commercial solutions. Based on the information collected, a majority of the OARs proposed by the manufacturers are retained by the centers, validating the usefulness of DL-based models for decreasing clinicians' workload. Target volumes, with the exception of lymph node areas in the breast, head and neck and pelvic regions, whole breast, breast wall, prostate and seminal vesicles, are not available in the three commercial solutions at this time. No workflows are currently implemented to continuously improve the models, but in some solutions the models can be adapted/retrained during the commissioning phase to best fit local practices. In the reported experiences, automatic workflows were implemented to limit human interactions and make the workflow more fluid. The recommendations published by the ESTRO group will be important for guiding physicists in the clinical implementation of patient-specific and regular quality assurance.
Affiliation(s)
- C Robert: Department of Radiotherapy, Gustave-Roussy, Villejuif, France
- A Munoz: Department of Radiotherapy, Centre Léon-Bérard, Lyon, France
- D Moreau: Department of Radiotherapy, Hôpital Européen Georges-Pompidou, Paris, France
- J Mazurier: Department of Radiotherapy, Clinique Pasteur-Oncorad, Toulouse, France
- G Sidorski: Department of Radiotherapy, Clinique Pasteur-Oncorad, Toulouse, France
- A Gasnier: Department of Radiotherapy, Gustave-Roussy, Villejuif, France
- G Beldjoudi: Department of Radiotherapy, Centre Léon-Bérard, Lyon, France
- V Grégoire: Department of Radiotherapy, Centre Léon-Bérard, Lyon, France
- E Deutsch: Department of Radiotherapy, Gustave-Roussy, Villejuif, France
- P Meyer: Service d'Oncologie Radiothérapie, Institut de Cancérologie Strasbourg Europe (Icans), Strasbourg, France
- L Simon: Institut Claudius Regaud (ICR), Institut Universitaire du Cancer de Toulouse - Oncopole (IUCT-O), Toulouse, France

18. Samarasinghe G, Jameson M, Vinod S, Field M, Dowling J, Sowmya A, Holloway L. Deep learning for segmentation in radiation therapy planning: a review. J Med Imaging Radiat Oncol 2021; 65:578-595. PMID: 34313006; DOI: 10.1111/1754-9485.13286.
Abstract
Segmentation of organs and structures, as either targets or organs-at-risk, has a significant influence on the success of radiation therapy. Manual segmentation is a tedious and time-consuming task for clinicians, and inter-observer variability can affect the outcomes of radiation therapy. The recent surge of interest in deep neural networks has produced many powerful auto-segmentation methods, most of them variations of convolutional neural networks (CNNs). This paper presents a descriptive review of the literature on deep learning techniques for segmentation in radiation therapy planning. The most common CNN architecture across the four clinical subsites considered was U-net, with the majority of deep learning segmentation articles focussed on head and neck normal tissue structures. The most common data sets were CT images from an in-house source, along with some public data sets. N-fold cross-validation was commonly employed; however, not all work separated training, test and validation data sets. This area of research is expanding rapidly. To facilitate comparisons of proposed methods and benchmarking, consistent use of appropriate metrics and independent validation should be carefully considered.
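Patient-level data splitting, which the review flags as a common weakness, can be sketched as follows — a minimal illustration using scikit-learn, with hypothetical patient IDs standing in for real scans:

```python
import numpy as np
from sklearn.model_selection import GroupKFold, train_test_split

# Hypothetical cohort: 50 patients, 2 scans each. IDs are stand-ins.
patient_ids = np.repeat([f"pat{i:03d}" for i in range(50)], 2)
scans = np.arange(len(patient_ids))                 # stand-ins for CT volumes

# Hold out an independent test set first, at the patient level.
trainval, test = train_test_split(np.unique(patient_ids),
                                  test_size=0.2, random_state=0)
mask = np.isin(patient_ids, trainval)

# N-fold cross-validation (N = 5) on the remaining patients. GroupKFold
# guarantees no patient contributes scans to both train and validation.
cv = GroupKFold(n_splits=5)
for fold, (tr, va) in enumerate(cv.split(scans[mask], groups=patient_ids[mask])):
    print(f"fold {fold}: {len(tr)} train scans, {len(va)} validation scans")
```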
Collapse
Affiliation(s)
- Gihan Samarasinghe
- School of Computer Science and Engineering, University of New South Wales, Sydney, New South Wales, Australia.,Ingham Institute for Applied Medical Research and South Western Sydney Clinical School, UNSW, Liverpool, New South Wales, Australia
| | - Michael Jameson
- Genesiscare, Sydney, New South Wales, Australia.,St Vincent's Clinical School, University of New South Wales, Sydney, New South Wales, Australia
| | - Shalini Vinod
- Ingham Institute for Applied Medical Research and South Western Sydney Clinical School, UNSW, Liverpool, New South Wales, Australia.,Liverpool Cancer Therapy Centre, Liverpool Hospital, Liverpool, New South Wales, Australia
| | - Matthew Field
- Ingham Institute for Applied Medical Research and South Western Sydney Clinical School, UNSW, Liverpool, New South Wales, Australia.,Liverpool Cancer Therapy Centre, Liverpool Hospital, Liverpool, New South Wales, Australia
| | - Jason Dowling
- Commonwealth Scientific and Industrial Research Organisation, Australian E-Health Research Centre, Herston, Queensland, Australia
| | - Arcot Sowmya
- School of Computer Science and Engineering, University of New South Wales, Sydney, New South Wales, Australia
| | - Lois Holloway
- Ingham Institute for Applied Medical Research and South Western Sydney Clinical School, UNSW, Liverpool, New South Wales, Australia.,Liverpool Cancer Therapy Centre, Liverpool Hospital, Liverpool, New South Wales, Australia
| |
Collapse
|
19
|
Nikolov S, Blackwell S, Zverovitch A, Mendes R, Livne M, De Fauw J, Patel Y, Meyer C, Askham H, Romera-Paredes B, Kelly C, Karthikesalingam A, Chu C, Carnell D, Boon C, D'Souza D, Moinuddin SA, Garie B, McQuinlan Y, Ireland S, Hampton K, Fuller K, Montgomery H, Rees G, Suleyman M, Back T, Hughes CO, Ledsam JR, Ronneberger O. Clinically Applicable Segmentation of Head and Neck Anatomy for Radiotherapy: Deep Learning Algorithm Development and Validation Study. J Med Internet Res 2021; 23:e26151. [PMID: 34255661 PMCID: PMC8314151 DOI: 10.2196/26151] [Citation(s) in RCA: 118] [Impact Index Per Article: 39.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2020] [Revised: 02/10/2021] [Accepted: 04/30/2021] [Indexed: 12/16/2022] Open
Abstract
BACKGROUND Over half a million individuals are diagnosed with head and neck cancer each year globally. Radiotherapy is an important curative treatment for this disease, but it requires considerable manual time to delineate radiosensitive organs at risk. This planning process can delay treatment while also introducing interoperator variability, resulting in downstream radiation dose differences. Although auto-segmentation algorithms offer a potentially time-saving solution, the challenges in defining, quantifying, and achieving expert performance remain. OBJECTIVE Adopting a deep learning approach, we aim to demonstrate a 3D U-Net architecture that achieves expert-level performance in delineating 21 distinct head and neck organs at risk commonly segmented in clinical practice. METHODS The model was trained on a data set of 663 deidentified computed tomography scans acquired in routine clinical practice, with both segmentations taken from clinical practice and segmentations created by experienced radiographers as part of this research, all in accordance with consensus organ at risk definitions. RESULTS We demonstrated the model's clinical applicability by assessing its performance on a test set of 21 computed tomography scans from clinical practice, each with 21 organs at risk segmented by 2 independent experts. We also introduced the surface Dice similarity coefficient, a new metric for the comparison of organ delineation, to quantify the deviation between organ at risk surface contours rather than volumes, better reflecting the clinical task of correcting errors in automated organ segmentations. The model's generalizability was then demonstrated on 2 distinct open-source data sets, representing centers and countries different from those used in model training. CONCLUSIONS Deep learning is an effective and clinically applicable technique for the segmentation of the head and neck anatomy for radiotherapy. With appropriate validation studies and regulatory approvals, this system could improve the efficiency, consistency, and safety of radiotherapy pathways.
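The surface Dice similarity coefficient introduced in this study measures the fraction of each contour's surface that lies within a tolerance of the other contour's surface. A simplified sketch of the metric follows; published implementations differ in detail:

```python
import numpy as np
from scipy import ndimage

def surface_dice(a, b, tol_mm, spacing):
    """Surface Dice at tolerance tol_mm between two binary masks.

    Simplified form of the metric of Nikolov et al.: the fraction of each
    contour's surface lying within tol_mm of the other contour's surface.
    `spacing` is the voxel size in mm along each axis.
    """
    a, b = a.astype(bool), b.astype(bool)
    # Surface voxels = mask minus its erosion.
    surf_a = a & ~ndimage.binary_erosion(a)
    surf_b = b & ~ndimage.binary_erosion(b)
    # Distance (mm) from every voxel to the nearest surface voxel.
    dist_to_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    overlap = (dist_to_b[surf_a] <= tol_mm).sum() + (dist_to_a[surf_b] <= tol_mm).sum()
    return overlap / (surf_a.sum() + surf_b.sum())

# Toy example: two slightly shifted spheres on a 1 mm isotropic grid.
z, y, x = np.ogrid[:64, :64, :64]
gt   = (z - 32) ** 2 + (y - 32) ** 2 + (x - 32) ** 2 < 15 ** 2
pred = (z - 32) ** 2 + (y - 33) ** 2 + (x - 32) ** 2 < 15 ** 2
print(round(surface_dice(gt, pred, tol_mm=1.0, spacing=(1, 1, 1)), 3))
```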
Collapse
Affiliation(s)
| | | | | | - Ruheena Mendes
- University College London Hospitals NHS Foundation Trust, London, United Kingdom
| | | | | | | | | | | | | | | | | | | | - Dawn Carnell
- University College London Hospitals NHS Foundation Trust, London, United Kingdom
| | - Cheng Boon
- Clatterbridge Cancer Centre NHS Foundation Trust, Liverpool, United Kingdom
| | - Derek D'Souza
- University College London Hospitals NHS Foundation Trust, London, United Kingdom
| | - Syed Ali Moinuddin
- University College London Hospitals NHS Foundation Trust, London, United Kingdom
| | | | | | | | | | | | | | - Geraint Rees
- University College London, London, United Kingdom
| | | | | | | | | | | |
Collapse
|
20
|
Qiu B, van der Wel H, Kraeima J, Glas HH, Guo J, Borra RJH, Witjes MJH, van Ooijen PMA. Automatic Segmentation of Mandible from Conventional Methods to Deep Learning-A Review. J Pers Med 2021; 11:629. [PMID: 34357096 PMCID: PMC8307673 DOI: 10.3390/jpm11070629] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2021] [Revised: 06/26/2021] [Accepted: 06/28/2021] [Indexed: 01/05/2023] Open
Abstract
Medical imaging techniques, such as (cone beam) computed tomography and magnetic resonance imaging, have proven to be a valuable component of oral and maxillofacial surgery (OMFS). Accurate segmentation of the mandible from head and neck (H&N) scans is an important step in building a personalized 3D digital mandible model for 3D printing and treatment planning in OMFS. Segmented mandible structures are used to effectively visualize the mandible volumes and to evaluate particular mandible properties quantitatively. However, mandible segmentation is always challenging for both clinicians and researchers, due to complex structures and high-attenuation materials, such as teeth, dental fillings or metal implants, which easily lead to high noise and strong artifacts during scanning. Moreover, the size and shape of the mandible vary to a large extent between individuals. Therefore, mandible segmentation is a tedious and time-consuming task and requires adequate training to be performed properly. With the advancement of computer vision approaches, researchers have developed several algorithms to automatically segment the mandible over the last two decades. The objective of this review was to present the available fully and semi-automatic segmentation methods for the mandible published in the scientific literature. This review provides a clear description of the scientific advancements to clinicians and researchers in this field to help them develop novel automatic methods for clinical applications.
Collapse
Affiliation(s)
- Bingjiang Qiu
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands; (B.Q.); (H.v.d.W.); (J.K.); (H.H.G.); (M.J.H.W.)
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands;
- Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
| | - Hylke van der Wel
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands; (B.Q.); (H.v.d.W.); (J.K.); (H.H.G.); (M.J.H.W.)
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
| | - Joep Kraeima
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands; (B.Q.); (H.v.d.W.); (J.K.); (H.H.G.); (M.J.H.W.)
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
| | - Haye Hendrik Glas
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands; (B.Q.); (H.v.d.W.); (J.K.); (H.H.G.); (M.J.H.W.)
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
| | - Jiapan Guo
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands;
- Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
| | - Ronald J. H. Borra
- Medical Imaging Center (MIC), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands;
| | - Max Johannes Hendrikus Witjes
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands; (B.Q.); (H.v.d.W.); (J.K.); (H.H.G.); (M.J.H.W.)
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
| | - Peter M. A. van Ooijen
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands;
- Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
| |
Collapse
|
21
|
Fu Y, Lei Y, Wang T, Curran WJ, Liu T, Yang X. A review of deep learning based methods for medical image multi-organ segmentation. Phys Med 2021; 85:107-122. [PMID: 33992856 PMCID: PMC8217246 DOI: 10.1016/j.ejmp.2021.05.003] [Citation(s) in RCA: 77] [Impact Index Per Article: 25.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/27/2020] [Revised: 03/12/2021] [Accepted: 05/03/2021] [Indexed: 12/12/2022] Open
Abstract
Deep learning has revolutionized image processing and achieved state-of-the-art performance in many medical image segmentation tasks. Many deep learning-based methods have been published to segment different parts of the body for different medical applications. It is therefore necessary to summarize the current state of development of deep learning in the field of medical image segmentation. In this paper, we aim to provide a comprehensive review with a focus on multi-organ image segmentation, which is crucial for radiotherapy, where the tumor and organs-at-risk need to be contoured for treatment planning. We grouped the surveyed methods into two broad categories, 'pixel-wise classification' and 'end-to-end segmentation', and divided each category into subgroups according to network design. For each type, we listed the surveyed works, highlighted important contributions and identified specific challenges. Following the detailed review, we discussed the achievements, shortcomings and future potential of each category. To enable direct comparison, we listed the performance of the surveyed works that used thoracic and head-and-neck benchmark datasets.
Collapse
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
| | - Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA.
| |
Collapse
|
22
|
Yakar M, Etiz D. Artificial intelligence in radiation oncology. Artif Intell Med Imaging 2021; 2:13-31. [DOI: 10.35711/aimi.v2.i2.13] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/04/2021] [Revised: 03/30/2021] [Accepted: 04/20/2021] [Indexed: 02/06/2023] Open
Abstract
Artificial intelligence (AI) is a branch of computer science that tries to mimic human-like intelligence in machines, using computer software and algorithms to perform specific tasks without direct human input. Machine learning (ML) is a subfield of AI that uses data-driven algorithms that learn to imitate human behavior based on previous examples or experience. Deep learning is an ML technique that uses deep neural networks to create a model. The growth and sharing of data, increasing computing power, and developments in AI have initiated a transformation in healthcare. Advances in radiation oncology have produced a significant amount of data that must be integrated with computed tomography imaging, dosimetry, and imaging performed before each fraction. Each of the many algorithms used in radiation oncology has its own advantages and limitations, with different computational power requirements. The aim of this review is to summarize the radiotherapy (RT) process in workflow order by identifying specific areas in which quality and efficiency can be improved by ML. The RT workflow is divided into seven stages: patient evaluation, simulation, contouring, planning, quality control, treatment application, and patient follow-up. A systematic evaluation of the applicability, limitations, and advantages of AI algorithms has been done for each stage.
Collapse
Affiliation(s)
- Melek Yakar
- Department of Radiation Oncology, Eskisehir Osmangazi University Faculty of Medicine, Eskisehir 26040, Turkey
- Center of Research and Application for Computer Aided Diagnosis and Treatment in Health, Eskisehir Osmangazi University, Eskisehir 26040, Turkey
| | - Durmus Etiz
- Department of Radiation Oncology, Eskisehir Osmangazi University Faculty of Medicine, Eskisehir 26040, Turkey
- Center of Research and Application for Computer Aided Diagnosis and Treatment in Health, Eskisehir Osmangazi University, Eskisehir 26040, Turkey
| |
Collapse
|
23
|
Li J, Udupa JK, Tong Y, Wang L, Torigian DA. Segmentation evaluation with sparse ground truth data: Simulating true segmentations as perfect/imperfect as those generated by humans. Med Image Anal 2021; 69:101980. [PMID: 33588116 PMCID: PMC7933105 DOI: 10.1016/j.media.2021.101980] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2020] [Revised: 01/19/2021] [Accepted: 01/20/2021] [Indexed: 10/22/2022]
Abstract
Fully annotated data sets play important roles in medical image segmentation and evaluation. Expense and imprecision are the two main issues in generating ground truth (GT) segmentations. In this paper, in an attempt to overcome these two issues jointly, we propose a method, named SparseGT, which exploits variability among human segmenters to maximally save manual workload in GT generation for evaluating actual segmentations by algorithms. Pseudo ground truth (p-GT) segmentations are created with only a small fraction of the workload and with human-level perfection/imperfection, and they can be used in practice as a substitute for fully manual GT in evaluating segmentation algorithms at the same precision. p-GT segmentations are generated by first selecting slices sparsely, conducting manual contouring only on these sparse slices, and subsequently filling in segmentations on the other slices automatically. By creating p-GT with different levels of sparseness, we determine the largest workload reduction achievable for each considered object such that the variability of the generated p-GT is statistically indistinguishable from inter-segmenter differences in fully manual GT segmentations for that object. Furthermore, we investigate the segmentation evaluation errors introduced by variability in manual GT by applying p-GT in the evaluation of actual segmentations by an algorithm. Experiments were conducted on ∼500 computed tomography (CT) studies involving six objects in two body regions, Head & Neck and Thorax, where the optimal sparseness and corresponding evaluation errors were determined for each object and each strategy. Our results indicate that creating p-GT by the concatenated strategy of uniformly selecting sparse slices and filling in segmentations via a deep-learning (DL) network shows the highest manual workload reduction, ∼80-96%, without sacrificing evaluation accuracy compared to fully manual GT. Nevertheless, other strategies also make clear contributions in different situations. A non-uniform strategy for slice selection shows its advantage for objects with irregular shape change from slice to slice. An interpolation strategy for filling in segmentations can achieve ∼60-90% workload reduction in simulating human-level GT without the need for an actual training stage and shows potential for enlarging data sets for training p-GT generation networks. We conclude that not only is over 90% reduction in workload feasible without sacrificing evaluation accuracy, but also that the suitable strategy and the optimal sparseness level achievable for creating p-GT are object- and application-specific.
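The interpolation strategy for filling segmentations between sparsely contoured slices can be illustrated with signed-distance-map interpolation; this is one common choice, not necessarily the exact filling method used in the paper:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_dist(mask):
    """Signed distance map: negative inside the contour, positive outside."""
    inside = distance_transform_edt(mask)
    outside = distance_transform_edt(~mask)
    return outside - inside

def fill_between(mask_lo, mask_hi, n_missing):
    """Interpolate n_missing slices between two manually contoured slices."""
    d_lo, d_hi = signed_dist(mask_lo), signed_dist(mask_hi)
    filled = []
    for k in range(1, n_missing + 1):
        w = k / (n_missing + 1)  # fractional position between the two slices
        filled.append(((1 - w) * d_lo + w * d_hi) < 0)  # zero level set
    return filled

# Toy example: circles of different radii two slices apart.
y, x = np.ogrid[:64, :64]
lo = (y - 32) ** 2 + (x - 32) ** 2 < 10 ** 2
hi = (y - 32) ** 2 + (x - 32) ** 2 < 16 ** 2
mid = fill_between(lo, hi, n_missing=1)[0]
print(lo.sum(), mid.sum(), hi.sum())  # interpolated area lies in between
```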
Collapse
Affiliation(s)
- Jieyu Li
- Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, 800 Dongchuan RD, Shanghai, 200240, China; Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard building, 3710 Hamilton Walk, Philadelphia, PA, 19104, United States
| | - Jayaram K Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard building, 3710 Hamilton Walk, Philadelphia, PA, 19104, United States.
| | - Yubing Tong
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard building, 3710 Hamilton Walk, Philadelphia, PA, 19104, United States
| | - Lisheng Wang
- Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, 800 Dongchuan RD, Shanghai, 200240, China
| | - Drew A Torigian
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard building, 3710 Hamilton Walk, Philadelphia, PA, 19104, United States
| |
Collapse
|
24
|
Kieselmann JP, Fuller CD, Gurney-Champion OJ, Oelfke U. Cross-modality deep learning: Contouring of MRI data from annotated CT data only. Med Phys 2021; 48:1673-1684. [PMID: 33251619 PMCID: PMC8058228 DOI: 10.1002/mp.14619] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2019] [Revised: 08/03/2020] [Accepted: 11/02/2020] [Indexed: 12/13/2022] Open
Abstract
PURPOSE Online adaptive radiotherapy would greatly benefit from the development of reliable auto-segmentation algorithms for organs-at-risk and radiation targets. Current practice of manual segmentation is subjective and time-consuming. While deep learning-based algorithms offer ample opportunities to solve this problem, they typically require large datasets. However, medical imaging data are generally sparse, in particular annotated MR images for radiotherapy. In this study, we developed a method to exploit the wealth of publicly available, annotated CT images to generate synthetic MR images, which could then be used to train a convolutional neural network (CNN) to segment the parotid glands on MR images of head and neck cancer patients. METHODS Imaging data comprised 202 annotated CT and 27 annotated MR images. The unpaired CT and MR images were fed into a 2D CycleGAN network to generate synthetic MR images from the CT images. Annotations of axial slices of the synthetic images were generated by propagating the CT contours. These were then used to train a 2D CNN. We assessed the segmentation accuracy using the real MR images as the test dataset. The accuracy was quantified with the 3D Dice similarity coefficient (DSC), Hausdorff distance (HD), and mean surface distance (MSD) between manual and auto-generated contours. We benchmarked the approach by comparison to the interobserver variation determined for the real MR images, as well as to the accuracy when training the 2D CNN to segment the CT images. RESULTS The determined accuracy (DSC: 0.77±0.07, HD: 18.04±12.59 mm, MSD: 2.51±1.47 mm) was close to the interobserver variation (DSC: 0.84±0.06, HD: 10.85±5.74 mm, MSD: 1.50±0.77 mm), as well as to the accuracy when training the 2D CNN to segment the CT images (DSC: 0.81±0.07, HD: 13.00±7.61 mm, MSD: 1.87±0.84 mm). CONCLUSIONS The introduced cross-modality learning technique can be of great value for segmentation problems with sparse training data. We anticipate using this method with any nonannotated MRI dataset to generate annotated synthetic MR images of the same type via image style transfer from annotated CT images. Furthermore, as this technique allows for fast adaptation of annotated datasets from one imaging modality to another, it could prove useful for translating between large varieties of MRI contrasts due to differences in imaging protocols within and between institutions.
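The core of the CycleGAN objective used for unpaired CT-to-MR style transfer combines an adversarial term with a cycle-consistency term. A minimal PyTorch sketch with tiny stand-in networks (the paper's actual 2D CycleGAN is much deeper):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def tiny_net(in_ch, out_ch):
    """Stand-in network; real CycleGAN generators/discriminators are deeper."""
    return nn.Sequential(nn.Conv2d(in_ch, 8, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(8, out_ch, 3, padding=1))

g_ct2mr, g_mr2ct, d_mr = tiny_net(1, 1), tiny_net(1, 1), tiny_net(1, 1)

def generator_loss(ct, lam=10.0):
    """One CT->MR generator term: least-squares adversarial loss plus
    cycle-consistency L1 loss. The MR->CT direction is symmetric."""
    fake_mr = g_ct2mr(ct)
    score = d_mr(fake_mr)                       # patch-wise realism scores
    adv = F.mse_loss(score, torch.ones_like(score))
    cyc = F.l1_loss(g_mr2ct(fake_mr), ct)       # CT -> sMR -> CT should be CT
    return adv + lam * cyc

loss = generator_loss(torch.randn(2, 1, 64, 64))
loss.backward()
```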
Collapse
Affiliation(s)
- Jennifer P. Kieselmann
- Joint Department of Physics, The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, London SM2 5NG, UK
| | - Clifton D. Fuller
- Department of Radiation Oncology, The University of Texas M. D. Anderson Cancer Center, Houston, Texas 77030, USA
| | - Oliver J. Gurney-Champion
- Joint Department of Physics, The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, London SM2 5NG, UK
| | - Uwe Oelfke
- Joint Department of Physics, The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, London SM2 5NG, UK
| |
Collapse
|
25
|
Cardenas CE, Beadle BM, Garden AS, Skinner HD, Yang J, Rhee DJ, McCarroll RE, Netherton TJ, Gay SS, Zhang L, Court LE. Generating High-Quality Lymph Node Clinical Target Volumes for Head and Neck Cancer Radiation Therapy Using a Fully Automated Deep Learning-Based Approach. Int J Radiat Oncol Biol Phys 2021; 109:801-812. [PMID: 33068690 PMCID: PMC9472456 DOI: 10.1016/j.ijrobp.2020.10.005] [Citation(s) in RCA: 38] [Impact Index Per Article: 12.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2020] [Revised: 08/12/2020] [Accepted: 10/06/2020] [Indexed: 12/17/2022]
Abstract
PURPOSE To develop a deep learning model that generates consistent, high-quality lymph node clinical target volume (CTV) contours for head and neck cancer (HNC) patients, as an integral part of a fully automated radiation treatment planning workflow. METHODS AND MATERIALS Computed tomography (CT) scans from 71 HNC patients were retrospectively collected and split into training (n = 51), cross-validation (n = 10), and test (n = 10) data sets. All had target volume delineations covering lymph node levels Ia through V (Ia-V), Ib through V (Ib-V), II through IV (II-IV), and retropharyngeal (RP) nodes, previously approved by a radiation oncologist specializing in HNC. Volumes of interest (VOIs) about the nodal levels were automatically identified using computer vision techniques. The VOI (cropped CT image) and approved contours were used to train a U-Net autosegmentation model. Each lymph node level was trained independently, with model parameters optimized by assessing performance on the cross-validation data set. Once optimal model parameters were identified, overlap and distance metrics were calculated between ground truth and autosegmentations on the test set. Lastly, this final model was used on 32 additional patient scans (not included in the original 71 cases), and the autosegmentations were visually rated by 3 radiation oncologists as being "clinically acceptable without requiring edits," "requiring minor edits," or "requiring major edits." RESULTS When comparing ground truths to autosegmentations on the test data set, median Dice similarity coefficients were 0.90, 0.90, 0.89, and 0.81, and median mean surface distance values were 1.0 mm, 1.0 mm, 1.1 mm, and 1.3 mm for node levels Ia-V, Ib-V, II-IV, and RP nodes, respectively. Qualitative scoring varied among physicians. Overall, 99% of autosegmented target volumes were scored as either clinically acceptable or requiring minor edits (ie, stylistic recommendations, <2 minutes). CONCLUSIONS We developed a fully automated artificial intelligence approach to autodelineate nodal CTVs for patients with intact HNC. Most autosegmentations were found to be clinically acceptable after qualitative review when considering recommended stylistic edits. This promising work automatically delineates nodal CTVs in a robust and consistent manner; the approach can be implemented in ongoing efforts for fully automated radiation treatment planning.
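The VOI step — locating each nodal level and cropping the CT around it before segmentation — can be sketched as a bounding-box crop with a safety margin; the margins and coarse localization below are illustrative, not the paper's computer-vision pipeline:

```python
import numpy as np

def crop_voi(ct, coarse_mask, margin=(8, 16, 16)):
    """Crop a volume of interest around a coarse localization mask.

    A simplified stand-in for the VOI step: take the bounding box of the
    localized region plus a safety margin, and feed only that crop to the
    level-specific U-Net.
    """
    zz, yy, xx = np.nonzero(coarse_mask)
    lo = [max(0, int(v.min()) - m) for v, m in zip((zz, yy, xx), margin)]
    hi = [min(s, int(v.max()) + m + 1)
          for v, m, s in zip((zz, yy, xx), margin, ct.shape)]
    sl = tuple(slice(a, b) for a, b in zip(lo, hi))
    return ct[sl], sl            # keep the slices to map contours back

ct = np.random.randn(100, 256, 256).astype(np.float32)
coarse = np.zeros(ct.shape, dtype=bool)
coarse[40:60, 100:150, 90:165] = True       # hypothetical nodal-level region
voi, sl = crop_voi(ct, coarse)
print(voi.shape)                 # much smaller input for the segmentation net
```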
Collapse
Affiliation(s)
- Carlos E Cardenas
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas.
| | - Beth M Beadle
- Department of Radiation Oncology, Stanford University, Palo Alto, California
| | - Adam S Garden
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, Texas
| | - Heath D Skinner
- Department of Radiation Oncology, University of Pittsburgh, Pittsburgh, Pennsylvania
| | - Jinzhong Yang
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas
| | - Dong Joo Rhee
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas
| | - Rachel E McCarroll
- Department of Radiation Oncology, University of Maryland Medical System, Baltimore, Maryland
| | - Tucker J Netherton
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas
| | - Skylar S Gay
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas
| | - Lifei Zhang
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas
| | - Laurence E Court
- Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas
| |
Collapse
|
26
|
Vinas L, Scholey J, Descovich M, Kearney V, Sudhyadhom A. Improved contrast and noise of megavoltage computed tomography (MVCT) through cycle-consistent generative machine learning. Med Phys 2021; 48:676-690. [PMID: 33232526 PMCID: PMC8743188 DOI: 10.1002/mp.14616] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2020] [Revised: 09/15/2020] [Accepted: 11/12/2020] [Indexed: 01/11/2023] Open
Abstract
PURPOSE Megavoltage computed tomography (MVCT) has been implemented on many radiation therapy treatment machines as a tomographic imaging modality that allows for three-dimensional visualization and localization of patient anatomy. Yet MVCT images exhibit lower contrast and greater noise than their kilovoltage CT (kVCT) counterparts. In this work, we sought to improve on these disadvantages of MVCT images through an image-to-image machine learning transformation between MVCT and kVCT images. We demonstrate that by learning the style of kVCT images, MVCT images can be converted into high-quality synthetic kVCT (skVCT) images with higher contrast and lower noise when compared to the original MVCT. METHODS Kilovoltage CT and MVCT images of 120 head and neck (H&N) cancer patients treated on an Accuray TomoHD system were retrospectively analyzed in this study. A cycle-consistent generative adversarial network (CycleGAN), a variant of the generative adversarial network (GAN), was used to learn Hounsfield unit (HU) transformations from MVCT to kVCT images, creating skVCT images. A formal mathematical proof is given describing the interplay between function sensitivity and input noise and how it applies to the error variance of a high-capacity function trained with noisy input data. Finally, we show how skVCT shares distributional similarity with kVCT for various macro-structures found in the body. RESULTS Signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were improved in skVCT images relative to the original MVCT images and were consistent with kVCT images. Specifically, skVCT CNR for muscle-fat, bone-fat, and bone-muscle improved to 14.8 ± 0.4, 122.7 ± 22.6, and 107.9 ± 22.4, compared with 1.6 ± 0.3, 7.6 ± 1.9, and 6.0 ± 1.7, respectively, in the original MVCT images, and was more consistent with the kVCT CNR values of 15.2 ± 0.8, 124.9 ± 27.0, and 109.7 ± 26.5, respectively. Noise was significantly reduced in skVCT images, with SNR values improving by roughly an order of magnitude and consistent with kVCT SNR values. Axial slice mean error (S-ME) and mean absolute error (S-MAE) agreement between kVCT and MVCT/skVCT improved, on average, from -16.0 and 109.1 HU to 8.4 and 76.9 HU, respectively. CONCLUSIONS A kVCT-like qualitative aid was generated from input MVCT data through a CycleGAN instance. This qualitative aid, skVCT, is robust toward embedded metallic material, dramatically improves HU alignment relative to MVCT, and appears perceptually similar to kVCT, with SNR and CNR values equivalent to those of kVCT images.
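SNR and CNR, the image-quality metrics reported above, reduce to simple ROI statistics. A small sketch using one common CNR definition (the paper's exact noise term may differ):

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of a uniform region: mean over standard deviation."""
    return roi.mean() / roi.std()

def cnr(roi_a, roi_b):
    """Contrast-to-noise ratio between two tissue ROIs (e.g., muscle vs. fat).

    One common definition; treat the pooled noise term as illustrative.
    """
    noise = np.sqrt((roi_a.std() ** 2 + roi_b.std() ** 2) / 2)
    return abs(roi_a.mean() - roi_b.mean()) / noise

rng = np.random.default_rng(0)
muscle = rng.normal(40, 12, 1000)    # noisy MVCT-like HU samples
fat    = rng.normal(-90, 12, 1000)
print(round(cnr(muscle, fat), 1))    # improves as the noise term shrinks
```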
Collapse
Affiliation(s)
- Luciano Vinas
- Department of Physics, University of California Berkeley, Berkeley, California, 94720
- Department of Radiation Oncology, University of California San Francisco, San Francisco, California 94143
| | - Jessica Scholey
- Department of Radiation Oncology, University of California San Francisco, San Francisco, California 94143
| | - Martina Descovich
- Department of Radiation Oncology, University of California San Francisco, San Francisco, California 94143
| | - Vasant Kearney
- Department of Radiation Oncology, University of California San Francisco, San Francisco, California 94143
| | - Atchar Sudhyadhom
- Department of Radiation Oncology, University of California San Francisco, San Francisco, California 94143
| |
Collapse
|
27
|
Men K, Chen X, Yang B, Zhu J, Yi J, Wang S, Li Y, Dai J. Automatic segmentation of three clinical target volumes in radiotherapy using lifelong learning. Radiother Oncol 2021; 157:1-7. [PMID: 33418008 DOI: 10.1016/j.radonc.2020.12.034] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2020] [Revised: 12/21/2020] [Accepted: 12/22/2020] [Indexed: 10/22/2022]
Abstract
BACKGROUND AND PURPOSE Convolutional neural networks (CNNs) have achieved performance comparable to that of humans in automatic segmentation. An important challenge that CNNs face in segmentation is catastrophic forgetting: they lose performance on previously learned tasks when trained on a new task. In this study, we propose a lifelong learning method to learn multiple segmentation tasks continuously without forgetting previous tasks. MATERIALS AND METHODS The cohort included three tumor types: 800 patients with nasopharyngeal cancer (NPC), 800 patients with breast cancer, and 800 patients with rectal cancer. The tasks were segmentation of the clinical target volume (CTV) for these three cancers. The proposed lifelong learning network adopted a dilation adapter to learn the three segmentation tasks one by one. Only the newly added dilation adapter (seven layers) was fine-tuned for each incoming new task, whereas all the other learned layers were frozen. RESULTS Compared with single-task, multi-task or transfer learning, the proposed lifelong learning achieved better or comparable segmentation accuracy, with a DSC of 0.86 for NPC, 0.89 for breast cancer, and 0.87 for rectal cancer. Lifelong learning can avoid forgetting in sequential learning and yields good performance with less training data. Furthermore, it is more efficient than single-task or transfer learning, reducing the number of parameters, size of model, and training time by ~58.8%, ~55.6%, and ~25.0%, respectively. CONCLUSION The proposed method preserved the knowledge of previous tasks while learning a new one using a dilation adapter. It can yield comparable performance with much less training data, fewer model parameters, and less training time.
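The adapter idea — freeze everything already learned and train only a small dilated-convolution block per new task — can be sketched in PyTorch as follows; the stand-in backbone and adapter below are illustrative, not the paper's architecture:

```python
import torch
import torch.nn as nn

# Minimal sketch of the dilation-adapter idea: a small dilated-convolution
# block is added for each new task and trained while every previously
# learned layer stays frozen.
backbone = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
adapter_rectal = nn.Sequential(                      # new task: rectal CTV
    nn.Conv2d(16, 16, 3, padding=2, dilation=2), nn.ReLU(),
    nn.Conv2d(16, 1, 1))                             # per-voxel CTV logit

for p in backbone.parameters():                      # freeze old knowledge
    p.requires_grad = False

opt = torch.optim.Adam(adapter_rectal.parameters(), lr=1e-3)
x = torch.randn(2, 1, 64, 64)
loss = adapter_rectal(backbone(x)).mean()            # dummy loss for the demo
loss.backward()
opt.step()                                           # only adapter weights move
```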
Collapse
Affiliation(s)
- Kuo Men
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Xinyuan Chen
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Bining Yang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Ji Zhu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Junlin Yi
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Shulian Wang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Yexiong Li
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
| | - Jianrong Dai
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China.
| |
Collapse
|
28
|
Zhang S, Wang H, Tian S, Zhang X, Li J, Lei R, Gao M, Liu C, Yang L, Bi X, Zhu L, Zhu S, Xu T, Yang R. A slice classification model-facilitated 3D encoder-decoder network for segmenting organs at risk in head and neck cancer. J Radiat Res 2021; 62:94-103. [PMID: 33029634 PMCID: PMC7779351 DOI: 10.1093/jrr/rraa094] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/05/2020] [Revised: 05/30/2020] [Indexed: 06/06/2023]
Abstract
For deep learning networks used to segment organs at risk (OARs) in head and neck (H&N) cancers, the class-imbalance problem between small-volume OARs and whole computed tomography (CT) images results in delineations with serious false positives on irrelevant slices and unnecessary, time-consuming calculations. To alleviate this problem, a slice classification model-facilitated 3D encoder-decoder network was developed and validated. In the developed two-step segmentation model, a slice classification model was first utilized to classify CT slices into six categories in the craniocaudal direction. The slices in the target categories for each OAR were then pushed to the corresponding 3D encoder-decoder segmentation network. All the patients were divided into training (n = 120), validation (n = 30) and testing (n = 20) datasets. The average accuracy of the slice classification model was 95.99%. The Dice similarity coefficient and 95% Hausdorff distance, respectively, for each OAR were as follows: right eye (0.88 ± 0.03 and 1.57 ± 0.92 mm), left eye (0.89 ± 0.03 and 1.35 ± 0.43 mm), right optic nerve (0.72 ± 0.09 and 1.79 ± 1.01 mm), left optic nerve (0.73 ± 0.09 and 1.60 ± 0.71 mm), brainstem (0.87 ± 0.04 and 2.28 ± 0.99 mm), right temporal lobe (0.81 ± 0.12 and 3.28 ± 2.27 mm), left temporal lobe (0.82 ± 0.09 and 3.73 ± 2.08 mm), right temporomandibular joint (0.70 ± 0.13 and 1.79 ± 0.79 mm), left temporomandibular joint (0.70 ± 0.16 and 1.98 ± 1.48 mm), mandible (0.89 ± 0.02 and 1.66 ± 0.51 mm), right parotid (0.77 ± 0.07 and 7.30 ± 4.19 mm) and left parotid (0.71 ± 0.12 and 8.41 ± 4.84 mm). The total segmentation time was 40.13 s. The 3D encoder-decoder network facilitated by the slice classification model demonstrated superior performance in accuracy and efficiency in segmenting OARs in H&N CT images. This may significantly reduce the workload for radiation oncologists.
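The routing logic of the two-step model — classify slices by craniocaudal category, then feed each OAR-specific network only its relevant slab — can be sketched as follows; the category labels and the OAR-to-category map are illustrative, not the paper's:

```python
# Sketch of the routing step: a slice classifier labels each CT slice with a
# craniocaudal category (six in the paper), and each OAR's 3D network then
# receives only the slab of slices relevant to it.
OAR_CATEGORIES = {"eyes": {1}, "brainstem": {2, 3}, "parotids": {3, 4}}

def route_slices(slice_labels, oar):
    """Return the indices of slices to feed the OAR-specific network."""
    wanted = OAR_CATEGORIES[oar]
    return [i for i, lab in enumerate(slice_labels) if lab in wanted]

labels = [1] * 20 + [2] * 25 + [3] * 30 + [4] * 25 + [5] * 40  # classifier output
kept = route_slices(labels, "parotids")
print(f"{len(kept)} of {len(labels)} slices kept for the parotid network")
```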
Collapse
Affiliation(s)
- Shuming Zhang
- Department of Radiation Oncology, Peking University Third Hospital, Beijing, China
| | - Hao Wang
- Department of Radiation Oncology, Peking University Third Hospital, Beijing, China
| | - Suqing Tian
- Department of Radiation Oncology, Peking University Third Hospital, Beijing, China
| | - Xuyang Zhang
- Department of Radiation Oncology, Peking University Third Hospital, Beijing, China
- Cancer Center, Beijing Luhe Hospital, Capital Medical University, Beijing, China
| | - Jiaqi Li
- Department of Radiation Oncology, Peking University Third Hospital, Beijing, China
- Department of Emergency, Beijing Children’s Hospital, Capital Medical University, Beijing, China
| | - Runhong Lei
- Department of Radiation Oncology, Peking University Third Hospital, Beijing, China
| | - Mingze Gao
- Beijing Linking Medical Technology Co., Ltd, Beijing, China
| | - Chunlei Liu
- Beijing Linking Medical Technology Co., Ltd, Beijing, China
| | - Li Yang
- Beijing Linking Medical Technology Co., Ltd, Beijing, China
| | - Xinfang Bi
- Beijing Linking Medical Technology Co., Ltd, Beijing, China
| | - Linlin Zhu
- Beijing Linking Medical Technology Co., Ltd, Beijing, China
| | - Senhua Zhu
- Beijing Linking Medical Technology Co., Ltd, Beijing, China
| | - Ting Xu
- Institute of Science and Technology Development, Beijing University of Posts and Telecommunications, Beijing, China
| | - Ruijie Yang
- Department of Radiation Oncology, Peking University Third Hospital, Beijing, China
| |
Collapse
|
29
|
Men K, Chen X, Zhu J, Yang B, Zhang Y, Yi J, Dai J. Continual improvement of nasopharyngeal carcinoma segmentation with less labeling effort. Phys Med 2020; 80:347-351. [PMID: 33271391 DOI: 10.1016/j.ejmp.2020.11.005] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/04/2020] [Revised: 09/22/2020] [Accepted: 11/02/2020] [Indexed: 12/12/2022] Open
Abstract
PURPOSE Convolutional neural networks (CNNs) offer a promising approach to automated segmentation. However, labeling contours on a large scale is laborious. Here we propose a method to improve segmentation continually with less labeling effort. METHODS The cohort included 600 patients with nasopharyngeal carcinoma. The proposed method comprised four steps. First, an initial CNN model was trained from scratch to perform segmentation of the clinical target volume. Second, a binary classifier was trained using a secondary CNN to identify samples for which the initial model gave a Dice similarity coefficient (DSC) < 0.85. Third, the classifier was used to select such samples from newly arriving data. Fourth, the final model was fine-tuned from the initial model using only the selected samples. RESULTS The classifier detected poor segmentations by the initial model with an accuracy of 92%. The proposed segmentation method improved the DSC from 0.82 to 0.86 while reducing the labeling effort by 45%. CONCLUSIONS The proposed method reduces the amount of labeled training data required and improves segmentation by continually acquiring, fine-tuning, and transferring knowledge over long time spans.
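The data-selection step can be sketched as a simple thresholding loop: a secondary classifier estimates, per new case, the probability that the current model would score DSC < 0.85, and only predicted-hard cases are sent for labeling (clf_prob below is a hypothetical stand-in for the CNN classifier):

```python
import numpy as np

rng = np.random.default_rng(1)

def clf_prob(case):
    """Hypothetical stand-in for the secondary CNN: probability that the
    current model would score DSC < 0.85 on this case."""
    return rng.uniform()

# Step 3 of the method: keep only predicted-hard cases for labeling, then
# fine-tune the initial model on them (fine-tuning itself omitted here).
new_cases = [f"case{i:03d}" for i in range(200)]
selected = [c for c in new_cases if clf_prob(c) > 0.5]
saved = 100 * (1 - len(selected) / len(new_cases))
print(f"label {len(selected)} of {len(new_cases)} cases ({saved:.0f}% saved)")
```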
Collapse
Affiliation(s)
- Kuo Men
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
| | - Xinyuan Chen
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
| | - Ji Zhu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
| | - Bining Yang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
| | - Ye Zhang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
| | - Junlin Yi
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
| | - Jianrong Dai
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China.
| |
Collapse
|
30
|
Wang Z, Chang Y, Peng Z, Lv Y, Shi W, Wang F, Pei X, Xu XG. Evaluation of deep learning-based auto-segmentation algorithms for delineating clinical target volume and organs at risk involving data for 125 cervical cancer patients. J Appl Clin Med Phys 2020; 21:272-279. [PMID: 33238060 PMCID: PMC7769393 DOI: 10.1002/acm2.13097] [Citation(s) in RCA: 32] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2020] [Revised: 10/03/2020] [Accepted: 10/21/2020] [Indexed: 12/15/2022] Open
Abstract
Objective To evaluate the accuracy of a deep learning-based auto-segmentation model against manual contouring by one medical resident, where both entities tried to mimic the delineation "habits" of the same senior physician. Methods This study included 125 cervical cancer patients whose clinical target volumes (CTVs) and organs at risk (OARs) were delineated by the same senior physician. Of these 125 cases, 100 were used for model training and the remaining 25 for model testing. In addition, the medical resident, instructed by the senior physician for approximately 8 months, delineated the CTVs and OARs for the testing cases. The Dice similarity coefficient (DSC) and the Hausdorff distance (HD) were used to evaluate the delineation accuracy for the CTV, bladder, rectum, small intestine, femoral-head-left, and femoral-head-right. Results The DSC values of the auto-segmentation model and manual contouring by the resident were, respectively, 0.86 and 0.83 for the CTV (P < 0.05), 0.91 and 0.91 for the bladder (P > 0.05), 0.88 and 0.84 for the femoral-head-right (P < 0.05), 0.88 and 0.84 for the femoral-head-left (P < 0.05), 0.86 and 0.81 for the small intestine (P < 0.05), and 0.81 and 0.84 for the rectum (P > 0.05). The HD (mm) values were, respectively, 14.84 and 18.37 for the CTV (P < 0.05), 7.82 and 7.63 for the bladder (P > 0.05), 6.18 and 6.75 for the femoral-head-right (P > 0.05), 6.17 and 6.31 for the femoral-head-left (P > 0.05), 22.21 and 26.70 for the small intestine (P > 0.05), and 7.04 and 6.13 for the rectum (P > 0.05). The auto-segmentation model took approximately 2 min to delineate the CTV and OARs while the resident took approximately 90 min to complete the same task. Conclusion The auto-segmentation model was as accurate as the medical resident but with much better efficiency in this study. Furthermore, the auto-segmentation approach offers the additional perceivable advantages of being consistent and ever improving when compared with manual approaches.
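The two evaluation metrics used throughout — DSC and HD — are straightforward to compute from binary masks. A minimal sketch using SciPy (clinical reports often prefer the 95th-percentile HD variant):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2 * (a & b).sum() / (a.sum() + b.sum())

def hausdorff(a, b, spacing=(1.0, 1.0)):
    """Symmetric Hausdorff distance (mm) between two binary masks.

    Point sets are the voxel coordinates scaled by the pixel spacing.
    """
    pa = np.argwhere(a) * spacing
    pb = np.argwhere(b) * spacing
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

y, x = np.ogrid[:64, :64]
gt   = (y - 32) ** 2 + (x - 32) ** 2 < 12 ** 2
pred = (y - 30) ** 2 + (x - 33) ** 2 < 12 ** 2
print(round(dice(gt, pred), 3), round(hausdorff(gt, pred), 2))
```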
Collapse
Affiliation(s)
- Zhi Wang
- Center of Radiological Medical Physics, University of Science and Technology of China, Hefei, China.,Department of Radiation Oncology, First Affiliated Hospital of Anhui Medical University, Hefei, China
| | - Yankui Chang
- Center of Radiological Medical Physics, University of Science and Technology of China, Hefei, China
| | - Zhao Peng
- Center of Radiological Medical Physics, University of Science and Technology of China, Hefei, China
| | - Yin Lv
- Department of Radiation Oncology, First Affiliated Hospital of Anhui Medical University, Hefei, China
| | - Weijiong Shi
- Department of Radiation Oncology, First Affiliated Hospital of Anhui Medical University, Hefei, China
| | - Fan Wang
- Department of Radiation Oncology, First Affiliated Hospital of Anhui Medical University, Hefei, China
| | - Xi Pei
- Center of Radiological Medical Physics, University of Science and Technology of China, Hefei, China.,Anhui Wisdom Technology Co., Ltd., Hefei, Anhui, China
| | - X George Xu
- Center of Radiological Medical Physics, University of Science and Technology of China, Hefei, China
| |
Collapse
|
31
|
Overview of artificial intelligence-based applications in radiotherapy: Recommendations for implementation and quality assurance. Radiother Oncol 2020; 153:55-66. [PMID: 32920005 DOI: 10.1016/j.radonc.2020.09.008] [Citation(s) in RCA: 151] [Impact Index Per Article: 37.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2020] [Revised: 09/02/2020] [Accepted: 09/03/2020] [Indexed: 02/06/2023]
Abstract
Artificial intelligence (AI) is currently being introduced into different domains, including medicine. Specifically, in radiation oncology, machine learning models allow automation and optimization of the workflow. A lack of knowledge and interpretability of these AI models can hold back widespread and full deployment into clinical practice. To facilitate the integration of AI models in the radiotherapy workflow, generally applicable recommendations on implementation and quality assurance (QA) of AI models are presented. For commonly used applications in radiotherapy, such as auto-segmentation, automated treatment planning and synthetic computed tomography (sCT), the basic concepts are discussed in depth. Emphasis is put on the commissioning, implementation, and case-specific and routine QA of AI models needed for a methodical introduction into clinical practice.
Collapse
|
32
|
Sultana S, Robinson A, Song DY, Lee J. Automatic multi-organ segmentation in computed tomography images using hierarchical convolutional neural network. J Med Imaging (Bellingham) 2020; 7:055001. [PMID: 33102622 DOI: 10.1117/1.jmi.7.5.055001] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Received: 04/13/2020] [Accepted: 09/28/2020] [Indexed: 01/17/2023]
Abstract
Purpose: Accurate segmentation of treatment planning computed tomography (CT) images is important for radiation therapy (RT) planning. However, low soft tissue contrast in CT makes the segmentation task challenging. We propose a two-step hierarchical convolutional neural network (CNN) segmentation strategy to automatically segment multiple organs from CT. Approach: The first step generates a coarse segmentation, from which organ-specific regions of interest (ROIs) are produced. The second step produces detailed segmentation of each organ. The ROIs are generated using UNet, which automatically identifies the area of each organ and improves computational efficiency by eliminating irrelevant background information. For the fine segmentation step, we combined UNet with a generative adversarial network. The generator is designed as a UNet that is trained to segment organ structures and the discriminator is a fully convolutional network, which distinguishes whether the segmentation is real or generator-predicted, thus improving the segmentation accuracy. We validated the proposed method on male pelvic and head and neck (H&N) CTs used for RT planning of prostate and H&N cancer, respectively. For the pelvic structure segmentation, the network was trained to segment the prostate, bladder, and rectum. For H&N, the network was trained to segment the parotid glands (PG) and submandibular glands (SMG). Results: The trained segmentation networks were tested on 15 pelvic and 20 H&N independent datasets. The H&N segmentation network was also tested on a public domain dataset (N = 38) and showed similar performance. The average Dice similarity coefficients (mean ± SD) of the pelvic structures were 0.91 ± 0.05 (prostate), 0.95 ± 0.06 (bladder), and 0.90 ± 0.09 (rectum), and those of the H&N structures were 0.87 ± 0.04 (PG) and 0.86 ± 0.05 (SMG). The segmentation for each CT takes <10 s on average. Conclusions: Experimental results demonstrate that the proposed method can produce fast, accurate, and reproducible segmentation of multiple organs of different sizes and shapes, and show its potential to be applicable to different disease sites.
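The fine-segmentation step combines a U-Net-style generator loss with an adversarial term from a fully convolutional discriminator. A minimal PyTorch sketch with tiny stand-in networks and an assumed 0.1 adversarial weighting:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny stand-in networks; the paper pairs a UNet generator with a fully
# convolutional discriminator.
gen  = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                     nn.Conv2d(8, 1, 1))                 # organ logits
disc = nn.Sequential(nn.Conv2d(2, 8, 3, padding=1), nn.ReLU(),
                     nn.Conv2d(8, 1, 1))                 # real/fake score map

def generator_step(ct, gt):
    """Segmentation loss plus a term rewarding the generator for producing
    segmentations the discriminator judges 'real' (conditioned on the CT)."""
    pred = torch.sigmoid(gen(ct))
    seg = F.binary_cross_entropy(pred, gt)
    realism = disc(torch.cat([ct, pred], dim=1))
    adv = F.binary_cross_entropy_with_logits(realism, torch.ones_like(realism))
    return seg + 0.1 * adv

ct = torch.randn(2, 1, 64, 64)
gt = (torch.rand(2, 1, 64, 64) > 0.5).float()
generator_step(ct, gt).backward()
```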
Collapse
Affiliation(s)
- Sharmin Sultana
- Johns Hopkins University, Department of Radiation Oncology and Molecular Radiation Sciences, Baltimore, Maryland, United States
| | - Adam Robinson
- Johns Hopkins University, Department of Radiation Oncology and Molecular Radiation Sciences, Baltimore, Maryland, United States
| | - Daniel Y Song
- Johns Hopkins University, Department of Radiation Oncology and Molecular Radiation Sciences, Baltimore, Maryland, United States
| | - Junghoon Lee
- Johns Hopkins University, Department of Radiation Oncology and Molecular Radiation Sciences, Baltimore, Maryland, United States
| |
Collapse
|
33
|
Vrtovec T, Močnik D, Strojan P, Pernuš F, Ibragimov B. Auto-segmentation of organs at risk for head and neck radiotherapy planning: From atlas-based to deep learning methods. Med Phys 2020; 47:e929-e950. [PMID: 32510603 DOI: 10.1002/mp.14320] [Citation(s) in RCA: 80] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2019] [Revised: 05/27/2020] [Accepted: 05/29/2020] [Indexed: 02/06/2023] Open
Abstract
Radiotherapy (RT) is one of the basic treatment modalities for cancer of the head and neck (H&N), which requires a precise spatial description of the target volumes and organs at risk (OARs) to deliver a highly conformal radiation dose to the tumor cells while sparing the healthy tissues. For this purpose, target volumes and OARs have to be delineated and segmented from medical images. As manual delineation is a tedious and time-consuming task subject to intra/interobserver variability, computerized auto-segmentation has been developed as an alternative. The field of medical imaging and RT planning has experienced increased interest in the past decade, with new emerging trends that shifted the field of H&N OAR auto-segmentation from atlas-based to deep learning-based approaches. In this review, we systematically analyzed 78 relevant publications on auto-segmentation of OARs in the H&N region from 2008 to date, and provide critical discussions and recommendations from various perspectives: image modality - both computed tomography and magnetic resonance image modalities are being exploited, but the potential of the latter should be explored more in the future; OARs - the spinal cord, brainstem, and major salivary glands are the most studied OARs, but additional experiments should be conducted for several less studied soft tissue structures; image databases - several image databases with corresponding ground truth are currently available for methodology evaluation, but should be augmented with data from multiple observers and multiple institutions; methodology - current methods have shifted from atlas-based to deep learning auto-segmentation, which is expected to become even more sophisticated; ground truth - delineation guidelines should be followed, and the participation of multiple experts from multiple institutions is recommended; performance metrics - the Dice coefficient, as the standard volumetric overlap metric, should be accompanied by at least one distance metric and combined with clinical acceptability scores and risk assessments; segmentation performance - the best performing methods achieve clinically acceptable auto-segmentation for several OARs; however, the dosimetric impact should also be studied to provide clinically relevant endpoints for RT planning.
Collapse
Affiliation(s)
- Tomaž Vrtovec
- Faculty Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia
| | - Domen Močnik
- Faculty Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia
| | - Primož Strojan
- Institute of Oncology Ljubljana, Zaloška cesta 2, Ljubljana, SI-1000, Slovenia
| | - Franjo Pernuš
- Faculty Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia
| | - Bulat Ibragimov
- Faculty Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia.,Department of Computer Science, University of Copenhagen, Universitetsparken 1, Copenhagen, D-2100, Denmark
| |
Collapse
|
34
|
Kearney V, Chan JW, Wang T, Perry A, Descovich M, Morin O, Yom SS, Solberg TD. DoseGAN: a generative adversarial network for synthetic dose prediction using attention-gated discrimination and generation. Sci Rep 2020; 10:11073. [PMID: 32632116 PMCID: PMC7338467 DOI: 10.1038/s41598-020-68062-7] [Citation(s) in RCA: 35] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2019] [Accepted: 05/27/2020] [Indexed: 11/08/2022] Open
Abstract
Deep learning algorithms have recently been developed that utilize patient anatomy and raw imaging information to predict radiation dose, as a means to increase treatment planning efficiency and improve radiotherapy plan quality. Current state-of-the-art techniques rely on convolutional neural networks (CNNs) that use pixel-to-pixel loss to update network parameters. However, stereotactic body radiotherapy (SBRT) dose is often heterogeneous, making it difficult to model using pixel-level loss. Generative adversarial networks (GANs) utilize adversarial learning that incorporates image-level loss and is better suited to learning from heterogeneous labels. However, GANs are difficult to train and rely on compromised architectures to facilitate convergence. This study proposes an attention-gated generative adversarial network (DoseGAN) to improve learning, increase model complexity, and reduce network redundancy by focusing on relevant anatomy. DoseGAN was compared to alternative state-of-the-art dose prediction algorithms using the heterogeneity index, conformity index, and various dosimetric parameters. All algorithms were trained, validated, and tested using 141 prostate SBRT patients. DoseGAN predicted more realistic volumetric dosimetry than all alternative algorithms and achieved statistically significant improvement for the V100 and V120 of the PTV, the V60 of the rectum, and the heterogeneity index.
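Attention gating, the mechanism DoseGAN uses to focus discrimination and generation on relevant anatomy, can be sketched as an additive attention block (after Oktay et al.); a simplified single-scale version, not DoseGAN's exact module:

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate that re-weights feature maps so the network
    focuses on relevant anatomy; a simplified single-scale sketch."""
    def __init__(self, ch):
        super().__init__()
        self.wx = nn.Conv2d(ch, ch, 1)   # transform input features
        self.wg = nn.Conv2d(ch, ch, 1)   # transform gating signal
        self.psi = nn.Conv2d(ch, 1, 1)   # collapse to a spatial attention map

    def forward(self, x, g):
        a = torch.sigmoid(self.psi(torch.relu(self.wx(x) + self.wg(g))))
        return x * a                     # suppress irrelevant regions

gate = AttentionGate(8)
x = torch.randn(1, 8, 32, 32)            # features from one layer
g = torch.randn(1, 8, 32, 32)            # gating signal from a deeper layer
print(gate(x, g).shape)
```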
Affiliation(s)
- Vasant Kearney
- Department of Radiation Oncology, University of California, San Francisco, CA, 94115, USA
- Jason W Chan
- Department of Radiation Oncology, University of California, San Francisco, CA, 94115, USA
- Tianqi Wang
- Department of Radiation Oncology, University of California, San Francisco, CA, 94115, USA
- Alan Perry
- Department of Radiation Oncology, University of California, San Francisco, CA, 94115, USA
- Martina Descovich
- Department of Radiation Oncology, University of California, San Francisco, CA, 94115, USA
- Olivier Morin
- Department of Radiation Oncology, University of California, San Francisco, CA, 94115, USA
- Sue S Yom
- Department of Radiation Oncology, University of California, San Francisco, CA, 94115, USA
- Timothy D Solberg
- Department of Radiation Oncology, University of California, San Francisco, CA, 94115, USA
35
Kearney V, Ziemer BP, Perry A, Wang T, Chan JW, Ma L, Morin O, Yom SS, Solberg TD. Attention-Aware Discrimination for MR-to-CT Image Translation Using Cycle-Consistent Generative Adversarial Networks. Radiol Artif Intell 2020; 2:e190027. [PMID: 33937817 DOI: 10.1148/ryai.2020190027] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2019] [Revised: 11/18/2019] [Accepted: 11/25/2019] [Indexed: 11/11/2022]
Abstract
Purpose: To suggest an attention-aware, cycle-consistent generative adversarial network (A-CycleGAN) enhanced with variational autoencoding (VAE) as a superior alternative to current state-of-the-art MR-to-CT image translation methods. Materials and Methods: An attention-gating mechanism is incorporated into a discriminator network to encourage a more parsimonious use of network parameters, whereas VAE enhancement enables deeper discrimination architectures without inhibiting model convergence. Findings from 60 patients with head, neck, and brain cancer were used to train and validate A-CycleGAN, and findings from 30 patients were used as the holdout test set to report final evaluation metrics using mean absolute error (MAE) and peak signal-to-noise ratio (PSNR). Results: A-CycleGAN achieved superior results compared with U-Net, a generative adversarial network (GAN), and a cycle-consistent GAN. The A-CycleGAN averages, 95% confidence intervals (CIs), and Wilcoxon signed-rank two-sided test statistics are shown for MAE (19.61 [95% CI: 18.83, 20.39], P = .0104), structural similarity index metric (0.778 [95% CI: 0.758, 0.798], P = .0495), and PSNR (62.35 [95% CI: 61.80, 62.90], P = .0571). Conclusion: A-CycleGANs are a superior alternative to state-of-the-art MR-to-CT image translation methods. © RSNA, 2020.
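The cycle-consistency constraint at the heart of CycleGAN-style translators is compact enough to show directly. This is a minimal sketch of the round-trip L1 penalty, assuming two generator callables G_mr2ct and G_ct2mr and unpaired MR/CT batches; it is not the authors' A-CycleGAN implementation, and the weight lam is an illustrative default:

```python
# Cycle-consistency loss: after translating MR->CT->MR (and CT->MR->CT),
# the round trip should reproduce the input. Generator internals and the
# usual adversarial terms are omitted; names here are assumptions.
import torch
import torch.nn.functional as F

def cycle_consistency_loss(mr, ct, G_mr2ct, G_ct2mr, lam: float = 10.0):
    fake_ct = G_mr2ct(mr)      # translate MR to synthetic CT
    fake_mr = G_ct2mr(ct)      # translate CT to synthetic MR
    rec_mr = G_ct2mr(fake_ct)  # round trip back to MR
    rec_ct = G_mr2ct(fake_mr)  # round trip back to CT
    return lam * (F.l1_loss(rec_mr, mr) + F.l1_loss(rec_ct, ct))
```

This term is what allows training on unpaired MR and CT scans, since it never requires a voxelwise-aligned CT for a given MR.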
Affiliation(s)
- Vasant Kearney
- Department of Radiation Oncology, University of California, 1600 Divisadero St, San Francisco, CA 94115
- Benjamin P Ziemer
- Department of Radiation Oncology, University of California, 1600 Divisadero St, San Francisco, CA 94115
- Alan Perry
- Department of Radiation Oncology, University of California, 1600 Divisadero St, San Francisco, CA 94115
- Tianqi Wang
- Department of Radiation Oncology, University of California, 1600 Divisadero St, San Francisco, CA 94115
- Jason W Chan
- Department of Radiation Oncology, University of California, 1600 Divisadero St, San Francisco, CA 94115
- Lijun Ma
- Department of Radiation Oncology, University of California, 1600 Divisadero St, San Francisco, CA 94115
- Olivier Morin
- Department of Radiation Oncology, University of California, 1600 Divisadero St, San Francisco, CA 94115
- Sue S Yom
- Department of Radiation Oncology, University of California, 1600 Divisadero St, San Francisco, CA 94115
- Timothy D Solberg
- Department of Radiation Oncology, University of California, 1600 Divisadero St, San Francisco, CA 94115
36
El Naqa I, Haider MA, Giger ML, Ten Haken RK. Artificial Intelligence: reshaping the practice of radiological sciences in the 21st century. Br J Radiol 2020; 93:20190855. [PMID: 31965813 PMCID: PMC7055429 DOI: 10.1259/bjr.20190855] [Citation(s) in RCA: 47] [Impact Index Per Article: 11.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2019] [Revised: 01/12/2020] [Accepted: 01/13/2020] [Indexed: 12/15/2022] Open
Abstract
Advances in computing hardware and software platforms have led to the recent resurgence of artificial intelligence (AI), which now touches almost every aspect of our daily lives through its capability to automate complex tasks and provide superior predictive analytics. AI applications currently span many diverse fields, from economics to entertainment to manufacturing, as well as medicine. Since modern AI's inception decades ago, practitioners in the radiological sciences have been pioneering its development and implementation in medicine, particularly in areas related to diagnostic imaging and therapy. In this anniversary article, we reflect on the lessons learned from AI's chequered history. We further summarize the current status of AI in the radiological sciences, highlighting, with examples, its impressive achievements and its effect on reshaping the practice of medical imaging and radiotherapy in the areas of computer-aided detection, diagnosis, prognosis, and decision support. Moving beyond the commercial hype of AI into reality, we discuss the current challenges to overcome for AI to achieve its promised hope of providing better precision healthcare for each patient while reducing the cost burden on families and society at large.
Affiliation(s)
- Issam El Naqa
- Department of Radiation Oncology, University of Michigan, Ann Arbor, MI, USA
- Masoom A Haider
- Department of Medical Imaging and Lunenfeld-Tanenbaum Research Institute, University of Toronto, Toronto, ON, Canada
- Randall K Ten Haken
- Department of Radiation Oncology, University of Michigan, Ann Arbor, MI, USA
37
Liu Z, Liu X, Xiao B, Wang S, Miao Z, Sun Y, Zhang F. Segmentation of organs-at-risk in cervical cancer CT images with a convolutional neural network. Phys Med 2020; 69:184-191. [PMID: 31918371 DOI: 10.1016/j.ejmp.2019.12.008] [Citation(s) in RCA: 50] [Impact Index Per Article: 12.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/16/2019] [Revised: 11/12/2019] [Accepted: 12/08/2019] [Indexed: 02/06/2023] Open
Abstract
PURPOSE We introduced and evaluated an end-to-end organs-at-risk (OARs) segmentation model that can provide accurate and consistent OARs segmentation results in much less time. METHODS We collected the Computed Tomography (CT) scans of 105 patients diagnosed with locally advanced cervical cancer and treated with radiotherapy at a single hospital. Seven organs, including the bladder, bone marrow, left femoral head, right femoral head, rectum, small intestine and spinal cord, were defined as OARs. The contours previously delineated manually by each patient's radiation oncologist before radiotherapy, and confirmed by a professional committee consisting of eight experienced oncologists, were used as the ground-truth masks. A multi-class segmentation model based on U-Net was designed to fulfil the OARs segmentation task. The Dice Similarity Coefficient (DSC) and 95th percentile Hausdorff Distance (HD) were used as quantitative evaluation metrics for the proposed method. RESULTS The mean DSC values of the proposed method were 0.924, 0.854, 0.906, 0.900, 0.791, 0.833 and 0.827 for the bladder, bone marrow, left femoral head, right femoral head, rectum, small intestine and spinal cord, respectively. The corresponding mean HD values were 5.098, 1.993, 1.390, 1.435, 5.949, 5.281 and 3.269. CONCLUSIONS Our proposed method can help reduce the inter-observer and intra-observer variability of manual OARs delineation and lessen oncologists' efforts. The experimental results demonstrate that our model outperforms the benchmark U-Net model, and the oncologists' evaluations show that the segmentation results are highly acceptable for use in radiation therapy planning.
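As a companion to the DSC values reported above, per-organ Dice for a multi-class label volume reduces to a few lines of NumPy. This is a minimal evaluation sketch under the assumption that predictions and ground truth are integer label maps with 0 as background; the label indices assigned to the seven OARs are whatever a given pipeline uses:

```python
# Per-class Dice similarity coefficient for integer label volumes.
# pred and truth must have identical shapes; class 0 is background.
import numpy as np

def dice_per_class(pred: np.ndarray, truth: np.ndarray, n_classes: int) -> dict:
    scores = {}
    for c in range(1, n_classes):
        p, t = pred == c, truth == c
        denom = p.sum() + t.sum()
        # DSC = 2|P ∩ T| / (|P| + |T|); NaN when the class is absent in both
        scores[c] = 2.0 * np.logical_and(p, t).sum() / denom if denom else np.nan
    return scores
```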
Affiliation(s)
- Zhikai Liu
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China
- Xia Liu
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China
- Bin Xiao
- MedMind Technology Co., Ltd., Beijing 100080, China
- Shaobin Wang
- MedMind Technology Co., Ltd., Beijing 100080, China
- Zheng Miao
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China
- Yuliang Sun
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China
- Fuquan Zhang
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China
38
Tang X. The role of artificial intelligence in medical imaging research. BJR Open 2019; 2:20190031. [PMID: 33178962 PMCID: PMC7594889 DOI: 10.1259/bjro.20190031] [Citation(s) in RCA: 33] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2019] [Revised: 10/01/2019] [Accepted: 11/13/2019] [Indexed: 12/22/2022] Open
Abstract
Without doubt, artificial intelligence (AI) is the most discussed topic today in medical imaging research, in both diagnostics and therapeutics. For diagnostic imaging alone, the number of publications on AI has increased from about 100-150 per year in 2007-2008 to 1,000-1,100 per year in 2017-2018. Researchers have applied AI to automatically recognize complex patterns in imaging data and provide quantitative assessments of radiographic characteristics. In radiation oncology, AI has been applied to the different image modalities used at different stages of treatment, e.g. tumor delineation and treatment assessment. Radiomics, the high-throughput extraction of a large number of image features from radiation images, is one of the most popular research topics today in medical imaging research. AI is the essential engine for processing massive numbers of medical images, and it can therefore uncover disease characteristics that escape the naked eye. The objectives of this paper are to review the history of AI in medical imaging research, its current role, the challenges that need to be resolved before AI can be adopted widely in the clinic, and its potential future.
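To make the high-throughput feature-extraction idea behind radiomics concrete, a toy first-order feature extractor is sketched below; real studies rely on validated toolkits such as PyRadiomics and compute hundreds of standardized features, so this is illustrative only and the bin count is an arbitrary assumption:

```python
# Toy first-order "radiomics" features over a region of interest (ROI).
import numpy as np
from scipy import stats

def first_order_features(image: np.ndarray, mask: np.ndarray) -> dict:
    roi = image[mask.astype(bool)]          # intensities inside the ROI
    counts, _ = np.histogram(roi, bins=64)  # discretize the intensities
    p = counts[counts > 0] / counts.sum()   # histogram probabilities
    return {
        "mean": float(roi.mean()),
        "variance": float(roi.var()),
        "skewness": float(stats.skew(roi)),
        "entropy": float(-(p * np.log2(p)).sum()),  # Shannon entropy, bits
    }
```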
39
Kearney V, Chan JW, Wang T, Perry A, Yom SS, Solberg TD. Attention-enabled 3D boosted convolutional neural networks for semantic CT segmentation using deep supervision. Phys Med Biol 2019; 64:135001. [DOI: 10.1088/1361-6560/ab2818] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]