1. Rabe M, Kurz C, Thummerer A, Landry G. Artificial intelligence for treatment delivery: image-guided radiotherapy. Strahlenther Onkol 2024. PMID: 39138806; DOI: 10.1007/s00066-024-02277-9.
Abstract
Radiation therapy (RT) is a highly digitized field relying heavily on computational methods and, as such, has a high affinity for the automation potential afforded by modern artificial intelligence (AI). This is particularly relevant where imaging is concerned and is especially so during image-guided RT (IGRT). With the advent of online adaptive RT (ART) workflows at magnetic resonance (MR) linear accelerators (linacs) and at cone-beam computed tomography (CBCT) linacs, the need for automation is further increased. AI as applied to modern IGRT is thus one area of RT where we can expect important developments in the near future. In this review article, after outlining modern IGRT and online ART workflows, we cover the role of AI in CBCT and MRI correction for dose calculation, auto-segmentation on IGRT imaging, motion management, and response assessment based on in-room imaging.
Affiliation(s)
- Moritz Rabe
  - Department of Radiation Oncology, LMU University Hospital, LMU Munich, Marchioninistraße 15, 81377 Munich, Bavaria, Germany
- Christopher Kurz
  - Department of Radiation Oncology, LMU University Hospital, LMU Munich, Marchioninistraße 15, 81377 Munich, Bavaria, Germany
- Adrian Thummerer
  - Department of Radiation Oncology, LMU University Hospital, LMU Munich, Marchioninistraße 15, 81377 Munich, Bavaria, Germany
- Guillaume Landry
  - Department of Radiation Oncology, LMU University Hospital, LMU Munich, Marchioninistraße 15, 81377 Munich, Bavaria, Germany
  - German Cancer Consortium (DKTK), partner site Munich, a partnership between the DKFZ and the LMU University Hospital Munich, Marchioninistraße 15, 81377 Munich, Bavaria, Germany
  - Bavarian Cancer Research Center (BZKF), Marchioninistraße 15, 81377 Munich, Bavaria, Germany
2. Zhu J, Yan J, Zhang J, Yu L, Song A, Zheng Z, Chen Y, Wang S, Chen Q, Liu Z, Zhang F. Automatic segmentation of high-risk clinical target volume and organs at risk in brachytherapy of cervical cancer with a convolutional neural network. Cancer Radiother 2024; 28:354-364. PMID: 39147623; DOI: 10.1016/j.canrad.2024.03.002.
Abstract
PURPOSE This study aimed to design an auto-delineation model based on a convolutional neural network (CNN) for generating high-risk clinical target volumes (HR-CTV) and organs at risk (OAR) in image-guided adaptive brachytherapy for cervical cancer. MATERIALS AND METHODS A novel SERes-u-net was trained and tested using CT scans from 98 patients with locally advanced cervical cancer who underwent image-guided adaptive brachytherapy. The Dice similarity coefficient (DSC), 95th percentile Hausdorff distance (HD95), and clinical assessment were used for evaluation. RESULTS The mean DSC of our model was 80.8%, 91.9%, 85.2%, 60.4%, and 82.8% for the HR-CTV, bladder, rectum, sigmoid, and bowel loops, respectively. The corresponding HD95 values were 5.23 mm, 4.75 mm, 4.06 mm, 30.0 mm, and 20.5 mm. The evaluation revealed that 99.3% of the CNN-generated HR-CTV slices were acceptable to oncologist A and 100% to oncologist B. Most OAR segmentations were clinically acceptable, except for 25% of sigmoid contours, which required significant revision in the opinion of oncologist A. There was a significant difference between the two oncologists in the clinical evaluation of the CNN-generated HR-CTV (P<0.001), whereas their scores for the OAR did not differ significantly. In the consistency evaluation, a large discrepancy was observed between senior and junior clinicians: about 40% of the SERes-u-net-generated contours were judged better by junior clinicians. CONCLUSION The HR-CTV and OAR contours of cervical cancer generated by the proposed CNN model can be used clinically, potentially improving segmentation consistency and contouring efficiency in the image-guided adaptive brachytherapy workflow.
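The two evaluation metrics above, the Dice similarity coefficient and the 95th percentile Hausdorff distance, have standard definitions. A minimal NumPy sketch (illustrative only, not the authors' code; the function names are ours, and HD95 is computed here over full voxel sets rather than surface voxels for brevity):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient of two binary masks: 2|A∩B| / (|A| + |B|)."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hd95(a: np.ndarray, b: np.ndarray, spacing: float = 1.0) -> float:
    """Symmetric 95th-percentile Hausdorff distance between two masks' voxel sets."""
    pa, pb = np.argwhere(a) * spacing, np.argwhere(b) * spacing
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)  # all pairwise distances
    return max(np.percentile(d.min(axis=1), 95),   # A -> B nearest-point distances
               np.percentile(d.min(axis=0), 95))   # B -> A nearest-point distances

# toy example: two overlapping square masks
a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True
b = np.zeros((10, 10), dtype=bool); b[3:9, 3:9] = True
print(f"DSC = {dice(a, b):.3f}")  # 2*25/(36+36) ≈ 0.694
```

In published evaluations HD95 is normally taken over contour surface points with anisotropic voxel spacing; the voxel-set version here only illustrates the percentile logic.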
Affiliation(s)
- J Zhu
  - Department of Radiation Oncology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China
- J Yan
  - Department of Radiation Oncology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China
- J Zhang
  - Department of Radiation Oncology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China
- L Yu
  - Department of Radiation Oncology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China
- A Song
  - Department of Radiation Oncology, Cangzhou Central Hospital, Cangzhou, Hebei 061001, China
- Z Zheng
  - Department of Radiation Oncology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China
- Y Chen
  - MedMind Technology Co., Ltd., Beijing 100730, China
- S Wang
  - MedMind Technology Co., Ltd., Beijing 100730, China
- Q Chen
  - MedMind Technology Co., Ltd., Beijing 100730, China
- Z Liu
  - Department of Radiation Oncology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China
- F Zhang
  - Department of Radiation Oncology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China
3. Kumar K, Yeo AU, McIntosh L, Kron T, Wheeler G, Franich RD. Deep Learning Auto-Segmentation Network for Pediatric Computed Tomography Data Sets: Can We Extrapolate From Adults? Int J Radiat Oncol Biol Phys 2024; 119:1297-1306. PMID: 38246249; DOI: 10.1016/j.ijrobp.2024.01.201.
Abstract
PURPOSE Artificial intelligence (AI)-based auto-segmentation models hold promise for enhanced efficiency and consistency in organ contouring for adaptive radiation therapy and radiation therapy planning. However, their performance on pediatric computed tomography (CT) data and their cross-scanner compatibility remain unclear. This study aimed to evaluate the performance of AI-based auto-segmentation models trained on adult CT data when applied to pediatric data sets and to explore the improvement gained by including pediatric training data. It also examined the models' ability to accurately segment CT data acquired from different scanners. METHODS AND MATERIALS Using the nnU-Net framework, segmentation models were trained on data sets of adult, pediatric, and combined CT scans for 7 pelvic/thoracic organs. Each model was trained on 290 to 300 cases per category and organ. Training data sets included a combination of clinical data and several open repositories. The study incorporated a database of 459 pediatric (0-16 years) and 950 adult (>18 years) CT scans, all with human expert ground-truth contours of the selected organs. Performance was evaluated using Dice similarity coefficients (DSC) of the model-generated contours. RESULTS AI models trained exclusively on adult data underperformed on pediatric data, especially for the 0 to 2 age group: mean DSC was below 0.5 for the bladder and spleen. Adding pediatric training data yielded significant improvement for all age groups, achieving a mean DSC above 0.85 for all organs in every age group. Larger organs such as the liver and kidneys maintained consistent performance across age groups for all models. No significant difference emerged in the cross-scanner performance evaluation, suggesting robust cross-scanner generalization. CONCLUSIONS For optimal segmentation across age groups, it is important to include pediatric data in the training of segmentation models. The successful cross-scanner generalization also supports the real-world clinical applicability of these AI models. This study emphasizes the significance of data set diversity in training robust AI systems for medical image interpretation tasks.
Affiliation(s)
- Kartik Kumar
  - Physical Sciences Department, Peter MacCallum Cancer Centre, Victoria, Australia; School of Science, RMIT University, Melbourne, Victoria, Australia
- Adam U Yeo
  - Physical Sciences Department, Peter MacCallum Cancer Centre, Victoria, Australia; School of Science, RMIT University, Melbourne, Victoria, Australia; Sir Peter MacCallum Department of Oncology, University of Melbourne, Melbourne, Victoria, Australia
- Lachlan McIntosh
  - Physical Sciences Department, Peter MacCallum Cancer Centre, Victoria, Australia; School of Science, RMIT University, Melbourne, Victoria, Australia
- Tomas Kron
  - Physical Sciences Department, Peter MacCallum Cancer Centre, Victoria, Australia; School of Science, RMIT University, Melbourne, Victoria, Australia; Sir Peter MacCallum Department of Oncology, University of Melbourne, Melbourne, Victoria, Australia; Centre for Medical Radiation Physics, University of Wollongong, Wollongong, New South Wales, Australia
- Greg Wheeler
  - Physical Sciences Department, Peter MacCallum Cancer Centre, Victoria, Australia; Sir Peter MacCallum Department of Oncology, University of Melbourne, Melbourne, Victoria, Australia
- Rick D Franich
  - Physical Sciences Department, Peter MacCallum Cancer Centre, Victoria, Australia; School of Science, RMIT University, Melbourne, Victoria, Australia
4. Radici L, Piva C, Casanova Borca V, Cante D, Ferrario S, Paolini M, Cabras L, Petrucci E, Franco P, La Porta MR, Pasquino M. Clinical evaluation of a deep learning CBCT auto-segmentation software for prostate adaptive radiation therapy. Clin Transl Radiat Oncol 2024; 47:100796. PMID: 38884004; PMCID: PMC11176659; DOI: 10.1016/j.ctro.2024.100796.
Abstract
Purpose The aim of the present study was to characterize a deep learning-based auto-segmentation software (DL) for prostate cone-beam computed tomography (CBCT) images and to evaluate its applicability in the clinical adaptive radiation therapy routine. Materials and methods Ten patients who received radiation therapy with definitive intent to the prostate gland and seminal vesicles were selected. Femoral heads, bladder, rectum, prostate, and seminal vesicles were retrospectively contoured by four expert radiation oncologists on the patients' CBCT scans acquired during treatment. Consensus contours (CC) were generated from these data and compared with those created by DL with different algorithms, trained on CBCT (DL-CBCT) or computed tomography (DL-CT). The Dice similarity coefficient (DSC), centre-of-mass (COM) shift, and volume relative variation (VRV) were chosen as comparison metrics. Since no tolerance limit can be defined, results were also compared with the inter-operator variability (IOV) using the same metrics. Results The best agreement between DL and CC was observed for the femoral heads (DSC of 0.96 for both DL-CBCT and DL-CT). Performance worsened for low-contrast soft-tissue organs: the worst results were found for the seminal vesicles (DSC of 0.70 and 0.59 for DL-CBCT and DL-CT, respectively). The analysis shows that it is appropriate to use algorithms trained on the specific imaging modality. Furthermore, for almost all considered structures, the statistical analysis showed no significant difference between DL-CBCT and human operators in terms of IOV. Conclusions The accuracy of DL-CBCT is in accordance with the CC, and its use in clinical practice is justified by the comparison with the inter-operator variability.
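Alongside DSC, this study uses centre-of-mass (COM) shift and volume relative variation (VRV) as comparison metrics. A minimal sketch of both for binary masks (illustrative only; the function names and the VRV sign convention are our assumptions, not the authors' definitions):

```python
import numpy as np

def com_shift(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """Euclidean distance between the centres of mass of two binary masks (mm)."""
    ca = np.argwhere(a).mean(axis=0) * np.asarray(spacing)
    cb = np.argwhere(b).mean(axis=0) * np.asarray(spacing)
    return float(np.linalg.norm(ca - cb))

def vrv(test: np.ndarray, reference: np.ndarray) -> float:
    """Volume relative variation: (V_test - V_ref) / V_ref (sign convention assumed)."""
    return (test.sum() - reference.sum()) / reference.sum()

# toy 3D example: the same cube shifted by one voxel along each axis
a = np.zeros((10, 10, 10), dtype=bool); a[2:6, 2:6, 2:6] = True
b = np.zeros((10, 10, 10), dtype=bool); b[3:7, 3:7, 3:7] = True
print(com_shift(a, b), vrv(a, b))  # shift = sqrt(3) voxels, VRV = 0 (equal volumes)
```

With real data the `spacing` argument would carry the anisotropic voxel dimensions from the CBCT header.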
Affiliation(s)
- Laura Cabras
  - Medical Physics Department, ASL TO4 Ivrea, Italy
- Pierfrancesco Franco
  - Department of Translational Sciences (DIMET), University of Eastern Piedmont, Novara, Italy
  - Department of Radiation Oncology, 'Maggiore della Carità' University Hospital, Novara, Italy
5. Zhao H, Liang X, Meng B, Dohopolski M, Choi B, Cai B, Lin MH, Bai T, Nguyen D, Jiang S. Progressive auto-segmentation for cone-beam computed tomography-based online adaptive radiotherapy. Phys Imaging Radiat Oncol 2024; 31:100610. PMID: 39132556; PMCID: PMC11315102; DOI: 10.1016/j.phro.2024.100610.
Abstract
Background and purpose Accurate and automated segmentation of targets and organs-at-risk (OARs) is crucial for the successful clinical application of online adaptive radiotherapy (ART). Current methods for cone-beam computed tomography (CBCT) auto-segmentation often fail to reach clinical acceptability, and they overlook the wealth of information available from initial planning and prior adaptive fractions that could enhance segmentation precision. Materials and methods We introduce a novel framework that incorporates data from a patient's initial plan and previous adaptive fractions, harnessing this additional temporal context to refine the segmentation accuracy for the current fraction's CBCT images. We present LSTM-UNet, an architecture that integrates Long Short-Term Memory (LSTM) units into the skip connections of the traditional U-Net framework to retain information from previous fractions. The models underwent initial pre-training with simulated data, followed by fine-tuning on a clinical dataset. Results Our proposed model's segmentation predictions yield an average Dice similarity coefficient of 79% over 8 head-and-neck organs and targets, compared with 52% for a baseline model without prior knowledge and 78% for a baseline model with prior knowledge but no memory. Conclusions Our proposed model surpasses baseline segmentation frameworks by effectively utilizing information from prior fractions, thus reducing the effort required of clinicians to revise the auto-segmentation results. Moreover, it can be combined with registration-based methods that offer better prior knowledge. Our model holds promise for integration into the online ART workflow, offering precise segmentation capabilities on synthetic CT images.
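The paper's LSTM-UNet places LSTM units in the U-Net skip connections so that encoder features from earlier fractions can inform the current one. As a rough NumPy illustration of the recurrence involved (a toy single cell on flattened features with random weights; not the authors' architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Minimal LSTM cell operating on a flattened skip-connection feature vector."""
    def __init__(self, n: int):
        # one weight matrix per gate (input, forget, output, candidate),
        # each acting on the concatenation [input; hidden]
        self.W = {g: rng.standard_normal((n, 2 * n)) * 0.1 for g in "ifoc"}

    def step(self, x, h, c):
        z = np.concatenate([x, h])
        i = sigmoid(self.W["i"] @ z)               # input gate
        f = sigmoid(self.W["f"] @ z)               # forget gate
        o = sigmoid(self.W["o"] @ z)               # output gate
        c = f * c + i * np.tanh(self.W["c"] @ z)   # updated cell memory
        h = o * np.tanh(c)                         # features passed on to the decoder
        return h, c

# carry hidden state and cell memory across three "fractions" of skip features
n = 8
cell = LSTMCell(n)
h, c = np.zeros(n), np.zeros(n)
for fraction_features in rng.standard_normal((3, n)):
    h, c = cell.step(fraction_features, h, c)
print(h.shape)  # (8,)
```

In the actual model the recurrence would run convolutionally over multi-channel feature maps at each skip-connection resolution, with learned weights; this scalar-vector version only shows how the forget/input gates let earlier fractions persist in the cell state.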
Affiliation(s)
- Hengrui Zhao, Xiao Liang, Boyu Meng, Michael Dohopolski, Byongsu Choi, Bin Cai, Mu-Han Lin, Ti Bai, Dan Nguyen, and Steve Jiang
  - Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA (all authors)
6. Nealon KA, Han EY, Kry SF, Nguyen C, Pham M, Reed VK, Rosenthal D, Simiele S, Court LE. Monitoring Variations in the Use of Automated Contouring Software. Pract Radiat Oncol 2024; 14:e75-e85. PMID: 37797883; DOI: 10.1016/j.prro.2023.09.004.
Abstract
PURPOSE Our purpose was to identify variations in the clinical use of automatically generated contours that could be attributed to software error, off-label use, or automation bias. METHODS AND MATERIALS For 500 head and neck patients contoured by an in-house automated contouring system, the Dice similarity coefficient and added path length were calculated between the contours generated by the automated system and the final contours after editing for clinical use. Statistical process control was used, and control charts were generated with control limits at 3 standard deviations. Contours that exceeded the thresholds were investigated to determine the cause. Moving-mean control plots were then generated to identify dosimetrists who were editing less over time, which could indicate automation bias. RESULTS Major contouring edits were flagged for 1.0% of brain, 3.1% of brain stem, 3.5% of left cochlea, 2.9% of right cochlea, 4.8% of esophagus, 4.1% of left eye, 4.0% of right eye, 2.2% of left lens, 4.9% of right lens, 2.5% of mandible, 11% of left optic nerve, 6.1% of right optic nerve, 3.8% of left parotid, 5.9% of right parotid, and 3.0% of spinal cord contours. Identified causes of editing included unexpected patient positioning, deviation from standard clinical practice, and disagreement between dosimetrist preference and automated contouring style. A statistically significant (P < .05) difference was identified between the contour-editing practices of dosimetrists, with 1 dosimetrist editing more across all organs at risk. Eighteen percent (27/150) of the moving-mean control plots created for 5 dosimetrists indicated that the amount of contour editing was decreasing over time, possibly corresponding to automation bias. CONCLUSIONS The developed system detected statistically significant edits caused by software error, unexpected clinical use, and automation bias. The increased ability to detect systematic errors in the editing of automatically generated contours will improve the safety of the automatic treatment planning workflow.
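The statistical-process-control step described above (Shewhart-style control limits at 3 standard deviations) can be sketched as follows; the limits are derived here from a baseline of in-control cases, the numbers are invented, and the function names are illustrative, not from the paper:

```python
import numpy as np

def control_limits(baseline: np.ndarray, n_sigma: float = 3.0):
    """Shewhart-style lower/upper control limits from in-control baseline data."""
    mu, sd = baseline.mean(), baseline.std(ddof=1)
    return mu - n_sigma * sd, mu + n_sigma * sd

# baseline: added path length (invented units) for edits judged in-control
baseline = np.array([10.0, 12.0, 11.0, 9.0, 10.5, 11.5, 10.0, 12.5, 11.0, 11.5])
lo, hi = control_limits(baseline)

# flag new cases outside the limits (candidate software error or off-label use)
new_cases = np.array([10.5, 80.0, 11.0])
flags = (new_cases < lo) | (new_cases > hi)
print(flags)  # only the 80.0 case is flagged
```

Deriving the limits from separate in-control data matters: computing them from a batch that already contains the anomaly inflates the standard deviation and can mask exactly the edits one wants to catch.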
Affiliation(s)
- Kelly A Nealon
  - Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts
- Eun Young Han
  - Department of Radiation Physics - Patient Care, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Stephen F Kry
  - Radiation Physics Outreach, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Callistus Nguyen
  - Department of Radiation Physics - Research, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Mary Pham
  - Department of Radiation Physics - Research, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Valerie K Reed
  - Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas
- David Rosenthal
  - Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Samantha Simiele
  - Department of Radiation Physics - Patient Care, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Laurence E Court
  - Department of Radiation Physics - Patient Care, The University of Texas MD Anderson Cancer Center, Houston, Texas
7. Boldrini L, D'Aviero A, De Felice F, Desideri I, Grassi R, Greco C, Iorio GC, Nardone V, Piras A, Salvestrini V. Artificial intelligence applied to image-guided radiation therapy (IGRT): a systematic review by the Young Group of the Italian Association of Radiotherapy and Clinical Oncology (yAIRO). Radiol Med 2024; 129:133-151. PMID: 37740838; DOI: 10.1007/s11547-023-01708-4.
Abstract
INTRODUCTION The advent of image-guided radiation therapy (IGRT) has changed the workflow of radiation treatments by enabling highly collimated treatments. Artificial intelligence (AI) and radiomics are tools that have shown promising results for diagnosis, treatment optimization, and outcome prediction. This review aims to assess the impact of AI and radiomics on modern IGRT modalities in RT. METHODS A PubMed/MEDLINE and Embase systematic review was conducted to investigate the impact of radiomics and AI on modern IGRT modalities. The search strategies were "Radiomics" AND "Cone Beam Computed Tomography"; "Radiomics" AND "Magnetic Resonance guided Radiotherapy"; "Radiomics" AND "on board Magnetic Resonance Radiotherapy"; "Artificial Intelligence" AND "Cone Beam Computed Tomography"; "Artificial Intelligence" AND "Magnetic Resonance guided Radiotherapy"; and "Artificial Intelligence" AND "on board Magnetic Resonance Radiotherapy"; only original articles published up to 1 November 2022 were considered. RESULTS A total of 402 studies were retrieved from PubMed and Embase using this search strategy. After the complete selection process, 84 papers were analyzed: 23 on the application of radiomics to IGRT and 61 on the impact of AI on IGRT techniques. DISCUSSION AI and radiomics appear to significantly impact all phases of the IGRT workflow, even though the evidence in the literature is based on retrospective data. Further studies are needed to confirm these tools' potential and to provide a stronger correlation with clinical outcomes and gold-standard treatment strategies.
Affiliation(s)
- Luca Boldrini
  - UOC Radioterapia Oncologica, Fondazione Policlinico Universitario IRCCS "A. Gemelli", Rome, Italy
  - Università Cattolica del Sacro Cuore, Rome, Italy
- Andrea D'Aviero
  - Radiation Oncology, Mater Olbia Hospital, Olbia, Sassari, Italy
- Francesca De Felice
  - Radiation Oncology, Department of Radiological, Oncological and Pathological Sciences, "Sapienza" University of Rome, Policlinico Umberto I, Rome, Italy
- Isacco Desideri
  - Radiation Oncology Unit, Azienda Ospedaliero-Universitaria Careggi, Department of Experimental and Clinical Biomedical Sciences, University of Florence, Florence, Italy
- Roberta Grassi
  - Department of Precision Medicine, University of Campania "L. Vanvitelli", Naples, Italy
- Carlo Greco
  - Department of Radiation Oncology, Università Campus Bio-Medico di Roma, Fondazione Policlinico Universitario Campus Bio-Medico, Rome, Italy
- Valerio Nardone
  - Department of Precision Medicine, University of Campania "L. Vanvitelli", Naples, Italy
- Antonio Piras
  - UO Radioterapia Oncologica, Villa Santa Teresa, Bagheria, Palermo, Italy
- Viola Salvestrini
  - Radiation Oncology Unit, Azienda Ospedaliero-Universitaria Careggi, Department of Experimental and Clinical Biomedical Sciences, University of Florence, Florence, Italy
  - Cyberknife Center, Istituto Fiorentino di Cura e Assistenza (IFCA), 50139 Florence, Italy
8. Weisman AJ, Huff DT, Govindan RM, Chen S, Perk TG. Multi-organ segmentation of CT via convolutional neural network: impact of training setting and scanner manufacturer. Biomed Phys Eng Express 2023; 9:065021. PMID: 37725928; DOI: 10.1088/2057-1976/acfb06.
Abstract
Objective. Automated organ segmentation on CT images can enable the clinical use of advanced quantitative software devices, but model performance sensitivities must be understood before widespread adoption can occur. The goal of this study was to investigate performance differences between convolutional neural networks (CNNs) trained to segment one (single-class) versus multiple (multi-class) organs, and between CNNs trained on scans from a single manufacturer versus multiple manufacturers. Methods. The multi-class CNN was trained on CT images from 455 whole-body PET/CT scans (413 for training, 42 for testing) acquired with Siemens, GE, and Philips PET/CT scanners, in which 16 organs were segmented. The multi-class CNN was compared with 16 smaller single-class CNNs trained on the same data but with segmentations of only one organ per model. In addition, CNNs trained on Siemens-only (N = 186) and GE-only (N = 219) scans (manufacturer-specific) were compared with CNNs trained on data from both Siemens and GE scanners (manufacturer-mixed). Segmentation performance was quantified using five performance metrics, including the Dice similarity coefficient (DSC). Results. The multi-class CNN performed well compared with previous studies, even for organs usually considered difficult auto-segmentation targets (e.g., pancreas, bowel). Segmentations from the multi-class CNN were significantly superior to those from the smaller single-class CNNs for most organs, and the 16 single-class models took, on average, six times longer to segment all 16 organs than the single multi-class model. The manufacturer-mixed approach achieved minimally higher performance than the manufacturer-specific approach. Significance. A CNN trained on contours of multiple organs and CT data from multiple manufacturers yielded high-quality segmentations. Such a model is an essential enabler of software devices that quantify and analyze these data to determine a patient's treatment response; to date, whole-organ segmentation has not been widely adopted because of the intense manual workload and time required.
Affiliation(s)
- Amy J Weisman
  - AIQ Solutions, Madison, WI, United States of America
- Daniel T Huff
  - AIQ Solutions, Madison, WI, United States of America
- Song Chen
  - Department of Nuclear Medicine, The First Hospital of China Medical University, Shenyang, Liaoning, People's Republic of China
9. Mervak BM, Fried JG, Wasnik AP. A Review of the Clinical Applications of Artificial Intelligence in Abdominal Imaging. Diagnostics (Basel) 2023; 13:2889. PMID: 37761253; PMCID: PMC10529018; DOI: 10.3390/diagnostics13182889.
Abstract
Artificial intelligence (AI) has been a topic of substantial interest for radiologists in recent years. Although many of the first clinical applications were in the neuro, cardiothoracic, and breast imaging subspecialties, the number of investigated and real-world applications in body imaging has been increasing, with more than 30 FDA-approved algorithms now available for the abdomen and pelvis. In this manuscript, we explore the fundamentals of artificial intelligence and machine learning, review the major functions that AI algorithms may perform, introduce current and potential future applications of AI in abdominal imaging, provide a basic understanding of the pathways by which AI algorithms can receive FDA approval, and explore some of the challenges of implementing AI in clinical practice.
Affiliation(s)
- Ashish P. Wasnik
  - Department of Radiology, University of Michigan—Michigan Medicine, 1500 E. Medical Center Dr., Ann Arbor, MI 48109, USA
10. Farah L, Davaze-Schneider J, Martin T, Nguyen P, Borget I, Martelli N. Are current clinical studies on artificial intelligence-based medical devices comprehensive enough to support a full health technology assessment? A systematic review. Artif Intell Med 2023; 140:102547. PMID: 37210155; DOI: 10.1016/j.artmed.2023.102547.
Abstract
INTRODUCTION Artificial intelligence-based medical devices (AI-based MDs) are experiencing exponential growth in healthcare. This study aimed to investigate whether current studies assessing AI contain the information required for health technology assessment (HTA) by HTA bodies. METHODS We conducted a systematic literature review based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology to extract articles published between 2016 and 2021 related to the assessment of AI-based MDs. Data extraction focused on study characteristics, technology, algorithms, comparators, and results. AI quality assessment and HTA scores were calculated to evaluate whether the items present in the included studies met the HTA requirements. We performed a linear regression for the HTA and AI scores with the explanatory variables of impact factor, publication date, and medical specialty. We conducted a univariate analysis of the HTA score and a multivariate analysis of the AI score with an alpha risk of 5%. RESULTS Of 5578 retrieved records, 56 were included. The mean AI quality assessment score was 67%; 32% of articles had an AI quality score ≥70%, 50% had a score between 50% and 70%, and 18% had a score under 50%. The highest quality scores were observed for the study design (82%) and optimisation (69%) categories, whereas the scores were lowest in the clinical practice category (23%). The mean HTA score was 52% across all seven domains. All of the studies assessed clinical effectiveness, whereas only 9% evaluated safety and 20% evaluated economic issues. There was a statistically significant relationship between the impact factor and both the HTA and AI scores (both p = 0.046). DISCUSSION Clinical studies on AI-based MDs have limitations and often lack adapted, robust, and complete evidence. High-quality datasets are also required, because the output data can only be trusted if the inputs are reliable. The existing assessment frameworks are not specifically designed to assess AI-based MDs. From the perspective of regulatory authorities, we suggest that these frameworks be adapted to assess interpretability, explainability, cybersecurity, and the safety of ongoing updates. From the perspective of HTA agencies, we highlight that transparency, professional and patient acceptance, ethical issues, and organizational changes are required for the implementation of these devices. Economic assessments of AI should rely on a robust methodology (business impact or health economic models) to provide decision-makers with more reliable evidence. CONCLUSION Current AI studies are insufficient to cover HTA prerequisites. HTA processes also need to be adapted, because they do not consider the important specificities of AI-based MDs. Specific HTA workflows and accurate assessment tools should be designed to standardise evaluations, generate reliable evidence, and create confidence.
Affiliation(s)
- Line Farah: Groupe de Recherche et d'accueil en Droit et Economie de la Santé (GRADES) Department, University Paris-Saclay, Orsay, France; Innovation Center for Medical Devices, Foch Hospital, 40 Rue Worth, 92150 Suresnes, France
- Julie Davaze-Schneider: Pharmacy Department, Georges Pompidou European Hospital, AP-HP, 20 Rue Leblanc, 75015 Paris, France
- Tess Martin: Groupe de Recherche et d'accueil en Droit et Economie de la Santé (GRADES) Department, University Paris-Saclay, Orsay, France; Pharmacy Department, Georges Pompidou European Hospital, AP-HP, 20 Rue Leblanc, 75015 Paris, France
- Pierre Nguyen: Pharmacy Department, Georges Pompidou European Hospital, AP-HP, 20 Rue Leblanc, 75015 Paris, France
- Isabelle Borget: Groupe de Recherche et d'accueil en Droit et Economie de la Santé (GRADES) Department, University Paris-Saclay, Orsay, France; Department of Biostatistics and Epidemiology, Gustave Roussy, University Paris-Saclay, 94805 Villejuif, France; Oncostat U1018, Inserm, University Paris-Saclay, Équipe Labellisée Ligue Contre le Cancer, Villejuif, France
- Nicolas Martelli: Groupe de Recherche et d'accueil en Droit et Economie de la Santé (GRADES) Department, University Paris-Saclay, Orsay, France; Pharmacy Department, Georges Pompidou European Hospital, AP-HP, 20 Rue Leblanc, 75015 Paris, France

11
Cubero L, García-Elcano L, Mylona E, Boue-Rafle A, Cozzarini C, Ubeira Gabellini MG, Rancati T, Fiorino C, de Crevoisier R, Acosta O, Pascau J. Deep learning-based segmentation of prostatic urethra on computed tomography scans for treatment planning. Phys Imaging Radiat Oncol 2023; 26:100431. [PMID: 37007914 PMCID: PMC10064422 DOI: 10.1016/j.phro.2023.100431] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 11/15/2022] [Revised: 03/08/2023] [Accepted: 03/14/2023] [Indexed: 04/04/2023]
Abstract
Background and purpose The intraprostatic urethra is an organ at risk in prostate cancer radiotherapy, but its segmentation in computed tomography (CT) is challenging. This work sought to: i) propose an automatic pipeline for intraprostatic urethra segmentation in CT, ii) analyze the dose to the urethra, and iii) compare the predictions to magnetic resonance (MR) contours. Materials and methods First, we trained deep learning networks to segment the rectum, bladder, prostate, and seminal vesicles. Then, the proposed deep learning urethra segmentation model was trained with the bladder and prostate distance transforms and 44 labeled CTs with visible catheters. The evaluation was performed on 11 datasets, calculating the centerline distance (CLD) and the percentage of the centerline within 3.5 and 5 mm. We applied this method to a dataset of 32 patients treated with intensity-modulated radiation therapy (IMRT) to quantify the urethral dose. Finally, we compared predicted intraprostatic urethra contours to manual delineations in MR for 15 patients without a catheter. Results A mean CLD of 1.6 ± 0.8 mm for the whole urethra and 1.7 ± 1.4, 1.5 ± 0.9, and 1.7 ± 0.9 mm for the top, middle, and bottom thirds was obtained in CT. On average, 94% and 97% of the segmented centerlines were within a 3.5 mm and 5 mm radius, respectively. In IMRT, the urethra received a higher dose than the overall prostate. We also found a slight deviation between the predicted and manual MR delineations. Conclusion A fully automatic segmentation pipeline was validated to delineate the intraprostatic urethra in CT images.
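The centerline metrics reported in this abstract (mean CLD, percentage of the centerline within a given radius) can be computed from two centerlines sampled as 3D point sets. A minimal sketch, not the authors' implementation: the one-sided nearest-neighbour formulation, millimetre coordinates, and the toy centerlines below are all assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def centerline_distance(pred_pts, gt_pts):
    """Mean distance (mm) from each predicted centerline point to the
    nearest ground-truth centerline point (one-sided variant)."""
    d, _ = cKDTree(gt_pts).query(pred_pts)
    return d.mean()

def fraction_within(pred_pts, gt_pts, radius_mm):
    """Fraction of predicted centerline points lying within radius_mm
    of the ground-truth centerline."""
    d, _ = cKDTree(gt_pts).query(pred_pts)
    return (d <= radius_mm).mean()

# Toy check: a predicted centerline shifted 1 mm laterally.
z = np.arange(50.0)
gt = np.stack([np.zeros_like(z), np.zeros_like(z), z], axis=1)
pred = gt + np.array([1.0, 0.0, 0.0])
print(centerline_distance(pred, gt))   # 1.0
print(fraction_within(pred, gt, 3.5))  # 1.0
```

With real data the centerlines would first be extracted from the predicted and reference masks (e.g. by skeletonization), which is outside this sketch.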
Affiliation(s)
- Lucía Cubero: Departamento de Bioingeniería, Universidad Carlos III de Madrid, Madrid, Spain; Université Rennes, CLCC Eugène Marquis, Inserm, LTSI - UMR 1099, F-35000 Rennes, France
- Laura García-Elcano: Departamento de Bioingeniería, Universidad Carlos III de Madrid, Madrid, Spain
- Adrien Boue-Rafle: Université Rennes, CLCC Eugène Marquis, Inserm, LTSI - UMR 1099, F-35000 Rennes, France
- Cesare Cozzarini: Department of Radiation Oncology, San Raffaele Scientific Institute - IRCCS, Milan, Italy
- Tiziana Rancati: Science Unit, Fondazione IRCCS Istituto Nazionale dei Tumori, Milan, Italy
- Claudio Fiorino: Department of Medical Physics, San Raffaele Scientific Institute - IRCCS, Milan, Italy
- Renaud de Crevoisier: Université Rennes, CLCC Eugène Marquis, Inserm, LTSI - UMR 1099, F-35000 Rennes, France
- Oscar Acosta: Université Rennes, CLCC Eugène Marquis, Inserm, LTSI - UMR 1099, F-35000 Rennes, France
- Javier Pascau (corresponding author): Departamento de Bioingeniería, Universidad Carlos III de Madrid, Madrid, Spain; Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain

12
Liang X, Morgan H, Bai T, Dohopolski M, Nguyen D, Jiang S. Deep learning based direct segmentation assisted by deformable image registration for cone-beam CT based auto-segmentation for adaptive radiotherapy. Phys Med Biol 2023; 68. [PMID: 36657169 DOI: 10.1088/1361-6560/acb4d7] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Received: 10/10/2022] [Accepted: 01/19/2023] [Indexed: 01/21/2023]
Abstract
Cone-beam CT (CBCT)-based online adaptive radiotherapy calls for accurate auto-segmentation to reduce the time cost for physicians. However, deep learning (DL)-based direct segmentation of CBCT images is a challenging task, mainly due to the poor image quality and the lack of large, well-labelled training datasets. Deformable image registration (DIR) is often used to propagate the manual contours on the planning CT (pCT) of the same patient to the CBCT. In this work, we address these problems with the assistance of DIR. Our method consists of three main components. First, we use deformed pCT contours derived from multiple DIR methods between pCT and CBCT as pseudo labels for initial training of the DL-based direct segmentation model. Second, we use deformed pCT contours from another DIR algorithm as influencer volumes to define the region of interest for DL-based direct segmentation. Third, the initially trained DL model is further fine-tuned using a smaller set of true labels. Nine patients were used for model evaluation. We found that DL-based direct segmentation on CBCT without influencer volumes performed much worse than DIR-based segmentation. However, adding deformed pCT contours as influencer volumes in the direct segmentation network dramatically improved segmentation performance, reaching the accuracy level of DIR-based segmentation. The DL model with influencer volumes was further improved through fine-tuning with a smaller set of true labels, achieving a mean Dice similarity coefficient of 0.86, a Hausdorff distance at the 95th percentile of 2.34 mm, and an average surface distance of 0.56 mm. A DL-based direct CBCT segmentation model can thus be improved to outperform DIR-based segmentation models by using deformed pCT contours as pseudo labels and influencer volumes for initial training, and a smaller set of true labels for model fine-tuning.
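The three metrics quoted in this abstract (Dice similarity coefficient, 95th-percentile Hausdorff distance, average surface distance) can be approximated on binary masks with distance transforms. A sketch under stated assumptions — surfaces are taken as the one-voxel boundary layer, voxel spacing is passed explicitly, and this is a common approximation rather than the authors' code:

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(a, b):
    """Dice similarity coefficient between two boolean masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def _surface(mask):
    # Boundary voxels: in the mask but not in its erosion.
    return mask & ~binary_erosion(mask)

def surface_distances(a, b, spacing=(1.0, 1.0, 1.0)):
    """Distances from a's surface voxels to b's surface, and vice versa."""
    sa, sb = _surface(a), _surface(b)
    d_to_b = distance_transform_edt(~sb, sampling=spacing)
    d_to_a = distance_transform_edt(~sa, sampling=spacing)
    return np.hstack([d_to_b[sa], d_to_a[sb]])

def hd95(a, b, spacing=(1.0, 1.0, 1.0)):
    """Symmetric Hausdorff distance at the 95th percentile."""
    return np.percentile(surface_distances(a, b, spacing), 95)

def asd(a, b, spacing=(1.0, 1.0, 1.0)):
    """Average (mean) surface distance."""
    return surface_distances(a, b, spacing).mean()

# Toy check: two 10-voxel cubes offset by 2 voxels along x.
a = np.zeros((24, 24, 24), bool); a[5:15, 5:15, 5:15] = True
b = np.zeros_like(a);             b[7:17, 5:15, 5:15] = True
print(round(dice(a, b), 3))  # 0.8
print(hd95(a, a))            # 0.0 for identical masks
```

The same helpers cover the DSC/HD95/ASD triples reported by several of the segmentation studies in this list.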
Affiliation(s)
- Xiao Liang, Howard Morgan, Ti Bai, Michael Dohopolski, Dan Nguyen, and Steve Jiang: Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America

13
Hirashima H, Nakamura M, Imanishi K, Nakao M, Mizowaki T. Evaluation of generalization ability for deep learning-based auto-segmentation accuracy in limited field of view CBCT of male pelvic region. J Appl Clin Med Phys 2023; 24:e13912. [PMID: 36659871 PMCID: PMC10161011 DOI: 10.1002/acm2.13912] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Received: 09/08/2022] [Revised: 01/09/2023] [Accepted: 01/10/2023] [Indexed: 01/21/2023]
Abstract
PURPOSE The aim of this study was to evaluate the generalization ability of segmentation accuracy for limited field-of-view (FOV) CBCT in the male pelvic region using a full-image CNN. Auto-segmentation accuracy was evaluated using various datasets with different intensity distributions and FOV sizes. METHODS A total of 171 CBCT datasets from patients with prostate cancer were enrolled. There were 151, 10, and 10 CBCT datasets acquired from Vero4DRT, TrueBeam STx, and Clinac-iX, respectively. The FOV for Vero4DRT, TrueBeam STx, and Clinac-iX was 20, 26, and 25 cm, respectively. The ROIs, including the bladder, prostate, rectum, and seminal vesicles, were manually delineated. The U2-Net CNN architecture was used to train the segmentation model. A total of 131 limited-FOV CBCT datasets from Vero4DRT were used for training (104 datasets) and validation (27 datasets); the remaining datasets were used for testing. The training routine was set to save the best weight values when the Dice similarity coefficient (DSC) in the validation set was maximized. Segmentation accuracy was qualitatively and quantitatively evaluated between the ground-truth and predicted ROIs in the different testing datasets. RESULTS The mean visual evaluation scores ± standard deviation for the bladder, prostate, rectum, and seminal vesicles across all treatment machines were 1.0 ± 0.7, 1.5 ± 0.6, 1.4 ± 0.6, and 2.1 ± 0.8 points, respectively. The median DSC values for all imaging devices were ≥0.94 for the bladder, 0.84-0.87 for the prostate and rectum, and 0.48-0.69 for the seminal vesicles. Although the DSC values for the bladder and seminal vesicles were significantly different among the three imaging devices, the DSC value of the bladder changed by less than 1 percentage point. The median mean surface distance (MSD) values for all imaging devices were ≤1.2 mm for the bladder and 1.4-2.2 mm for the prostate, rectum, and seminal vesicles. The MSD values for the seminal vesicles were significantly different among the three imaging devices. CONCLUSION The proposed method is effective for testing datasets with intensity distributions and FOVs different from those of the training datasets.
Affiliation(s)
- Hideaki Hirashima: Department of Radiation Oncology and Image-Applied Therapy, Graduate School of Medicine, Kyoto University, Sakyo-ku, Kyoto, Japan
- Mitsuhiro Nakamura: Department of Radiation Oncology and Image-Applied Therapy, Graduate School of Medicine, Kyoto University, Sakyo-ku, Kyoto, Japan; Department of Advanced Medical Physics, Graduate School of Medicine, Kyoto University, Sakyo-ku, Kyoto, Japan
- Megumi Nakao: Department of Advanced Medical Engineering and Intelligence, Graduate School of Medicine, Kyoto University, Sakyo-ku, Kyoto, Japan
- Takashi Mizowaki: Department of Radiation Oncology and Image-Applied Therapy, Graduate School of Medicine, Kyoto University, Sakyo-ku, Kyoto, Japan

14
Anatomical evaluation of deep-learning synthetic computed tomography images generated from male pelvis cone-beam computed tomography. Phys Imaging Radiat Oncol 2023; 25:100416. [PMID: 36969503 PMCID: PMC10037090 DOI: 10.1016/j.phro.2023.100416] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Received: 07/20/2022] [Revised: 01/17/2023] [Accepted: 01/18/2023] [Indexed: 01/25/2023]
Abstract
Background and purpose To improve cone-beam computed tomography (CBCT), deep learning (DL) models are being explored to generate synthetic CTs (sCT). sCT evaluation is mainly focused on image quality and CT number accuracy. However, correct representation of the daily anatomy in the CBCT is also important for sCTs in adaptive radiotherapy. The aim of this study was to emphasize the importance of anatomical correctness by quantitatively assessing sCT scans generated from CBCT scans using different paired and unpaired DL models. Materials and methods Planning CTs (pCT) and CBCTs of 56 prostate cancer patients were included to generate sCTs. Three different DL models, Dual-UNet, Single-UNet, and Cycle-consistent Generative Adversarial Network (CycleGAN), were evaluated on image quality and anatomical correctness. The image quality was assessed using image metrics, such as the mean absolute error (MAE). The anatomical correctness between sCT and CBCT was quantified using organ-at-risk volumes and average surface distances (ASD). Results The MAE was 24 Hounsfield units (HU) [range: 19-30 HU] for Dual-UNet, 40 HU [range: 34-56 HU] for Single-UNet, and 41 HU [range: 37-46 HU] for CycleGAN. Bladder ASD was 4.5 mm [range: 1.6-12.3 mm] for Dual-UNet, 0.7 mm [range: 0.4-1.2 mm] for Single-UNet, and 0.9 mm [range: 0.4-1.1 mm] for CycleGAN. Conclusions Although Dual-UNet performed best in standard image quality measures, such as MAE, the contour-based anatomical feature comparison with the CBCT showed that Dual-UNet performed worst on anatomical comparison. This emphasizes the importance of adding anatomy-based evaluation of sCTs generated by DL models. For applications in the pelvic area, direct anatomical comparison with the CBCT may provide a useful method to assess the clinical applicability of DL-based sCT generation methods.
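The MAE figures in this abstract compare sCT and reference voxel intensities in Hounsfield units. A minimal sketch; the evaluation mask (e.g. a body outline) and the toy numbers are illustrative assumptions, not details from the paper:

```python
import numpy as np

def mae_hu(sct, ref_ct, mask):
    """Mean absolute error in Hounsfield units (HU), restricted to a
    boolean mask such as the patient body outline."""
    return np.abs(sct[mask] - ref_ct[mask]).mean()

# Toy check: an sCT that is uniformly 30 HU off inside the body.
ref = np.zeros((8, 8, 8))
body = np.zeros(ref.shape, bool)
body[2:6, 2:6, 2:6] = True
sct = ref + 30.0 * body
print(mae_hu(sct, ref, body))  # 30.0
```

As the abstract argues, a low MAE alone does not guarantee that the sCT reproduces the daily anatomy, which is why the contour-based ASD comparison is computed separately.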
15
Abbani N, Baudier T, Rit S, Franco FD, Okoli F, Jaouen V, Tilquin F, Barateau A, Simon A, de Crevoisier R, Bert J, Sarrut D. Deep learning-based segmentation in prostate radiation therapy using Monte Carlo simulated cone-beam computed tomography. Med Phys 2022; 49:6930-6944. [PMID: 36000762 DOI: 10.1002/mp.15946] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 02/07/2022] [Revised: 07/28/2022] [Accepted: 08/05/2022] [Indexed: 12/13/2022]
Abstract
PURPOSE Segmenting organs in cone-beam CT (CBCT) images would allow the radiotherapy to be adapted to the organ deformations that may occur between treatment fractions. However, this is a difficult task because of the relative lack of contrast in CBCT images, leading to high inter-observer variability. Deformable image registration (DIR) and deep learning-based automatic segmentation approaches have shown interesting results for this task in past years. However, they are either sensitive to large organ deformations, or require training a convolutional neural network (CNN) from a database of delineated CBCT images, which is difficult to do without improvements in image quality. In this work, we propose an alternative approach: to train a CNN (using the deep learning-based segmentation tool nnU-Net) from a database of artificial CBCT images simulated from planning CT, for which it is easier to obtain the organ contours. METHODS Pseudo-CBCT (pCBCT) images were simulated from readily available segmented planning CT images, using the GATE Monte Carlo simulation toolkit. CT reference delineations were copied onto the pCBCT, resulting in a database of segmented images used to train the neural network. The studied contours were the bladder, rectum, and prostate. We trained multiple nnU-Net models using different training sets: (1) segmented real CBCT, (2) pCBCT, (3) segmented real CT, tested on pseudo-CT (pCT) generated from CBCT with CycleGAN, and (4) a combination of (2) and (3). The evaluation was performed on different datasets of segmented CBCT or pCT by comparing predicted segmentations with reference ones using the Dice similarity coefficient and the Hausdorff distance. A qualitative evaluation was also performed to compare DIR-based and nnU-Net-based segmentations. RESULTS Training with pCBCT was found to lead to results comparable to using real CBCT images. When evaluated on CBCT obtained from the same hospital as the CT images used in the simulation of the pCBCT, the model trained with pCBCT scored mean DSCs of 0.92 ± 0.05, 0.87 ± 0.02, and 0.85 ± 0.04 and mean Hausdorff distances of 4.67 ± 3.01, 3.91 ± 0.98, and 5.00 ± 1.32 mm for the bladder, rectum, and prostate contours, respectively, while the model trained with real CBCT scored mean DSCs of 0.91 ± 0.06, 0.83 ± 0.07, and 0.81 ± 0.05 and mean Hausdorff distances of 5.62 ± 3.24, 6.43 ± 5.11, and 6.19 ± 1.14 mm, respectively. It also outperformed models using pCT or a combination of both, except for the prostate contour when tested on a dataset from a different hospital. Moreover, the resulting segmentations demonstrated clinical acceptability: 78% of bladder segmentations, 98% of rectum segmentations, and 93% of prostate segmentations required minor or no corrections, and for 76% of the patients, all structures required minor or no corrections. CONCLUSION We proposed to use simulated CBCT images to train a nnU-Net segmentation model, avoiding the need to gather complex and time-consuming reference delineations on CBCT images.
Affiliation(s)
- Nelly Abbani: Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, Inserm, Lyon, France
- Thomas Baudier: Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, Inserm, Lyon, France
- Simon Rit: Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, Inserm, Lyon, France
- Francesca di Franco: Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, Inserm, Lyon, France
- Franklin Okoli: LaTIM, Université de Bretagne Occidentale, Inserm, Brest, France
- Vincent Jaouen: LaTIM, Université de Bretagne Occidentale, Inserm, Brest, France
- Anaïs Barateau: Univ Rennes, CLCC Eugène Marquis, Inserm, Rennes, France
- Antoine Simon: Univ Rennes, CLCC Eugène Marquis, Inserm, Rennes, France
- Julien Bert: LaTIM, Université de Bretagne Occidentale, Inserm, Brest, France
- David Sarrut: Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, Inserm, Lyon, France

16
Li G, Wu X, Ma X. Artificial intelligence in radiotherapy. Semin Cancer Biol 2022; 86:160-171. [PMID: 35998809 DOI: 10.1016/j.semcancer.2022.08.005] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Received: 07/15/2022] [Accepted: 08/18/2022] [Indexed: 11/19/2022]
Abstract
Radiotherapy is a discipline closely integrated with computer science. Artificial intelligence (AI) has developed rapidly over the past few years. With the explosive growth of medical big data, AI promises to revolutionize the field of radiotherapy through highly automated workflows, enhanced quality assurance, improved regional balance of expert experience, and individualized treatment guided by multi-omics. In addition to independent researchers, the increasing number of large databases, biobanks, and open challenges has significantly facilitated AI studies in radiation oncology. This article reviews the latest research, clinical applications, and challenges of AI in each part of radiotherapy, including image processing, contouring, planning, quality assurance, motion management, and outcome prediction. By summarizing cutting-edge findings and challenges, we aim to inspire researchers to explore more future possibilities and accelerate the arrival of AI radiotherapy.
Affiliation(s)
- Guangqi Li: Division of Biotherapy, Cancer Center, West China Hospital and State Key Laboratory of Biotherapy, Sichuan University, No. 37 GuoXue Alley, Chengdu 610041, China
- Xin Wu: Head & Neck Oncology Ward, Division of Radiotherapy Oncology, Cancer Center, West China Hospital, Sichuan University, No. 37 GuoXue Alley, Chengdu 610041, China
- Xuelei Ma: Division of Biotherapy, Cancer Center, West China Hospital and State Key Laboratory of Biotherapy, Sichuan University, No. 37 GuoXue Alley, Chengdu 610041, China

17
Jin L, Chen Q, Shi A, Wang X, Ren R, Zheng A, Song P, Zhang Y, Wang N, Wang C, Wang N, Cheng X, Wang S, Ge H. Deep Learning for Automated Contouring of Gross Tumor Volumes in Esophageal Cancer. Front Oncol 2022; 12:892171. [PMID: 35924169 PMCID: PMC9339638 DOI: 10.3389/fonc.2022.892171] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/08/2022] [Accepted: 06/21/2022] [Indexed: 12/03/2022]
Abstract
Purpose The aim of this study was to propose and evaluate a novel mixed three-dimensional (3D) V-Net and two-dimensional (2D) U-Net architecture (VUMix-Net) for fully automatic and accurate delineation of gross tumor volume (GTV) contours in esophageal cancer (EC). Methods We collected the computed tomography (CT) scans of 215 EC patients. 3D V-Net, 2D U-Net, and VUMix-Net were developed and applied simultaneously to delineate GTVs. The Dice similarity coefficient (DSC) and 95th-percentile Hausdorff distance (95HD) were used as quantitative metrics to evaluate the performance of the three models on ECs from different segments. The CT data of 20 patients were randomly selected as the ground-truth (GT) masks, and the corresponding delineation results were generated by artificial intelligence (AI). Score differences between the two groups (GT versus AI) and the evaluation consistency were compared. Results In all patients, there was a significant difference in the 2D DSCs from U-Net, V-Net, and VUMix-Net (p=0.01). In addition, VUMix-Net achieved better 3D-DSC and 95HD values. There was a significant difference among the 3D-DSC (mean ± SD) and 95HD values for upper-, middle-, and lower-segment EC (p<0.001), and the values for middle-segment EC were the best. In middle-segment EC, VUMix-Net achieved the highest 2D-DSC values (p<0.001) and the lowest 95HD values (p=0.044). Conclusion The new model (VUMix-Net) showed certain advantages in delineating the GTVs of EC. Additionally, it can generate GTVs of EC that meet clinical requirements and have the same quality as human-generated contours. The system demonstrated the best performance for ECs of the middle segment.
Affiliation(s)
- Linzhi Jin: Department of Radiation Oncology, The Affiliated Cancer Hospital of Zhengzhou University & Henan Cancer Hospital, Zhengzhou, China; Department of Radiation Oncology, Anyang Tumor Hospital, The Fourth Affiliated Hospital of Henan University of Science and Technology, Anyang, China
- Qi Chen: Department of Research and Development, MedMind Technology Co., Ltd., Beijing, China
- Aiwei Shi: Department of Research and Development, MedMind Technology Co., Ltd., Beijing, China
- Xiaomin Wang: Department of Radiation Oncology, Anyang Tumor Hospital, The Fourth Affiliated Hospital of Henan University of Science and Technology, Anyang, China
- Runchuan Ren: Department of Radiation Oncology, Anyang Tumor Hospital, The Fourth Affiliated Hospital of Henan University of Science and Technology, Anyang, China
- Anping Zheng: Department of Radiation Oncology, Anyang Tumor Hospital, The Fourth Affiliated Hospital of Henan University of Science and Technology, Anyang, China
- Ping Song: Department of Radiation Oncology, Anyang Tumor Hospital, The Fourth Affiliated Hospital of Henan University of Science and Technology, Anyang, China
- Yaowen Zhang: Department of Radiation Oncology, Anyang Tumor Hospital, The Fourth Affiliated Hospital of Henan University of Science and Technology, Anyang, China
- Nan Wang: Department of Radiation Oncology, The Affiliated Cancer Hospital of Zhengzhou University & Henan Cancer Hospital, Zhengzhou, China
- Chenyu Wang: Department of Radiation Oncology, Anyang Tumor Hospital, The Fourth Affiliated Hospital of Henan University of Science and Technology, Anyang, China
- Nengchao Wang: Department of Radiation Oncology, Anyang Tumor Hospital, The Fourth Affiliated Hospital of Henan University of Science and Technology, Anyang, China
- Xinyu Cheng: Department of Radiation Oncology, Anyang Tumor Hospital, The Fourth Affiliated Hospital of Henan University of Science and Technology, Anyang, China
- Shaobin Wang: Department of Research and Development, MedMind Technology Co., Ltd., Beijing, China
- Hong Ge (corresponding author): Department of Radiation Oncology, The Affiliated Cancer Hospital of Zhengzhou University & Henan Cancer Hospital, Zhengzhou, China

18
Wang J, Zhu Q, Zhang S, Wen L, Wang L. Observation of Clinical Efficacy of Anisodamine and Chlorpromazine in the Treatment of Intractable Hiccup after Stroke. Biomed Res Int 2022; 2022:6563193. [PMID: 35915796 PMCID: PMC9338746 DOI: 10.1155/2022/6563193] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Received: 03/11/2022] [Revised: 04/21/2022] [Accepted: 04/22/2022] [Indexed: 11/18/2022]
Abstract
Objective This study aimed to investigate the clinical efficacy of anisodamine combined with chlorpromazine for intractable hiccups after stroke. Methods 150 patients admitted to the Affiliated Hospital of Hebei University of Engineering from 2017 to 2021 were enrolled, all of whom underwent computed tomography (CT) examination. During the CT examination, intelligent algorithms were used to segment the images: an unsupervised multilayer image threshold segmentation algorithm was proposed using Kullback-Leibler (K-L) divergence and a modified particle swarm optimization (MPSO) algorithm. The patients were divided into three groups of 50 patients each. Patients in the control group (group A) took calcium, vitamin C, and vitamin B1 tablets orally. Patients in group B received an acupoint injection of anisodamine, and those in the observation group (group C) received an acupoint injection of anisodamine combined with chlorpromazine. The therapeutic effect and patient satisfaction of the three groups were compared. Results The two-dimensional (2D) K-L divergence was applied for the multilayer segmentation of images, which helped to obtain accurate images, and the MPSO algorithm was adopted to reduce the computational complexity. The total efficiency of group C was 98%, that of group B was 56%, and that of group A was 22%. The total efficiency and satisfaction rate of group C were markedly better than those of groups A and B (P < 0.05). Conclusion The combination of 2D K-L divergence and the MPSO algorithm could improve the accuracy of multilayer image segmentation and CT imaging. Acupoint injection of anisodamine combined with chlorpromazine was more effective than injection of anisodamine alone for the treatment of intractable hiccups after stroke, with high safety and clinical promotion value.
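The study pairs a 2D K-L divergence criterion with an MPSO optimizer. As a simplified illustration only, the sketch below uses the related 1D minimum cross-entropy (K-L based) threshold criterion and replaces the particle swarm optimizer with exhaustive search; both simplifications are assumptions, not the authors' method.

```python
import numpy as np

def kl_threshold(image, bins=256):
    """Pick the threshold minimizing a minimum cross-entropy (K-L based)
    criterion over a 1D intensity histogram. Exhaustive search over bin
    edges stands in for the MPSO optimizer of the study."""
    hist, edges = np.histogram(image, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_t, best_cost = centers[0], np.inf
    for k in range(1, bins):
        h0, c0 = hist[:k], centers[:k]
        h1, c1 = hist[k:], centers[k:]
        if h0.sum() == 0 or h1.sum() == 0:
            continue
        m0 = (h0 * c0).sum() / h0.sum()   # class mean below the split
        m1 = (h1 * c1).sum() / h1.sum()   # class mean above the split
        if m0 <= 0 or m1 <= 0:
            continue  # log undefined for non-positive class means
        cost = -(h0 * c0).sum() * np.log(m0) - (h1 * c1).sum() * np.log(m1)
        if cost < best_cost:
            best_cost, best_t = cost, centers[k]
    return best_t

# Toy check: two well-separated intensity populations.
img = np.concatenate([np.full(500, 10.0), np.full(500, 200.0)])
t = kl_threshold(img)
print(10.0 < t < 200.0)  # True: the threshold separates the populations
```

A multilayer (multi-threshold) version would minimize the same criterion over several thresholds at once, which is where a swarm optimizer such as MPSO pays off over exhaustive search.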
Affiliation(s)
- Jing Wang, Qinghua Zhu, Shuyan Zhang, Lisha Wen, and Li Wang: Department of Neurology, Affiliated Hospital of Hebei University of Engineering, Handan, 056002 Hebei, China

19
Shirokikh B, Dalechina A, Shevtsov A, Krivov E, Kostjuchenko V, Durgaryan A, Galkin M, Golanov A, Belyaev M. Systematic Clinical Evaluation of a Deep Learning Method for Medical Image Segmentation: Radiosurgery Application. IEEE J Biomed Health Inform 2022; 26:3037-3046. [PMID: 35213318 DOI: 10.1109/jbhi.2022.3153394] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/05/2022]
Abstract
We systematically evaluate a deep learning (DL) model in a 3D medical image segmentation task. With our model, we address the flaws of manual segmentation: high inter-rater contouring variability and the time consumption of the contouring process. The main extension over existing evaluations is the careful and detailed analysis, which could be further generalized to other medical image segmentation tasks. First, we analyze the changes in the inter-rater detection agreement and show that the model reduces the number of detection disagreements by 48% (p < 0.05). Second, we show that the model improves the inter-rater contouring agreement from 0.845 to 0.871 surface Dice score (p < 0.05). Third, we show that the model accelerates the delineation process by a factor of 1.6 to 2.0 (p < 0.05). Finally, we design the setup of the clinical experiment to either exclude or estimate the evaluation biases, thus preserving the significance of the results. Besides the clinical evaluation, we also share intuitions and practical ideas for building an efficient DL-based model for 3D medical image segmentation.
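The surface Dice score used in this abstract measures the fraction of each contour's surface lying within a tolerance of the other contour's surface. A voxel-count sketch, not the authors' implementation: taking surfaces as the one-voxel boundary layer and counting voxels rather than surface elements are approximating assumptions.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def surface_dice(a, b, tol_mm, spacing=(1.0, 1.0, 1.0)):
    """Surface Dice at tolerance tol_mm between two boolean masks
    (voxel-count approximation of the surface-element definition)."""
    sa = a & ~binary_erosion(a)   # surface voxels of a
    sb = b & ~binary_erosion(b)   # surface voxels of b
    d_to_a = distance_transform_edt(~sa, sampling=spacing)
    d_to_b = distance_transform_edt(~sb, sampling=spacing)
    agree = (d_to_b[sa] <= tol_mm).sum() + (d_to_a[sb] <= tol_mm).sum()
    return agree / (sa.sum() + sb.sum())

# Toy check: identical masks agree perfectly at any tolerance.
m = np.zeros((16, 16, 16), bool)
m[4:12, 4:12, 4:12] = True
print(surface_dice(m, m, tol_mm=1.0))  # 1.0
```

Unlike the volumetric Dice, this metric rewards contours whose boundaries fall within a clinically tolerable margin, which is why it suits inter-rater agreement studies like this one.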
20
Adamson PM, Bhattbhatt V, Principi S, Beriwal S, Strain LS, Offe M, Wang AS, Vo N, Schmidt TG, Jordan P. Technical note: Evaluation of a V-Net autosegmentation algorithm for pediatric CT scans: Performance, generalizability and application to patient-specific CT dosimetry. Med Phys 2022; 49:2342-2354. [PMID: 35128672 PMCID: PMC9007850 DOI: 10.1002/mp.15521] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Received: 08/28/2021] [Revised: 12/23/2021] [Accepted: 01/08/2022] [Indexed: 11/09/2022]
Abstract
PURPOSE This study developed and evaluated a fully convolutional network (FCN) for pediatric CT organ segmentation and investigated the generalizability of the FCN across image heterogeneities such as CT scanner models, protocols, and patient age. We also evaluated the autosegmentation models as part of a software tool for patient-specific CT dose estimation. METHODS A collection of 359 pediatric CT datasets with expert organ contours was used for model development and evaluation. Autosegmentation models were trained for each organ using a modified 3D V-Net FCN. An independent test set of 60 patients was withheld for testing. To evaluate the impact of heterogeneities in CT scanner model/protocol and patient age, separate models were trained using subsets of scanner model/protocol combinations and pediatric age groups. Training and test sets were split to answer questions about the generalizability of pediatric FCN autosegmentation models to unseen age groups and scanner models/protocols, as well as the merit of scanner-, protocol- or age-group-specific models. Finally, the organ contours resulting from the autosegmentation models were applied to patient-specific dose maps to evaluate the impact of segmentation errors on organ dose estimation. RESULTS The results demonstrate that the autosegmentation models generalize to CT acquisition and reconstruction methods that were not present in the training dataset. While the models are not equally generalizable across age groups, age-group-specific models hold no advantage over combining heterogeneous age groups into a single training set. Dice similarity coefficient (DSC) and mean surface distance results are presented for 19 organ structures, for example, median DSC of 0.52 (duodenum), 0.74 (pancreas), 0.92 (stomach), and 0.96 (heart). The FCN models achieve a mean dose error within 5% of expert segmentations for all 19 organs except the spinal canal, where the mean error was 6.31%.
CONCLUSIONS Overall, these results are promising for the adoption of FCN autosegmentation models for pediatric CT, including applications in patient-specific CT dose estimation.
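The per-organ Dice similarity coefficient reported above is plain volumetric overlap. A minimal sketch (the function name and the empty-mask convention are illustrative assumptions):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Volumetric Dice: 2 * |A intersect B| / (|A| + |B|)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    total = a.sum() + b.sum()
    if total == 0:  # both masks empty: treat as perfect agreement
        return 1.0
    return 2.0 * (a & b).sum() / total
```

Two half-plane masks of 8 voxels each sharing a 4-voxel quadrant give 2 * 4 / (8 + 8) = 0.5.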
Affiliation(s)
- Sara Principi
- Department of Biomedical Engineering, Marquette University and Medical College of Wisconsin, Milwaukee, WI 53201, United States
- Linda S. Strain
- Department of Radiology, Children's Wisconsin and Medical College of Wisconsin, Milwaukee, WI 53226, United States
- Michael Offe
- Department of Biomedical Engineering, Marquette University and Medical College of Wisconsin, Milwaukee, WI 53201, United States
- Adam S. Wang
- Department of Radiology, Stanford University, Stanford, CA 94305, United States
- Nghia-Jack Vo
- Department of Radiology, Children's Wisconsin and Medical College of Wisconsin, Milwaukee, WI 53226, United States
- Taly Gilat Schmidt
- Department of Biomedical Engineering, Marquette University and Medical College of Wisconsin, Milwaukee, WI 53201, United States
- Petr Jordan
- Varian Medical Systems, Palo Alto, CA 94304, United States
21
Jordan P, Adamson PM, Bhattbhatt V, Beriwal S, Shen S, Radermecker O, Bose S, Strain LS, Offe M, Fraley D, Principi S, Ye DH, Wang AS, Van Heteren J, Vo NJ, Schmidt TG. Pediatric chest-abdomen-pelvis and abdomen-pelvis CT images with expert organ contours. Med Phys 2022; 49:3523-3528. [PMID: 35067940] [PMCID: PMC9090951] [DOI: 10.1002/mp.15485]
Abstract
PURPOSE Organ autosegmentation efforts to date have largely been focused on adult populations, due to the limited availability of pediatric training data. Pediatric patients may present additional challenges for organ segmentation. This paper describes a dataset of 359 pediatric chest-abdomen-pelvis and abdomen-pelvis CT images with expert contours of up to 29 anatomical organ structures to aid in the evaluation and development of autosegmentation algorithms for pediatric CT imaging. ACQUISITION AND VALIDATION METHODS The dataset collection consists of axial CT images in DICOM format of 180 male and 179 female pediatric chest-abdomen-pelvis or abdomen-pelvis exams acquired from one of three CT scanners at Children's Wisconsin. The datasets represent random pediatric cases based upon routine clinical indications. Subjects ranged in age from 5 days to 16 years, with a mean age of seven years. The CT acquisition, contrast, and reconstruction protocols varied across the scanner models and patients, with specifications available in the DICOM headers. Expert contours were manually labeled for up to 29 organ structures per subject. Not all contours are available for all subjects, due to limited field of view or unreliable contouring caused by high image noise. DATA FORMAT AND USAGE NOTES The data are available on TCIA (https://www.cancerimagingarchive.net/) under the collection Pediatric-CT-SEG. The axial CT image slices for each subject are available in DICOM format. The expert contours are stored in a single DICOM RTSTRUCT file for each subject. The contours are named as listed in Table 2. POTENTIAL APPLICATIONS This dataset will enable the evaluation and development of organ autosegmentation algorithms for pediatric populations, which exhibit variations in organ shape and size across age. Automated organ segmentation from CT images has numerous applications, including radiation therapy, diagnostic tasks, surgical planning, and patient-specific organ dose estimation.
Affiliation(s)
- Michael Offe
- Department of Biomedical Engineering, Medical College of Wisconsin and Marquette University, Milwaukee, WI
- David Fraley
- Department of Biomedical Engineering, Medical College of Wisconsin and Marquette University, Milwaukee, WI
- Sara Principi
- Department of Biomedical Engineering, Medical College of Wisconsin and Marquette University, Milwaukee, WI
- Dong Hye Ye
- Department of Electrical Engineering, Marquette University, Milwaukee, WI
- Adam S Wang
- Department of Radiology, Stanford University, Stanford, CA
- Nghia-Jack Vo
- Department of Radiology, Medical College of Wisconsin, Milwaukee, WI
- Taly Gilat Schmidt
- Department of Biomedical Engineering, Medical College of Wisconsin and Marquette University, Milwaukee, WI
22
Casati M, Piffer S, Calusi S, Marrazzo L, Simontacchi G, Di Cataldo V, Greto D, Desideri I, Vernaleone M, Francolini G, Livi L, Pallotta S. Clinical validation of an automatic atlas-based segmentation tool for male pelvis CT images. J Appl Clin Med Phys 2022; 23:e13507. [PMID: 35064746] [PMCID: PMC8906199] [DOI: 10.1002/acm2.13507]
Abstract
Purpose This retrospective work evaluates the impact of introducing a pelvis computed tomography (CT) auto-segmentation tool into the radiotherapy planning workflow on intra- and inter-observer variability, contouring time, and contour accuracy. Methods Tests were carried out on five structures (bladder, rectum, pelvic lymph nodes, and femoral heads) of six previously treated subjects, enrolling five radiation oncologists (ROs) to manually re-contour and to edit auto-contours generated with a male pelvis CT atlas created with the commercial software MIM MAESTRO. The ROs first delineated manual contours (M). They then modified the auto-contours, producing automatic-modified (AM) contours. The procedure was repeated to evaluate intra-observer variability, producing the M1, M2, AM1, and AM2 contour sets (each comprising 5 structures × 6 test patients × 5 ROs = 150 contours), for a total of 600 contours. Potential time savings were evaluated by comparing contouring and editing times. Structure contours were compared to a reference standard by means of the Dice similarity coefficient (DSC) and mean distance to agreement (MDA) to assess intra- and inter-observer variability. To exclude any automation bias, the ROs rated both the M and AM sets as "clinically acceptable" or "to be corrected" in a blind test. Results Comparing the AM to the M sets, a significant reduction of both inter-observer variability (p < 0.001) and contouring time (-45% for the whole pelvis, p < 0.001) was obtained. The reduction in intra-observer variability was significant only for the bladder and femoral heads (p < 0.001). The statistical test showed no significant bias. Conclusion Our atlas-based workflow proved effective for clinical practice, as it can improve contour reproducibility and generate time savings. Based on these findings, institutions are encouraged to implement their own auto-segmentation methods.
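The mean distance to agreement (MDA) used for this comparison averages, over the contour surfaces, the distance to the nearest point of the other surface. A voxel-based sketch (the symmetric averaging and the isotropic default spacing are assumptions; clinical tools may instead average over slice-wise contour points):

```python
import numpy as np
from scipy import ndimage

def mean_distance_to_agreement(mask_a, mask_b, spacing=(1.0, 1.0, 1.0)):
    """Symmetric mean surface-to-surface distance in mm."""
    def surface(m):
        m = m.astype(bool)
        return m & ~ndimage.binary_erosion(m)
    sa, sb = surface(mask_a), surface(mask_b)
    dist_to_a = ndimage.distance_transform_edt(~sa, sampling=spacing)
    dist_to_b = ndimage.distance_transform_edt(~sb, sampling=spacing)
    # Pool the two directed distance sets and average.
    all_d = np.concatenate([dist_to_b[sa], dist_to_a[sb]])
    return all_d.mean()
```

Identical contours give an MDA of exactly 0 mm; any surface displacement pushes it above zero.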
Affiliation(s)
- Marta Casati
- Medical Physics Unit, Careggi University Hospital, Florence, Italy
- Stefano Piffer
- Department of Experimental and Clinical Biomedical Sciences, University of Florence, Florence, Italy
- National Institute of Nuclear Physics (INFN), Florence, Italy
- Silvia Calusi
- Department of Experimental and Clinical Biomedical Sciences, University of Florence, Florence, Italy
- National Institute of Nuclear Physics (INFN), Florence, Italy
- Livia Marrazzo
- Medical Physics Unit, Careggi University Hospital, Florence, Italy
- Daniela Greto
- Radiation Oncology Unit, Careggi University Hospital, Florence, Italy
- Isacco Desideri
- Department of Experimental and Clinical Biomedical Sciences, University of Florence, Florence, Italy
- Marco Vernaleone
- Radiation Oncology Unit, Careggi University Hospital, Florence, Italy
- Lorenzo Livi
- Department of Experimental and Clinical Biomedical Sciences, University of Florence, Florence, Italy
- Radiation Oncology Unit, Careggi University Hospital, Florence, Italy
- Stefania Pallotta
- Medical Physics Unit, Careggi University Hospital, Florence, Italy
- Department of Experimental and Clinical Biomedical Sciences, University of Florence, Florence, Italy
23
Dai Z, Zhang Y, Zhu L, Tan J, Yang G, Zhang B, Cai C, Jin H, Meng H, Tan X, Jian W, Yang W, Wang X. Geometric and Dosimetric Evaluation of Deep Learning-Based Automatic Delineation on CBCT-Synthesized CT and Planning CT for Breast Cancer Adaptive Radiotherapy: A Multi-Institutional Study. Front Oncol 2021; 11:725507. [PMID: 34858813] [PMCID: PMC8630628] [DOI: 10.3389/fonc.2021.725507]
Abstract
Purpose We developed a deep learning model to achieve automatic multitarget delineation on planning CT (pCT) and synthetic CT (sCT) images generated from cone-beam CT (CBCT) images, and evaluated its geometric and dosimetric impact for breast cancer adaptive radiation therapy. Methods We retrospectively analyzed 1,127 patients treated with radiotherapy after breast-conserving surgery at two medical institutions. The CBCT images for patient setup, acquired under breath-hold guided by an optical surface monitoring system, were used to generate sCT with a generative adversarial network. Organs at risk (OARs), the clinical target volume (CTV), and the tumor bed (TB) were delineated automatically with a 3D U-Net model on pCT and sCT images. The geometric accuracy of the model was evaluated with metrics including the Dice similarity coefficient (DSC) and the 95% Hausdorff distance (HD95). Dosimetric evaluation was performed by quick dose recalculation on the sCT images, using gamma analysis and dose-volume histogram (DVH) parameters. The relationship between ΔD95, ΔV95 and DSC-CTV was assessed to quantify the clinical impact of geometric changes of the CTV. Results The ranges of DSC and HD95 were 0.73–0.97 and 2.22–9.36 mm for pCT and 0.63–0.95 and 2.30–19.57 mm for sCT from institution A, and 0.70–0.97 and 2.10–11.43 mm for pCT from institution B. The quality of the sCT was excellent, with an average mean absolute error (MAE) of 71.58 ± 8.78 HU. The mean gamma pass rate (3%/3 mm criterion) was 91.46 ± 4.63%. A DSC-CTV down to 0.65 accounted for a variation of more than 6% in V95 and 3 Gy in D95; a DSC-CTV of 0.80 or above accounted for a variation of less than 4% in V95 and 2 Gy in D95. The mean ΔD90/ΔD95 of the CTV and TB were less than 2 Gy/4 Gy and 4 Gy/5 Gy, respectively, for all patients. The cardiac dose difference was larger in left-sided than in right-sided breast cancer cases. Conclusions Accurate multitarget delineation is achievable on pCT and sCT via deep learning. The results show that the dose distribution needs to be considered when evaluating the clinical impact of geometric variations during breast cancer radiotherapy.
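The gamma pass rate quoted above (3%/3 mm criterion) combines a dose-difference tolerance with a distance-to-agreement tolerance. A brute-force global-gamma sketch suitable for small grids (parameter names and the normalization to the reference maximum are assumptions; clinical gamma implementations add interpolation, dose thresholds, and faster search):

```python
import numpy as np

def gamma_pass_rate(ref_dose, eval_dose, spacing, dose_tol=0.03, dta_mm=3.0):
    """Fraction of reference points with gamma <= 1 (global 3%/3 mm by default)."""
    dose_crit = dose_tol * ref_dose.max()
    # Physical coordinates of every grid point.
    axes = [np.arange(n) * s for n, s in zip(ref_dose.shape, spacing)]
    coords = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, len(axes))
    eval_flat = eval_dose.ravel()
    passed = 0
    for point, dose in zip(coords, ref_dose.ravel()):
        dist_term = ((coords - point) ** 2).sum(axis=1) / dta_mm**2
        dose_term = (eval_flat - dose) ** 2 / dose_crit**2
        gamma = np.sqrt((dist_term + dose_term).min())
        passed += gamma <= 1.0
    return passed / coords.shape[0]
```

Identical dose grids pass at 100%; a uniform dose shift far beyond the criterion drives the pass rate toward zero.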
Affiliation(s)
- Zhenhui Dai
- Department of Radiation Therapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Yiwen Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Lin Zhu
- Department of Radiation Therapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Junwen Tan
- Department of Oncology, The Fourth Affiliated Hospital, Guangxi Medical University, Liuzhou, China
- Geng Yang
- Department of Radiation Therapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Bailin Zhang
- Department of Radiation Therapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Chunya Cai
- Department of Radiation Therapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Huaizhi Jin
- Department of Radiation Therapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Haoyu Meng
- Department of Radiation Therapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Xiang Tan
- Department of Radiation Therapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Wanwei Jian
- Department of Radiation Therapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Wei Yang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Xuetao Wang
- Department of Radiation Therapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
24
Deep Learning-Based Image Segmentation of Cone-Beam Computed Tomography Images for Oral Lesion Detection. J Healthc Eng 2021; 2021:4603475. [PMID: 34594482] [PMCID: PMC8478545] [DOI: 10.1155/2021/4603475]
Abstract
This paper studied the application of a deep learning (DL) algorithm to the segmentation of oral lesions in cone-beam computed tomography (CBCT) images. Ninety patients with oral lesions were enrolled and divided into blank, control, and experimental groups, whose images were processed by manual segmentation, a threshold segmentation algorithm, and a fully convolutional neural network (FCNN) DL algorithm, respectively. The effects of the different methods on the recognition and segmentation of oral lesions in CBCT images were then analyzed. The results showed no substantial difference in the number of patients with different types of oral lesions among the three groups (P > 0.05). The accuracy of lesion segmentation in the experimental group was as high as 98.3%, versus 78.4% in the blank group and 62.1% in the control group; the segmentation accuracy in the blank and control groups was considerably inferior to that in the experimental group (P < 0.05). The segmentation of the lesion and the lesion model in the experimental and control groups was evidently superior to that in the blank group (P < 0.05). In short, the segmentation accuracy of the FCNN DL method was better than that of the traditional manual and threshold segmentation methods. Applying DL segmentation to CBCT images of oral lesions can accurately identify and segment the lesions.
25
Eckl M, Sarria GR, Springer S, Willam M, Ruder AM, Steil V, Ehmann M, Wenz F, Fleckenstein J. Dosimetric benefits of daily treatment plan adaptation for prostate cancer stereotactic body radiotherapy. Radiat Oncol 2021; 16:145. [PMID: 34348765] [PMCID: PMC8335467] [DOI: 10.1186/s13014-021-01872-9]
Abstract
BACKGROUND Hypofractionation is increasingly being applied in radiotherapy for prostate cancer, requiring higher accuracy of daily treatment delivery than conventional image-guided radiotherapy (IGRT). Different adaptive radiotherapy (ART) strategies were evaluated with regard to their dosimetric benefits. METHODS Treatment plans for 32 patients were retrospectively generated and analyzed according to the PACE-C trial treatment scheme (40 Gy in 5 fractions). Using a previously trained cycle-generative adversarial network algorithm, synthetic CTs (sCT) were generated from five daily cone-beam CTs per patient. Dose calculation on the sCT was performed for four different adaptation approaches: IGRT without adaptation, adaptation via segment aperture morphing (SAM) with segment weight optimization (ART1) or additional segment shape optimization (ART2), and a full re-optimization (ART3). Dose distributions were evaluated with dose-volume parameters and a penalty score. RESULTS Compared to the IGRT approach, the ART1, ART2 and ART3 approaches substantially reduced V37Gy(bladder) and V36Gy(rectum) from means of 7.4 cm3 and 2.0 cm3 to (5.9 cm3, 6.1 cm3, 5.2 cm3) and (1.4 cm3, 1.4 cm3, 1.0 cm3), respectively. Plan adaptation required on average 2.6 min for the ART1 approach and yielded rectum doses that did not differ significantly from the ART2 approach. Accumulated over the whole patient collective, the penalty score showed that adaptation reduced dosimetric violations by 79.2%, 75.7% and 93.2%, respectively. CONCLUSION Treatment plan adaptation was demonstrated to adequately restore relevant dose criteria on a daily basis. While the SAM adaptation approaches realized dosimetric benefits by ensuring sufficient target coverage, a full re-optimization mainly improved OAR sparing, which helps guide the decision of when to apply which adaptation strategy.
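The dose-volume parameters used here (e.g. V37Gy of the bladder) read directly off the dose grid and a structure mask. A minimal sketch (the function names, the percentile-based D-metric, and the voxel-volume argument are illustrative assumptions):

```python
import numpy as np

def v_dose_cm3(dose_gy, organ_mask, threshold_gy, voxel_volume_cm3):
    """Absolute organ volume (cm3) receiving at least threshold_gy."""
    return ((dose_gy >= threshold_gy) & organ_mask).sum() * voxel_volume_cm3

def d_volume_gy(dose_gy, structure_mask, volume_pct):
    """Minimum dose (Gy) received by the hottest volume_pct % of the structure."""
    return np.percentile(dose_gy[structure_mask], 100.0 - volume_pct)
```

By construction D95 can never exceed D50: covering a larger fraction of the volume can only lower the guaranteed dose.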
Affiliation(s)
- Miriam Eckl
- Department of Radiation Oncology, University Medical Centre Mannheim, University of Heidelberg, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
- Gustavo R Sarria
- Department of Radiation Oncology, University Hospital Bonn, University of Bonn, Bonn, Germany
- Sandra Springer
- Department of Radiation Oncology, University Medical Centre Mannheim, University of Heidelberg, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
- Marvin Willam
- Department of Radiation Oncology, University Medical Centre Mannheim, University of Heidelberg, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
- Arne M Ruder
- Department of Radiation Oncology, University Medical Centre Mannheim, University of Heidelberg, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
- Volker Steil
- Department of Radiation Oncology, University Medical Centre Mannheim, University of Heidelberg, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
- Michael Ehmann
- Department of Radiation Oncology, University Medical Centre Mannheim, University of Heidelberg, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
- Frederik Wenz
- University Medical Center Freiburg, University of Freiburg, Freiburg im Breisgau, Germany
- Jens Fleckenstein
- Department of Radiation Oncology, University Medical Centre Mannheim, University of Heidelberg, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
26
Samarasinghe G, Jameson M, Vinod S, Field M, Dowling J, Sowmya A, Holloway L. Deep learning for segmentation in radiation therapy planning: a review. J Med Imaging Radiat Oncol 2021; 65:578-595. [PMID: 34313006] [DOI: 10.1111/1754-9485.13286]
Abstract
Segmentation of organs and structures, as either targets or organs-at-risk, has a significant influence on the success of radiation therapy. Manual segmentation is a tedious and time-consuming task for clinicians, and inter-observer variability can affect the outcomes of radiation therapy. The recent surge of interest in deep neural networks has produced many powerful auto-segmentation methods, most of them variations of convolutional neural networks (CNNs). This paper presents a descriptive review of the literature on deep learning techniques for segmentation in radiation therapy planning. The most common CNN architecture across the four clinical subsites considered was U-Net, and the majority of deep learning segmentation articles focussed on head and neck normal tissue structures. The most common datasets were in-house CT images, along with some public datasets. N-fold cross-validation was commonly employed; however, not all work separated training, test and validation datasets. This area of research is expanding rapidly. To facilitate comparison of proposed methods and benchmarking, consistent use of appropriate metrics and independent validation should be carefully considered.
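The review's caution about separating training, validation and test data can be made concrete with a simple deterministic patient-level split. A sketch (the 70/15/15 ratios and the seed are illustrative choices, not taken from the review):

```python
import random

def train_val_test_split(ids, val_frac=0.15, test_frac=0.15, seed=42):
    """Shuffle patient IDs once, then carve out disjoint val/test sets."""
    ids = list(ids)
    random.Random(seed).shuffle(ids)  # fixed seed -> reproducible split
    n_test = round(len(ids) * test_frac)
    n_val = round(len(ids) * val_frac)
    test = ids[:n_test]
    val = ids[n_test:n_test + n_val]
    train = ids[n_test + n_val:]
    return train, val, test
```

Splitting at the patient level rather than the slice level prevents leakage of near-identical images between sets.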
Affiliation(s)
- Gihan Samarasinghe
- School of Computer Science and Engineering, University of New South Wales, Sydney, New South Wales, Australia
- Ingham Institute for Applied Medical Research and South Western Sydney Clinical School, UNSW, Liverpool, New South Wales, Australia
- Michael Jameson
- GenesisCare, Sydney, New South Wales, Australia
- St Vincent's Clinical School, University of New South Wales, Sydney, New South Wales, Australia
- Shalini Vinod
- Ingham Institute for Applied Medical Research and South Western Sydney Clinical School, UNSW, Liverpool, New South Wales, Australia
- Liverpool Cancer Therapy Centre, Liverpool Hospital, Liverpool, New South Wales, Australia
- Matthew Field
- Ingham Institute for Applied Medical Research and South Western Sydney Clinical School, UNSW, Liverpool, New South Wales, Australia
- Liverpool Cancer Therapy Centre, Liverpool Hospital, Liverpool, New South Wales, Australia
- Jason Dowling
- Commonwealth Scientific and Industrial Research Organisation, Australian E-Health Research Centre, Herston, Queensland, Australia
- Arcot Sowmya
- School of Computer Science and Engineering, University of New South Wales, Sydney, New South Wales, Australia
- Lois Holloway
- Ingham Institute for Applied Medical Research and South Western Sydney Clinical School, UNSW, Liverpool, New South Wales, Australia
- Liverpool Cancer Therapy Centre, Liverpool Hospital, Liverpool, New South Wales, Australia
27
Hirashima H, Nakamura M, Baillehache P, Fujimoto Y, Nakagawa S, Saruya Y, Kabasawa T, Mizowaki T. Development of in-house fully residual deep convolutional neural network-based segmentation software for the male pelvic CT. Radiat Oncol 2021; 16:135. [PMID: 34294090] [PMCID: PMC8299691] [DOI: 10.1186/s13014-021-01867-6]
Abstract
BACKGROUND This study aimed to (1) develop fully residual deep convolutional neural network (CNN)-based segmentation software for computed tomography images of the male pelvic region and (2) demonstrate its efficiency in that region. METHODS A total of 470 prostate cancer patients who had undergone intensity-modulated radiotherapy or volumetric-modulated arc therapy were enrolled. Our model was based on FusionNet, a fully residual deep CNN developed to semantically segment biological images. To develop the CNN-based segmentation software, 450 patients were randomly selected and separated into training, validation and testing groups (270, 90, and 90 patients, respectively). In Experiment 1, to determine the optimal model, we first assessed segmentation accuracy as a function of training dataset size (90, 180, and 270 patients). In Experiment 2, the effect of varying the number of training labels on segmentation accuracy was evaluated. After determining the optimal model, in Experiment 3 the developed software was applied to the remaining 20 datasets to assess segmentation accuracy. The volumetric Dice similarity coefficient (DSC) and the 95th-percentile Hausdorff distance (95%HD) were calculated to evaluate the segmentation accuracy for each organ in Experiment 3. RESULTS In Experiment 1, the median DSC for the prostate was 0.61 for dataset 1 (90 patients), 0.86 for dataset 2 (180 patients), and 0.86 for dataset 3 (270 patients). The median DSCs for all organs increased significantly when the number of training cases increased from 90 to 180 but did not improve upon a further increase from 180 to 270. The number of labels applied during training had little effect on the DSCs in Experiment 2. The optimal model was trained with 270 patients and four organ labels.
In Experiment 3, the median DSC and 95%HD were 0.82 and 3.23 mm for the prostate, 0.71 and 3.82 mm for the seminal vesicles, 0.89 and 2.65 mm for the rectum, and 0.95 and 4.18 mm for the bladder. CONCLUSIONS We have developed CNN-based segmentation software for the male pelvic region and demonstrated that it performs efficiently there.
Affiliation(s)
- Hideaki Hirashima
- Department of Radiation Oncology and Image-Applied Therapy, Graduate School of Medicine, Kyoto University, 54 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto, 606-8507, Japan
- Mitsuhiro Nakamura
- Department of Radiation Oncology and Image-Applied Therapy, Graduate School of Medicine, Kyoto University, 54 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto, 606-8507, Japan
- Division of Medical Physics, Department of Information Technology and Medical Engineering, Human Health Sciences, Graduate School of Medicine, Kyoto University, 53 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto, 606-8507, Japan
- Pascal Baillehache
- Rist, Inc., Impact HUB Tokyo, 2-11-3 Meguro, Meguro-ku, Tokyo, 153-0063, Japan
- Yusuke Fujimoto
- Rist, Inc., Impact HUB Tokyo, 2-11-3 Meguro, Meguro-ku, Tokyo, 153-0063, Japan
- Shota Nakagawa
- Rist, Inc., Impact HUB Tokyo, 2-11-3 Meguro, Meguro-ku, Tokyo, 153-0063, Japan
- Yusuke Saruya
- Rist, Inc., Impact HUB Tokyo, 2-11-3 Meguro, Meguro-ku, Tokyo, 153-0063, Japan
- Tatsumasa Kabasawa
- Rist, Inc., Impact HUB Tokyo, 2-11-3 Meguro, Meguro-ku, Tokyo, 153-0063, Japan
- Takashi Mizowaki
- Department of Radiation Oncology and Image-Applied Therapy, Graduate School of Medicine, Kyoto University, 54 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto, 606-8507, Japan
28
Kiser KJ, Barman A, Stieb S, Fuller CD, Giancardo L. Novel Autosegmentation Spatial Similarity Metrics Capture the Time Required to Correct Segmentations Better Than Traditional Metrics in a Thoracic Cavity Segmentation Workflow. J Digit Imaging 2021; 34:541-553. [PMID: 34027588] [PMCID: PMC8329111] [DOI: 10.1007/s10278-021-00460-3]
Abstract
Automated segmentation templates can save clinicians time compared to de novo segmentation but may still take substantial time to review and correct. It has not been thoroughly investigated which similarity metrics between automated and corrected segmentations best predict clinician correction time. Bilateral thoracic cavity volumes in 329 CT scans were segmented by a U-Net-inspired deep learning segmentation tool and subsequently corrected by a fourth-year medical student. Eight spatial similarity metrics were calculated between the automated and corrected segmentations and associated with correction times using Spearman's rank correlation coefficients. Nine clinical variables were also associated with the metrics and correction times using Spearman's rank correlation coefficients or Mann–Whitney U tests. The added path length, false negative path length, and surface Dice similarity coefficient correlated better with correction time than traditional metrics, including the popular volumetric Dice similarity coefficient (respectively ρ = 0.69, ρ = 0.65, ρ = −0.48 versus ρ = −0.25; correlation p values < 0.001). Clinical variables poorly represented in the autosegmentation tool's training data were often associated with decreased accuracy but not necessarily with prolonged correction time. Metrics used to develop and evaluate autosegmentation tools should correlate with clinical time saved. To our knowledge, this is only the second investigation of which metrics correlate with time saved. Validation of our findings is indicated in other anatomic sites and clinical workflows. Novel spatial similarity metrics may be preferable to traditional metrics for developing and evaluating autosegmentation tools intended to save clinicians time.
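Added path length counts the boundary the clinician had to redraw, which is why it tracks correction time better than overlap metrics. A simplified voxel-based sketch (the original metric is defined slice-wise on contour pixels; treating any corrected-surface voxel absent from the automated surface as "added" is an assumption of this sketch):

```python
import numpy as np
from scipy import ndimage

def added_path_length(auto_mask, corrected_mask):
    """Number of corrected-surface voxels that the automated surface missed."""
    def surface(m):
        m = m.astype(bool)
        return m & ~ndimage.binary_erosion(m)
    return int((surface(corrected_mask) & ~surface(auto_mask)).sum())
```

An untouched segmentation has an added path length of 0; every redrawn boundary voxel adds 1 (multiply by the in-plane pixel size for a length in mm).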
Affiliation(s)
- Kendall J. Kiser
- Department of Radiation Oncology, Washington University School of Medicine in St. Louis, St. Louis, MO, USA
- Arko Barman
- Center for Precision Health, UT Health School of Biomedical Informatics, Houston, TX, USA
- Sonja Stieb
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Clifton D. Fuller
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Luca Giancardo
- Center for Precision Health, UT Health School of Biomedical Informatics, Houston, TX, USA
29
Konuthula N, Perez FA, Maga AM, Abuzeid WM, Moe K, Hannaford B, Bly RA. Automated atlas-based segmentation for skull base surgical planning. Int J Comput Assist Radiol Surg 2021; 16:933-941. [PMID: 34009539] [DOI: 10.1007/s11548-021-02390-5]
Abstract
PURPOSE Computational surgical planning tools could help develop novel skull base surgical approaches that improve safety and patient outcomes. This creates a need for automated skull base segmentation to improve the usability of surgical planning software. The objective of this work was to design and validate an algorithm for atlas-based automated segmentation of skull base structures in individual image sets for skull base surgical planning. METHODS Advanced Normalization Tools software was used to construct a synthetic CT template from 6 subjects, and skull base structures were manually segmented to create a reference atlas. Landmark registration followed by Elastix deformable registration was applied to register the template to each of the 30 trusted reference image sets. The Dice coefficient, average Hausdorff distance, and clinical usability scoring were used to compare the atlas segmentations to those of the trusted reference image sets. RESULTS The mean average Hausdorff distance across all structures was less than 2 mm (the mean 95th-percentile Hausdorff distance was less than 5 mm). For structures greater than 2.5 mL in volume, the average Dice coefficient was 0.73 (range 0.59-0.82); for structures less than 2.5 mL in volume, the Dice coefficient was less than 0.7. The usability scoring survey was completed by three experts, and all structures met the criteria for acceptable effort except the foramen spinosum, foramen rotundum, and carotid artery, which required more than minor corrections. CONCLUSION Currently available open-source algorithms, such as the Elastix deformable registration algorithm, can be used with the proposed atlas for automated atlas-based segmentation of skull base structures with acceptable clinical accuracy and minimal corrections. The first publicly available CT template and anterior skull base segmentation atlas, released with this paper (available at http://hdl.handle.net/1773/46259), will allow for general use of automated atlas-based segmentation of the skull base.
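The Dice coefficient and percentile Hausdorff distance used to validate this atlas (and recurring throughout the entries below) are straightforward to compute from binary masks. The following is an illustrative NumPy/SciPy sketch, not the authors' code; it measures distances in voxels, whereas the paper reports millimetres.

```python
import numpy as np
from scipy import ndimage

def dice_coefficient(a, b):
    """Dice similarity coefficient between two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def percentile_hausdorff(a, b, q=95.0):
    """q-th percentile symmetric surface distance (HD95 for q=95), in voxels."""
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a & ~ndimage.binary_erosion(a)   # surface voxels = mask minus its erosion
    surf_b = b & ~ndimage.binary_erosion(b)
    dist_to_b = ndimage.distance_transform_edt(~surf_b)  # distance to b's surface
    dist_to_a = ndimage.distance_transform_edt(~surf_a)
    d = np.concatenate([dist_to_b[surf_a], dist_to_a[surf_b]])
    return float(np.percentile(d, q))

# Toy example: two 16-voxel cubes offset by 2 voxels along one axis.
vol_a = np.zeros((32, 32, 32), bool); vol_a[8:24, 8:24, 8:24] = True
vol_b = np.zeros((32, 32, 32), bool); vol_b[10:26, 8:24, 8:24] = True
print(round(dice_coefficient(vol_a, vol_b), 3))  # → 0.875
print(percentile_hausdorff(vol_a, vol_b))
```

For the "average Hausdorff distance" also reported in the paper, one would take `np.mean(d)` instead of a percentile, and scale voxel distances by the scan's spacing to obtain millimetres.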
Affiliation(s)
- Neeraja Konuthula: Department of Otolaryngology, Head and Neck Surgery, University of Washington, Seattle, WA, USA
- Francisco A Perez: Department of Radiology, University of Washington, Seattle, WA, USA; Division of Radiology, Seattle Children's Hospital, Seattle, WA, USA
- A Murat Maga: Department of Craniofacial Medicine, University of Washington, Seattle, WA, USA; Craniofacial Center, Seattle Children's Hospital, Seattle, WA, USA
- Waleed M Abuzeid: Department of Otolaryngology, Head and Neck Surgery, University of Washington, Seattle, WA, USA
- Kris Moe: Department of Otolaryngology, Head and Neck Surgery, University of Washington, Seattle, WA, USA; Otolaryngology-Head and Neck Surgery, Harborview Medical Center, Seattle, WA, USA
- Blake Hannaford: Department of Electrical and Computer Engineering, University of Washington, Seattle, WA, USA
- Randall A Bly: Department of Otolaryngology, Head and Neck Surgery, University of Washington, Seattle, WA, USA; Division of Pediatric Otolaryngology, Head and Neck Surgery, Seattle Children's Hospital, Seattle, WA, USA
30
El Khoury K, Fockedey M, Brion E, Macq B. Improved 3D U-Net robustness against JPEG 2000 compression for male pelvic organ segmentation in radiotherapy. J Med Imaging (Bellingham) 2021; 8:041207. [PMID: 33842669] [PMCID: PMC8020060] [DOI: 10.1117/1.jmi.8.4.041207]
Abstract
Purpose: Automation of organ segmentation via convolutional neural networks (CNNs) is key to facilitating the work of medical practitioners by ensuring that an adequate radiation dose is delivered to the target area while avoiding harmful exposure of healthy organs. The issue with CNNs is that they require large amounts of data transfer and storage, which makes the use of image compression a necessity. Compression affects image quality, which in turn affects the segmentation process. We address the dilemma of handling large amounts of data while preserving segmentation accuracy. Approach: We analyze and improve 2D and 3D U-Net robustness against JPEG 2000 compression for male pelvic organ segmentation. We conduct three experiments on 56 cone-beam computed tomography (CBCT) and 74 CT scans targeting bladder and rectum segmentation. The two objectives of the experiments are to compare the compression robustness of 2D versus 3D U-Nets and to improve the 3D U-Net's compression tolerance via fine-tuning. Results: We show that a 3D U-Net is 50% more robust to compression than a 2D U-Net. Moreover, by fine-tuning the 3D U-Net, we can double its compression tolerance compared to a 2D U-Net. Furthermore, we determine that fine-tuning the network at a compression ratio of 64:1 ensures its flexibility for use at equal or lower compression ratios. Conclusions: We reduce the potential risk involved in using image compression for automated organ segmentation. We demonstrate that a 3D U-Net can be fine-tuned to handle high compression ratios while preserving segmentation accuracy.
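The robustness experiment above reduces to a loop: compress at increasing ratios, re-run the segmenter, and track Dice against the uncompressed result. A toy sketch of that loop follows; the quantization "codec" and threshold "segmenter" are crude stand-ins assumed for illustration only, where the paper used real JPEG 2000 compression and trained 2D/3D U-Nets.

```python
import numpy as np

def lossy_compress(img, ratio):
    """Crude stand-in for JPEG 2000: coarser intensity quantization at
    higher 'compression ratios' (illustration only, not a real codec)."""
    step = ratio / 4.0  # assumed mapping from ratio to quantization step
    return np.round(img / step) * step

def toy_segmenter(img, threshold=100.0):
    """Stand-in for a trained U-Net: a simple intensity threshold."""
    return img > threshold

def dice(a, b):
    denom = a.sum() + b.sum()
    return 2.0 * (a & b).sum() / denom if denom else 1.0

rng = np.random.default_rng(0)
# Synthetic 'scan': bright square (organ) on a darker noisy background.
img = rng.normal(60.0, 5.0, (64, 64))
img[20:44, 20:44] += 80.0
reference = toy_segmenter(img)  # segmentation of the uncompressed image

# Evaluate segmentation agreement as compression becomes more aggressive.
for ratio in (8, 16, 32, 64):
    seg = toy_segmenter(lossy_compress(img, ratio))
    print(ratio, round(dice(reference, seg), 3))
```

The same harness, with a real codec and network swapped in, is how a fine-tuned model's tolerance at 64:1 versus lower ratios could be checked.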
Affiliation(s)
- Karim El Khoury: Université Catholique de Louvain, Institute of Information and Communication Technologies, Electronics and Applied Mathematics, Louvain-La-Neuve, Belgium
- Martin Fockedey: Université Catholique de Louvain, Institute of Information and Communication Technologies, Electronics and Applied Mathematics, Louvain-La-Neuve, Belgium
- Eliott Brion: Université Catholique de Louvain, Institute of Information and Communication Technologies, Electronics and Applied Mathematics, Louvain-La-Neuve, Belgium
- Benoît Macq: Université Catholique de Louvain, Institute of Information and Communication Technologies, Electronics and Applied Mathematics, Louvain-La-Neuve, Belgium
31
Barragán-Montero A, Javaid U, Valdés G, Nguyen D, Desbordes P, Macq B, Willems S, Vandewinckele L, Holmström M, Löfman F, Michiels S, Souris K, Sterpin E, Lee JA. Artificial intelligence and machine learning for medical imaging: A technology review. Phys Med 2021; 83:242-256. [PMID: 33979715] [PMCID: PMC8184621] [DOI: 10.1016/j.ejmp.2021.04.016]
Abstract
Artificial intelligence (AI) has recently become a very popular buzzword as a consequence of disruptive technical advances and impressive experimental results, notably in the field of image analysis and processing. In medicine, specialties where images are central, like radiology, pathology, or oncology, have seized the opportunity, and considerable research and development efforts have been deployed to transfer the potential of AI to clinical applications. With AI becoming a more mainstream tool for typical medical imaging analysis tasks, such as diagnosis, segmentation, or classification, the key to safe and efficient use of clinical AI applications relies, in part, on informed practitioners. The aim of this review is to present the basic technological pillars of AI, together with state-of-the-art machine learning methods and their application to medical imaging. In addition, we discuss new trends and future research directions. This will help the reader understand how AI methods are becoming a ubiquitous tool in any medical image analysis workflow and pave the way for the clinical implementation of AI-based solutions.
Affiliation(s)
- Ana Barragán-Montero: Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
- Umair Javaid: Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
- Gilmer Valdés: Department of Radiation Oncology, Department of Epidemiology and Biostatistics, University of California, San Francisco, USA
- Dan Nguyen: Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, UT Southwestern Medical Center, USA
- Paul Desbordes: Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), UCLouvain, Belgium
- Benoit Macq: Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), UCLouvain, Belgium
- Siri Willems: ESAT/PSI, KU Leuven, Belgium & MIRC, UZ Leuven, Belgium
- Steven Michiels: Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
- Kevin Souris: Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
- Edmond Sterpin: Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium; KU Leuven, Department of Oncology, Laboratory of Experimental Radiotherapy, Belgium
- John A Lee: Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
32
Brion E, Léger J, Barragán-Montero AM, Meert N, Lee JA, Macq B. Domain adversarial networks and intensity-based data augmentation for male pelvic organ segmentation in cone beam CT. Comput Biol Med 2021; 131:104269. [PMID: 33639352] [DOI: 10.1016/j.compbiomed.2021.104269]
Abstract
In radiation therapy, a CT image is used to manually delineate the organs and plan the treatment. During the treatment, a cone beam CT (CBCT) is often acquired to monitor anatomical modifications. For this purpose, automatic organ segmentation on CBCT is a crucial step. However, manual segmentations on CBCT are scarce, and models trained with CT data do not generalize well to CBCT images. We investigate adversarial networks and intensity-based data augmentation, two strategies leveraging large databases of annotated CTs to train neural networks for segmentation on CBCT. Adversarial networks consist of a 3D U-Net segmenter and a domain classifier; the proposed framework aims to encourage the learning of filters that produce more accurate segmentations on CBCT. Intensity-based data augmentation consists of modifying the training CT images to reduce the gap between the CT and CBCT distributions. The proposed adversarial networks reach DSCs of 0.787, 0.447, and 0.660 for the bladder, rectum, and prostate, respectively, an improvement over the DSCs of 0.749, 0.179, and 0.629 for "source only" training. Our brightness-based data augmentation reaches DSCs of 0.837, 0.701, and 0.734, which outperforms the morphons registration algorithm for the bladder (0.813) and rectum (0.653), while performing similarly on the prostate (0.731). The proposed adversarial training framework can be used for any segmentation application where the training and test distributions differ. Our intensity-based data augmentation can be used for CBCT segmentation to help achieve the prescribed dose on target and lower the dose delivered to healthy organs.
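The intensity-based augmentation idea, perturbing training CTs so their intensity distribution drifts toward CBCT, can be sketched as below. The transforms and parameter ranges are illustrative assumptions; the paper's exact brightness-based augmentation is not reproduced here.

```python
import numpy as np

def intensity_augment(ct, rng):
    """One hypothetical intensity augmentation of a CT volume intended to
    mimic CBCT appearance: random global gain/offset plus a smooth
    multiplicative shading field (parameter ranges are illustrative)."""
    gain = rng.uniform(0.9, 1.1)        # global contrast jitter
    offset = rng.uniform(-30.0, 30.0)   # global HU shift (brightness)
    # Low-frequency radial shading, a crude proxy for CBCT cupping/scatter.
    z, y, x = np.meshgrid(*[np.linspace(-1, 1, s) for s in ct.shape],
                          indexing="ij")
    shading = 1.0 + rng.uniform(0.0, 0.1) * (x**2 + y**2)
    return (ct * gain + offset) * shading

rng = np.random.default_rng(42)
ct = rng.normal(0.0, 50.0, (16, 32, 32))   # toy CT volume in HU
aug = intensity_augment(ct, rng)
print(ct.shape == aug.shape)  # True: augmentation preserves geometry
```

Applied on the fly during training, such transforms leave the contours untouched while exposing the network to CBCT-like intensity variation.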
Affiliation(s)
- Eliott Brion: ICTEAM, UCLouvain, Louvain-la-Neuve, 1348, Belgium
- Jean Léger: ICTEAM, UCLouvain, Louvain-la-Neuve, 1348, Belgium
- Nicolas Meert: Hôpital André Vésale, Montigny-le-Tilleul, 6110, Belgium
- John A Lee: ICTEAM, UCLouvain, Louvain-la-Neuve, 1348, Belgium; IREC/MIRO, UCLouvain, Brussels, 1200, Belgium
- Benoit Macq: ICTEAM, UCLouvain, Louvain-la-Neuve, 1348, Belgium
33
Liu Z, Liu F, Chen W, Liu X, Hou X, Shen J, Guan H, Zhen H, Wang S, Chen Q, Chen Y, Zhang F. Automatic Segmentation of Clinical Target Volumes for Post-Modified Radical Mastectomy Radiotherapy Using Convolutional Neural Networks. Front Oncol 2021; 10:581347. [PMID: 33665160] [PMCID: PMC7921705] [DOI: 10.3389/fonc.2020.581347]
Abstract
Background This study aims to construct and validate a model based on convolutional neural networks (CNNs) that can automatically segment clinical target volumes (CTVs) of breast cancer for radiotherapy. Methods Computed tomography (CT) scans of 110 patients who underwent modified radical mastectomies were collected. The CTV contours were confirmed by two experienced oncologists. A novel CNN was constructed to automatically delineate the CTV. Quantitative evaluation metrics were calculated, and a clinical evaluation was conducted to assess the performance of the model. Results The mean Dice similarity coefficient (DSC) of the proposed model was 0.90, and the 95th-percentile Hausdorff distance (95HD) was 5.65 mm. The evaluation by the two clinicians showed that 99.3% of the chest wall CTV slices could be accepted by clinician A and 98.9% by clinician B. In addition, 9/10 patients had all slices accepted by clinician A, while 7/10 were accepted by clinician B. The score differences between the AI (artificial intelligence) group and the GT (ground truth) group were not statistically significant for either clinician; however, the score differences within the AI group differed significantly between the two clinicians. The Kappa consistency index was 0.259. Delineating the chest wall CTV with the model took 3.45 s. Conclusion Our model can automatically generate CTVs for breast cancer. AI-generated structures from the proposed model tended to be comparable to, or even better than, human-generated structures. Additional multicentre evaluations should be performed for adequate validation before the model can be fully applied in clinical practice.
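The Kappa consistency index reported for the two clinicians is Cohen's kappa, which corrects observed agreement for the agreement expected by chance. A minimal sketch, using a hypothetical accept/reject scoring rather than the study's actual scale:

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical scores on the same items."""
    assert len(r1) == len(r2) and r1
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n  # raw agreement
    c1, c2 = Counter(r1), Counter(r2)
    # Chance agreement from each rater's marginal category frequencies.
    expected = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)
    return (observed - expected) / (1.0 - expected) if expected != 1.0 else 1.0

# Toy example: slice-level accept/reject scores from two reviewers.
a = ["accept"] * 8 + ["reject"] * 2
b = ["accept"] * 7 + ["reject"] * 3
print(round(cohens_kappa(a, b), 3))  # → 0.737
```

A kappa of 0.259, as in the abstract, indicates only fair inter-rater agreement despite high raw acceptance rates.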
Affiliation(s)
- Zhikai Liu: Department of Radiation Oncology, Peking Union Medical College Hospital (CAMS), Beijing, China
- Fangjie Liu: Department of Radiation Oncology, Sun Yat-sen University Cancer Center, Guangzhou, China
- Wanqi Chen: Department of Radiation Oncology, Peking Union Medical College Hospital (CAMS), Beijing, China
- Xia Liu: Department of Radiation Oncology, Peking Union Medical College Hospital (CAMS), Beijing, China
- Xiaorong Hou: Department of Radiation Oncology, Peking Union Medical College Hospital (CAMS), Beijing, China
- Jing Shen: Department of Radiation Oncology, Peking Union Medical College Hospital (CAMS), Beijing, China
- Hui Guan: Department of Radiation Oncology, Peking Union Medical College Hospital (CAMS), Beijing, China
- Hongnan Zhen: Department of Radiation Oncology, Peking Union Medical College Hospital (CAMS), Beijing, China
- Qi Chen: MedMind Technology Co., Ltd., Beijing, China
- Yu Chen: MedMind Technology Co., Ltd., Beijing, China
- Fuquan Zhang: Department of Radiation Oncology, Peking Union Medical College Hospital (CAMS), Beijing, China
34
Dai X, Lei Y, Wang T, Dhabaan AH, McDonald M, Beitler JJ, Curran WJ, Zhou J, Liu T, Yang X. Head-and-neck organs-at-risk auto-delineation using dual pyramid networks for CBCT-guided adaptive radiotherapy. Phys Med Biol 2021; 66:045021. [PMID: 33412527] [DOI: 10.1088/1361-6560/abd953]
Abstract
Organ-at-risk (OAR) delineation is a key step in cone-beam CT (CBCT) based adaptive radiotherapy planning that can be time-consuming, labor-intensive, and subject to variability. We aim to develop a fully automated approach, aided by synthetic MRI, for rapid and accurate CBCT multi-organ contouring in head-and-neck (HN) cancer patients. MRI has superb soft-tissue contrast, while CBCT offers bony-structure contrast; using the complementary information provided by MRI and CBCT is expected to enable accurate multi-organ segmentation in HN cancer patients. In the proposed method, MR images are first synthesized from CBCT using a pre-trained cycle-consistent generative adversarial network. The features of the CBCT and synthetic MRI (sMRI) are then extracted using dual pyramid networks for the final delineation of organs. CBCT images and their corresponding manual contours were used as pairs to train and test the proposed model. Quantitative metrics including the Dice similarity coefficient (DSC), 95th-percentile Hausdorff distance (HD95), mean surface distance, and residual mean square distance (RMS) were used for evaluation. The method was evaluated on a cohort of 65 HN cancer patients; CBCT images were collected from patients who received proton therapy. Overall, DSC values of 0.87 ± 0.03, 0.79 ± 0.10/0.79 ± 0.11, 0.89 ± 0.08/0.89 ± 0.07, 0.90 ± 0.08, 0.75 ± 0.06/0.77 ± 0.06, 0.86 ± 0.13, 0.66 ± 0.14, 0.78 ± 0.05/0.77 ± 0.04, 0.96 ± 0.04, 0.89 ± 0.04/0.89 ± 0.04, 0.83 ± 0.02, and 0.84 ± 0.07 were achieved for OARs commonly used in treatment planning: brain stem, left/right cochlea, left/right eye, larynx, left/right lens, mandible, optic chiasm, left/right optic nerve, oral cavity, left/right parotid, pharynx, and spinal cord, respectively. This study provides a rapid and accurate OAR auto-delineation approach that can be used for adaptive radiation therapy.
Affiliation(s)
- Xianjin Dai: Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America
35
Netherton TJ, Cardenas CE, Rhee DJ, Court LE, Beadle BM. The Emergence of Artificial Intelligence within Radiation Oncology Treatment Planning. Oncology 2020; 99:124-134. [PMID: 33352552] [DOI: 10.1159/000512172]
Abstract
BACKGROUND The future of artificial intelligence (AI) heralds unprecedented change for the field of radiation oncology. Commercial vendors and academic institutions have created AI tools for radiation oncology, but such tools have not yet been widely adopted into clinical practice. In addition, numerous discussions have prompted careful thought about AI's impact on the future landscape of radiation oncology: How can we preserve innovation, creativity, and patient safety? When will AI-based tools be widely adopted into the clinic? Will the need for clinical staff be reduced? How will these devices and tools be developed and regulated? SUMMARY In this work, we examine how deep learning, a rapidly emerging subset of AI, fits into the broader historical context of advancements in radiation oncology and medical physics. We also examine a representative set of deep learning-based tools being made available for external beam radiotherapy treatment planning, and how these and other AI-based tools will impact members of the radiation treatment planning team. KEY MESSAGES Compared to past transformative innovations explored in this article, such as the Monte Carlo method or intensity-modulated radiotherapy, the development and adoption of deep learning-based tools is occurring at a faster rate and promises to transform the practices of the radiation treatment planning team. However, accessibility to these tools will be determined by each clinic's access to the internet, web-based solutions, and high-performance computing hardware. As the trends exhibited by many technologies show, high dependence on new technology can result in harm if the product fails in an unexpected manner, is misused by the operator, or if the mitigation of an expected failure is inadequate. The need for developers and researchers to rigorously validate deep learning-based tools, for users to understand how to operate them appropriately, and for professional bodies to develop guidelines for their use and maintenance is therefore essential. Given that members of the radiation treatment planning team perform many automatable tasks, the use of deep learning-based tools, in combination with other automated treatment planning tools, may refocus the tasks performed by the treatment planning team and may potentially reduce resource-related burdens for clinics with limited resources.
Affiliation(s)
- Tucker J Netherton: Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA; The University of Texas MD Anderson Graduate School of Biomedical Science, Houston, Texas, USA
- Carlos E Cardenas: Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Dong Joo Rhee: Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA; The University of Texas MD Anderson Graduate School of Biomedical Science, Houston, Texas, USA
- Laurence E Court: Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Beth M Beadle: Department of Radiation Oncology and Radiation Therapy, Stanford University, Stanford, California, USA
36
McNair H, Wiseman T, Joyce E, Peet B, Huddart R. International survey: current practice in on-line adaptive radiotherapy (ART) delivered using magnetic resonance image (MRI) guidance. Tech Innov Patient Support Radiat Oncol 2020; 16:1-9. [PMID: 32995576] [PMCID: PMC7501460] [DOI: 10.1016/j.tipsro.2020.08.002]
Abstract
BACKGROUND AND PURPOSE The uptake of new technologies has varied internationally, and there have often been barriers to implementation. On-line adaptive radiotherapy (ART) promises to improve patient outcomes. This survey focuses on the implementation phase of delivering ART, the professional roles and responsibilities currently involved in the workflow, and changes that may be expected in the future. MATERIALS AND METHODS A 38-question survey covered current practice; professional responsibilities; benefits and barriers; and decision making and responsibilities. For the purposes of the questionnaire and paper, ART was defined as workflows in which the tumour and/or organs at risk were contoured and re-planning was performed on-line. The questionnaire was distributed electronically via radiotherapy networks. RESULTS Nineteen international responses were received: Europe (n = 11), United States of America (n = 4), Canada (n = 2), Australia (n = 1), and Hong Kong (n = 1). The majority of centres started using ART in either 2018 (n = 7) or 2019 (n = 6); four centres started treating with ART between 2015 and 2017, and the first was in 2014. Centres initially treated prostate and oligometastases patients, expanding to prostate, oligometastases, pancreas, and rectum. The majority of centres were working in conventional roles, although they are moving towards radiographers taking more responsibility for contouring organs at risk (OAR), targets, and dosimetry. The three most important criteria chosen by medical doctors to determine whether ART should be used were overall gross anatomy changes of the target and OAR, a target not covered by the planning target volume (PTV), and an OAR close to the high-dose area. There was no clear consensus on the minimum improvement in dose to the target, or reduction in dose to an OAR, needed to warrant adaptation. CONCLUSION On-line ART has been implemented successfully internationally. Initial practice maintains conventional professional roles and responsibilities, but there is a trend towards changing roles in the future. There is little consensus regarding the triggers for adaptation.
Affiliation(s)
- H.A. McNair: Institute of Cancer Research, United Kingdom
- T Wiseman: Royal Marsden NHS Foundation Trust, United Kingdom
- E Joyce: Royal Marsden NHS Foundation Trust, United Kingdom
- B Peet: Royal Marsden NHS Foundation Trust, United Kingdom
37
Schreier J, Attanasi F, Laaksonen H. Generalization vs. Specificity: In Which Cases Should a Clinic Train its Own Segmentation Models? Front Oncol 2020; 10:675. [PMID: 32477941] [PMCID: PMC7241256] [DOI: 10.3389/fonc.2020.00675]
Abstract
As artificial intelligence for image segmentation becomes increasingly available, the question arises of whether these solutions generalize between different hospitals and geographies. The present study addresses this question by comparing multi-institutional models to site-specific models. Using CT data sets from four clinics for organs-at-risk of the female breast, female pelvis, and male pelvis, we differentiate between the effect of population differences and differences in clinical practice. Our study thus provides hospitals with guidelines on when the training of a custom, hospital-specific deep neural network is advisable and when a network provided by a third party can be used. The results show that for the organs of the female pelvis and the heart, segmentation quality is influenced solely by the training set size, while for the female breast, patient population variability affects segmentation quality beyond the effect of training set size. In the comparison of site-specific contours on the male pelvis, we see that for a sufficiently large data set, a custom, hospital-specific model outperforms a multi-institutional one on some of the organs. However, for small hospital-specific data sets, a multi-institutional model provides better segmentation quality.
Affiliation(s)
- Jan Schreier: Varian Medical Systems, Palo Alto, CA, United States
38
Cross-Domain Data Augmentation for Deep-Learning-Based Male Pelvic Organ Segmentation in Cone Beam CT. Appl Sci (Basel) 2020. [DOI: 10.3390/app10031154]
Abstract
For prostate cancer patients, large organ deformations occurring between radiotherapy treatment sessions create uncertainty about the doses delivered to the tumor and surrounding healthy organs. Segmenting those regions on cone beam CT (CBCT) scans acquired on treatment day would reduce such uncertainties. In this work, a 3D U-Net deep-learning architecture was trained to segment the bladder, rectum, and prostate on CBCT scans. Due to the scarcity of contoured CBCT scans, the training set was augmented with CT scans already contoured in the current clinical workflow. The network was then tested on 63 CBCT scans. The Dice similarity coefficient (DSC) increased significantly with the number of CBCT and CT scans in the training set, reaching 0.874 ± 0.096, 0.814 ± 0.055, and 0.758 ± 0.101 for the bladder, rectum, and prostate, respectively. This was about 10% better than conventional approaches based on deformable image registration between planning CT and treatment CBCT scans, except for the prostate. Interestingly, adding 74 CT scans to the CBCT training set made it possible to maintain high DSCs while halving the number of CBCT scans. Hence, this work showed that although CBCT scans include artifacts, cross-domain augmentation of the training set is effective and can rely on the large datasets available for planning CT scans.
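The cross-domain augmentation described here amounts to pooling contoured planning CTs into the CBCT training set. One way such mixed batches might be sampled is sketched below; the per-batch mixing fraction, ID names, and CBCT pool size are illustrative assumptions (the paper varied the absolute numbers of CT and CBCT scans in the training set rather than a per-batch fraction).

```python
import numpy as np

def mixed_domain_batches(cbct_ids, ct_ids, batch_size, ct_fraction, rng,
                         n_batches):
    """Yield training batches that mix scarce contoured CBCT cases with
    plentiful contoured CT cases (sampling with replacement)."""
    for _ in range(n_batches):
        n_ct = int(round(batch_size * ct_fraction))
        batch = [str(s) for s in rng.choice(ct_ids, n_ct)] + \
                [str(s) for s in rng.choice(cbct_ids, batch_size - n_ct)]
        rng.shuffle(batch)  # interleave the two domains within the batch
        yield batch

rng = np.random.default_rng(7)
cbct = [f"cbct_{i}" for i in range(37)]  # hypothetical contoured CBCT pool
ct = [f"ct_{i}" for i in range(74)]      # 74 contoured CTs, as in the abstract
for batch in mixed_domain_batches(cbct, ct, 8, 0.5, rng, 2):
    print(batch)
```

Each sampled ID would be resolved to an image/contour pair at training time; the finding above suggests the CT pool can substitute for a large share of the CBCT cases.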