1. Teplytska O, Ernst M, Koltermann LM, Valderrama D, Trunz E, Vaisband M, Hasenauer J, Fröhlich H, Jaehde U. Machine Learning Methods for Precision Dosing in Anticancer Drug Therapy: A Scoping Review. Clin Pharmacokinet 2024. PMID: 39153056. DOI: 10.1007/s40262-024-01409-9.
Abstract
INTRODUCTION In the last decade, various Machine Learning techniques have been proposed to individualise the dose of anticancer drugs, mostly based on a presumed drug effect or on measured effect biomarkers. The aim of this scoping review was to comprehensively summarise the research status on the use of Machine Learning for precision dosing in anticancer drug therapy. METHODS This scoping review was conducted in accordance with the interim guidance by Cochrane and the Joanna Briggs Institute. We systematically searched the databases Medline (via PubMed), Embase and the Cochrane Library for research articles and reviews with results published after 2016. Results were reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) checklist. RESULTS A total of 17 relevant studies were identified. In 12 of the included studies, Reinforcement Learning methods were used, including Classical, Deep, Double Deep and Conservative Q-Learning as well as Fuzzy Reinforcement Learning. Furthermore, classical Machine Learning methods were compared in terms of their performance, and an artificial intelligence platform based on parabolic equations was used to guide dosing prospectively and retrospectively, albeit only in a limited number of patients. Because of the substantially different algorithm structures, a meaningful comparison between the various Machine Learning approaches was not possible. CONCLUSION Overall, this review emphasises the clinical relevance of Machine Learning methods for anticancer drug dose optimisation: many algorithms have shown promising results, enabling model-free predictions with the potential to maximise efficacy and minimise toxicity compared with standard protocols.
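Most of the reviewed dosing studies rely on some variant of Q-learning. As a hedged illustration only, the following is a minimal tabular Q-learning sketch on a toy, hypothetical dose-adjustment environment (states, actions, rewards and all numeric values are invented for illustration and do not reproduce any reviewed study's model):

```python
import random

# Hypothetical toy environment: state = toxicity grade 0-4; action = dose change
# (-1 lower, 0 keep, +1 raise). The reward crudely trades off efficacy (favouring
# a higher dose) against high-grade toxicity; all numbers are illustrative.
def step(tox, action):
    tox = min(4, max(0, tox + action))   # deterministic toxicity response
    reward = action - 3 * (tox >= 3)     # dose benefit minus toxicity penalty
    return tox, reward

random.seed(0)
ACTIONS = (-1, 0, 1)
Q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.2

for _ in range(2000):                    # episodes from random start states
    tox = random.randrange(5)
    for _ in range(10):                  # ten dosing cycles per episode
        a = (random.choice(ACTIONS) if random.random() < eps
             else max(ACTIONS, key=lambda x: Q[(tox, x)]))
        nxt, r = step(tox, a)
        best_next = max(Q[(nxt, b)] for b in ACTIONS)
        Q[(tox, a)] += alpha * (r + gamma * best_next - Q[(tox, a)])
        tox = nxt

# Greedy policy: the learned rule lowers the dose at high toxicity grades
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(5)}
```

The point of the sketch is the "model-free" property the review highlights: the update rule never uses a pharmacokinetic model, only observed transitions and rewards.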
Affiliation(s)
- Olga Teplytska
  - Department of Clinical Pharmacy, Institute of Pharmacy, University of Bonn, An der Immenburg 4, 53121, Bonn, Germany
- Moritz Ernst
  - Faculty of Medicine and University Hospital Cologne, Institute of Public Health, University of Cologne, Cologne, Germany
- Luca Marie Koltermann
  - Department of Clinical Pharmacy, Institute of Pharmacy, University of Bonn, An der Immenburg 4, 53121, Bonn, Germany
- Diego Valderrama
  - Department of Bioinformatics, Fraunhofer Institute for Algorithms and Scientific Computing (SCAI), Sankt Augustin, Germany
- Elena Trunz
  - Institute of Computer Science II, Visual Computing, University of Bonn, Bonn, Germany
- Marc Vaisband
  - Hausdorff Center for Mathematics, University of Bonn, Bonn, Germany
  - Institute of Life & Medical Sciences (LIMES), University of Bonn, Bonn, Germany
  - Department of Internal Medicine III with Haematology, Medical Oncology, Haemostaseology, Infectiology and Rheumatology, Oncologic Center, Salzburg Cancer Research Institute-Laboratory for Immunological and Molecular Cancer Research (SCRI-LIMCR), Paracelsus Medical University, Cancer Cluster Salzburg, Salzburg, Austria
- Jan Hasenauer
  - Hausdorff Center for Mathematics, University of Bonn, Bonn, Germany
  - Institute of Life & Medical Sciences (LIMES), University of Bonn, Bonn, Germany
- Holger Fröhlich
  - Department of Bioinformatics, Fraunhofer Institute for Algorithms and Scientific Computing (SCAI), Sankt Augustin, Germany
  - Bonn-Aachen International Center for Information Technology (B-IT), University of Bonn, Bonn, Germany
- Ulrich Jaehde
  - Department of Clinical Pharmacy, Institute of Pharmacy, University of Bonn, An der Immenburg 4, 53121, Bonn, Germany

2. Zhu J, Yan J, Zhang J, Yu L, Song A, Zheng Z, Chen Y, Wang S, Chen Q, Liu Z, Zhang F. Automatic segmentation of high-risk clinical target volume and organs at risk in brachytherapy of cervical cancer with a convolutional neural network. Cancer Radiother 2024. PMID: 39147623. DOI: 10.1016/j.canrad.2024.03.002.
Abstract
PURPOSE This study aimed to design an autodelineation model based on convolutional neural networks (CNNs) for generating high-risk clinical target volumes and organs at risk in image-guided adaptive brachytherapy for cervical cancer. MATERIALS AND METHODS A novel SERes-u-net was trained and tested using CT scans from 98 patients with locally advanced cervical cancer who underwent image-guided adaptive brachytherapy. The Dice similarity coefficient, 95th percentile Hausdorff distance, and clinical assessment were used for evaluation. RESULTS The mean Dice similarity coefficients of our model were 80.8%, 91.9%, 85.2%, 60.4%, and 82.8% for the high-risk clinical target volumes, bladder, rectum, sigmoid, and bowel loops, respectively. The corresponding 95th percentile Hausdorff distances were 5.23 mm, 4.75 mm, 4.06 mm, 30.0 mm, and 20.5 mm. The evaluation revealed that 99.3% of the CNN-generated high-risk clinical target volume slices were acceptable for oncologist A and 100% for oncologist B. Most segmentations of the organs at risk were clinically acceptable, except for 25% of the sigmoid contours, which required significant revision in the opinion of oncologist A. There was a significant difference in the clinical evaluation of CNN-generated high-risk clinical target volumes between the two oncologists (P<0.001), whereas the score differences for the organs at risk were not significant between the two oncologists. In the consistency evaluation, a large discrepancy was observed between senior and junior clinicians: about 40% of the SERes-u-net-generated contours were considered better by junior clinicians. CONCLUSION The high-risk clinical target volumes and organs at risk of cervical cancer generated by the proposed CNN model can be used clinically, potentially improving segmentation consistency and contouring efficiency in the image-guided adaptive brachytherapy workflow.
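The two quantitative metrics used throughout these segmentation studies, the Dice similarity coefficient and the 95th-percentile Hausdorff distance, can be computed directly from binary masks. A minimal NumPy sketch follows; note it is a naive all-pairs implementation over all foreground voxels (clinical tools evaluate surface voxels and use distance transforms), suitable only for small masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hd95(a, b, spacing=1.0):
    """95th-percentile symmetric Hausdorff distance (naive version).

    Uses all foreground voxels rather than surface voxels, and brute-force
    pairwise distances, so it is O(n*m): fine for toy masks only."""
    pa = np.argwhere(a) * spacing
    pb = np.argwhere(b) * spacing
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return max(np.percentile(d.min(axis=1), 95),
               np.percentile(d.min(axis=0), 95))

a = np.zeros((10, 10), dtype=bool)
a[2:6, 2:6] = True     # 4x4 "reference" contour
b = np.zeros((10, 10), dtype=bool)
b[3:7, 2:6] = True     # the same contour shifted by one row
```

For these toy masks `dice(a, b)` evaluates to 0.75 (12 of 16 pixels overlap) and `hd95(a, b)` to 1.0 pixel, while a mask compared with itself gives DSC 1.0 and HD95 0.0.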
Affiliation(s)
- J Zhu
  - Department of Radiation Oncology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China
- J Yan
  - Department of Radiation Oncology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China
- J Zhang
  - Department of Radiation Oncology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China
- L Yu
  - Department of Radiation Oncology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China
- A Song
  - Department of Radiation Oncology, Cangzhou Central Hospital, Cangzhou, Hebei 061001, China
- Z Zheng
  - Department of Radiation Oncology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China
- Y Chen
  - MedMind Technology Co., Ltd., Beijing 100730, China
- S Wang
  - MedMind Technology Co., Ltd., Beijing 100730, China
- Q Chen
  - MedMind Technology Co., Ltd., Beijing 100730, China
- Z Liu
  - Department of Radiation Oncology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China
- F Zhang
  - Department of Radiation Oncology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China

3. Lee BM, Kim JS, Chang Y, Choi SH, Park JW, Byun HK, Kim YB, Lee IJ, Chang JS. Experience of Implementing Deep Learning-Based Automatic Contouring in Breast Radiation Therapy Planning: Insights From Over 2000 Cases. Int J Radiat Oncol Biol Phys 2024;119:1579-1589. PMID: 38431232. DOI: 10.1016/j.ijrobp.2024.02.041.
Abstract
PURPOSE This study evaluated the impact and clinical utility of an auto-contouring system for radiation therapy treatments. METHODS AND MATERIALS The auto-contouring system was implemented in 2019. We evaluated data from 2428 patients who underwent adjuvant breast radiation therapy before and after the system's introduction. We collected each treatment's finalized contours, which were reviewed and revised by a multidisciplinary team. After implementation, the treatment contours underwent a finalization process that involved manual review and adjustment of the initial auto-contours. For the preimplementation group (n = 369), auto-contours were generated retrospectively. We compared the auto-contours and final contours using the Dice similarity coefficient (DSC) and the 95% Hausdorff distance (HD95). RESULTS We analyzed 22,215 structures from final and corresponding auto-contours. The final contours were generally larger, encompassing more slices in the superior or inferior directions. Among organs at risk (OARs), the heart, esophagus, spinal cord, and contralateral breast demonstrated significantly increased DSC and decreased HD95 postimplementation (all P < .05), except for the lungs, which showed inaccurate segmentation. Among target volumes, CTVn_L2, L3, L4, and the internal mammary node showed increased DSC and decreased HD95 postimplementation (all P < .05), although the increase was less pronounced than for the OARs. The analysis also covered factors contributing to significant differences, pattern identification, and outlier detection. CONCLUSIONS In our study, the adoption of an auto-contouring system was associated with an increased reliance on automated settings, underscoring both its utility and the potential risk of automation bias. Given these findings, we emphasize the importance of integrating stringent risk assessment and quality management strategies as a precaution for the optimal use of such systems.
Affiliation(s)
- Byung Min Lee
  - Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, Republic of Korea
  - Department of Radiation Oncology, Uijeongbu St. Mary's Hospital, Catholic University of Korea, Seoul, Republic of Korea
- Jin Sung Kim
  - Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, Republic of Korea
- Seo Hee Choi
  - Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, Republic of Korea
- Jong Won Park
  - Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, Republic of Korea
- Hwa Kyung Byun
  - Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, Republic of Korea
- Yong Bae Kim
  - Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, Republic of Korea
- Ik Jae Lee
  - Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, Republic of Korea
- Jee Suk Chang
  - Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, Republic of Korea

4. Moraitis A, Küper A, Tran-Gia J, Eberlein U, Chen Y, Seifert R, Shi K, Kim M, Herrmann K, Fragoso Costa P, Kersting D. Future Perspectives of Artificial Intelligence in Bone Marrow Dosimetry and Individualized Radioligand Therapy. Semin Nucl Med 2024;54:460-469. PMID: 39013673. DOI: 10.1053/j.semnuclmed.2024.06.003.
Abstract
Radioligand therapy is an emerging and effective treatment option for various types of malignancies, but it may be intricately linked to hematological side effects such as anemia, lymphopenia or thrombocytopenia. The safety and efficacy of novel theranostic agents, targeting increasingly complex targets, can be well served by comprehensive dosimetry. However, optimizing patient management and patient selection based on risk factors that predict adverse events, built upon reliable dose-response relations, remains an open need. In this context, artificial intelligence methods, especially machine learning and deep learning algorithms, may play a crucial role. This review provides an overview of upcoming opportunities for integrating artificial intelligence methods into the field of dosimetry in nuclear medicine: improving the accuracy of bone marrow and blood dosimetry, enabling early identification of potential hematological risk factors, and allowing for adaptive treatment planning. It further exemplifies inspirational success stories from neighboring disciplines that may be translated to nuclear medicine practice, and provides conceptual suggestions for future directions. In the future, we expect artificial intelligence-assisted (predictive) dosimetry combined with clinical parameters to pave the way towards truly personalized theranostics in radioligand therapy.
Affiliation(s)
- Alexandros Moraitis
  - Department of Nuclear Medicine, West German Cancer Center (WTZ), University Hospital Essen, University of Duisburg-Essen, Essen, Germany
- Alina Küper
  - Department of Nuclear Medicine, West German Cancer Center (WTZ), University Hospital Essen, University of Duisburg-Essen, Essen, Germany
- Johannes Tran-Gia
  - Department of Nuclear Medicine, University Hospital Würzburg, Würzburg, Germany
- Uta Eberlein
  - Department of Nuclear Medicine, University Hospital Würzburg, Würzburg, Germany
- Yizhou Chen
  - Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Switzerland
- Robert Seifert
  - Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Switzerland
- Kuangyu Shi
  - Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Switzerland
- Moon Kim
  - Institute for Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany
- Ken Herrmann
  - Department of Nuclear Medicine, West German Cancer Center (WTZ), University Hospital Essen, University of Duisburg-Essen, Essen, Germany
- Pedro Fragoso Costa
  - Department of Nuclear Medicine, West German Cancer Center (WTZ), University Hospital Essen, University of Duisburg-Essen, Essen, Germany
- David Kersting
  - Department of Nuclear Medicine, West German Cancer Center (WTZ), University Hospital Essen, University of Duisburg-Essen, Essen, Germany

5. Bakx N, Van der Sangen M, Theuws J, Bluemink J, Hurkmans C. Comparison of the use of a clinically implemented deep learning segmentation model with the simulated study setting for breast cancer patients receiving radiotherapy. Acta Oncol 2024;63:477-481. PMID: 38899395. PMCID: PMC11332522. DOI: 10.2340/1651-226x.2024.34986.
Abstract
BACKGROUND Deep learning (DL) models for auto-segmentation in radiotherapy have been extensively studied in retrospective and pilot settings. However, these studies might not reflect the clinical setting. This study compares the use of a clinically implemented, in-house trained DL segmentation model for breast cancer with a previously performed pilot study to assess possible differences in performance or acceptability. MATERIAL AND METHODS Sixty patients receiving whole breast radiotherapy, with or without an indication for locoregional radiotherapy, were included. Structures were qualitatively scored by radiotherapy technologists and radiation oncologists. Quantitative evaluation was performed using the Dice similarity coefficient (DSC), the 95th percentile of the Hausdorff distance (95%HD) and the surface DSC (sDSC), and the time needed for generating, checking, and correcting structures was measured. RESULTS Ninety-three percent of all contours in the clinic were scored as clinically acceptable or usable as a starting point, comparable to the 92% achieved in the pilot study. Compared with the pilot study, no significant changes in time reduction were achieved for organs at risk (OARs). For target volumes, significantly more time was needed than in the pilot study for patients including lymph node levels 1-4, although the time reduction was still 33% compared with manual segmentation. Almost all contours had better DSC and 95%HD values than inter-observer variation. Only CTVn4 scored worse on both metrics, and the thyroid had a higher 95%HD value. INTERPRETATION The use of the DL model in clinical practice is comparable to the pilot study, showing high acceptability rates and time reduction.
Affiliation(s)
- Nienke Bakx
  - Catharina Hospital, Department of Radiation Oncology, Eindhoven, The Netherlands
- Jacqueline Theuws
  - Catharina Hospital, Department of Radiation Oncology, Eindhoven, The Netherlands
- Johanna Bluemink
  - Catharina Hospital, Department of Radiation Oncology, Eindhoven, The Netherlands
- Coen Hurkmans
  - Catharina Hospital, Department of Radiation Oncology, Eindhoven, The Netherlands
  - Technical University Eindhoven, Departments of Applied Physics and Electrical Engineering, Eindhoven, The Netherlands

6. Frigau L, Conversano C, Antoch J. PARSEG: a computationally efficient approach for statistical validation of botanical seeds' images. Sci Rep 2024;14:6052. PMID: 38480768. PMCID: PMC10937986. DOI: 10.1038/s41598-024-56228-6.
Abstract
Human recognition and automated image validation are the most widely used approaches to validate the output of binary segmentation methods, but, as the number of pixels in an image easily exceeds several million, they become highly demanding from both practical and computational standpoints. We propose a method called PARSEG, which stands for PArtitioning, Random Selection, Estimation, and Generalization, these being the basic steps of the procedure. The proposed method enables statistical validation of binary images by selecting the minimum number of pixels from the original image to be used for validation without deteriorating the effectiveness of the validation procedure. It utilizes binary classifiers to accomplish image validation and selects the optimal sample of pixels according to a specific objective function. As a result, the computational complexity of the validation experiment is substantially reduced. The procedure's effectiveness is illustrated on images of approximately 13 million pixels from the field of seed recognition. PARSEG provides roughly the same precision as validation extended to the entire image while utilizing only about 4% of the original number of pixels, thus reducing the computing time required to validate a binary segmented image by about 90%.
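The core idea, validating a binary segmentation on a small random sample of pixels instead of on all of them, can be illustrated with synthetic data. This sketch is not the PARSEG algorithm itself (which partitions the image and optimises the sample size via an objective function); it only shows why a ~4% sample can estimate full-image agreement almost exactly, since the standard error of a proportion shrinks with the square root of the sample size:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for a ~13-megapixel image: a "ground truth" mask and a
# "segmentation" that disagrees on ~2% of pixels (illustrative numbers only).
truth = rng.random((3600, 3600)) < 0.3
pred = truth ^ (rng.random(truth.shape) < 0.02)   # flip ~2% of the labels

full_acc = (truth == pred).mean()                  # validate on every pixel

# Validate on a ~4% simple random sample of pixel positions instead.
sample = rng.choice(truth.size, truth.size // 25, replace=False)
sample_acc = (truth.ravel()[sample] == pred.ravel()[sample]).mean()
```

With roughly half a million sampled pixels, `sample_acc` tracks `full_acc` to within a fraction of a percentage point while touching 25 times fewer pixels, which is what makes sampling-based validation computationally attractive.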
Affiliation(s)
- Luca Frigau
  - Department of Economics and Business Sciences, University of Cagliari, Viale S. Ignazio da Laconi 17, 09123, Cagliari, Italy
- Claudio Conversano
  - Department of Economics and Business Sciences, University of Cagliari, Viale S. Ignazio da Laconi 17, 09123, Cagliari, Italy
- Jaromír Antoch
  - Faculty of Mathematics and Physics, Charles University, Sokolovská 83, 186 75, Prague, Czech Republic
  - Faculty of Informatics and Statistics, Department of Econometrics, Prague University of Economics and Business, Winston Churchill Square 1938/4, 130 67, Prague 3, Czech Republic

7. Cobanaj M, Corti C, Dee EC, McCullum L, Boldrini L, Schlam I, Tolaney SM, Celi LA, Curigliano G, Criscitiello C. Advancing equitable and personalized cancer care: Novel applications and priorities of artificial intelligence for fairness and inclusivity in the patient care workflow. Eur J Cancer 2024;198:113504. PMID: 38141549. DOI: 10.1016/j.ejca.2023.113504.
Abstract
Patient care workflows are highly multimodal and intertwined: the intersection of data outputs provided from different disciplines and in different formats remains one of the main challenges of modern oncology. Artificial Intelligence (AI) has the potential to revolutionize the current clinical practice of oncology owing to advancements in digitalization, database expansion, computational technologies, and algorithmic innovations that facilitate discernment of complex relationships in multimodal data. Within oncology, radiation therapy (RT) represents an increasingly complex working procedure, involving many labor-intensive and operator-dependent tasks. In this context, AI has gained momentum as a powerful tool to standardize treatment performance and reduce inter-observer variability in a time-efficient manner. This review explores the hurdles associated with the development, implementation, and maintenance of AI platforms and highlights current measures in place to address them. In examining AI's role in oncology workflows, we underscore that a thorough and critical consideration of these challenges is the only way to ensure equitable and unbiased care delivery, ultimately serving patients' survival and quality of life.
Affiliation(s)
- Marisa Cobanaj
  - National Center for Radiation Research in Oncology, OncoRay, Helmholtz-Zentrum Dresden-Rossendorf, Dresden, Germany
- Chiara Corti
  - Breast Oncology Program, Dana-Farber Brigham Cancer Center, Boston, MA, USA
  - Harvard Medical School, Boston, MA, USA
  - Division of New Drugs and Early Drug Development for Innovative Therapies, European Institute of Oncology, IRCCS, Milan, Italy
  - Department of Oncology and Hematology-Oncology (DIPO), University of Milan, Milan, Italy
- Edward C Dee
  - Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Lucas McCullum
  - Department of Radiation Oncology, MD Anderson Cancer Center, Houston, TX, USA
- Laura Boldrini
  - Division of New Drugs and Early Drug Development for Innovative Therapies, European Institute of Oncology, IRCCS, Milan, Italy
  - Department of Oncology and Hematology-Oncology (DIPO), University of Milan, Milan, Italy
- Ilana Schlam
  - Department of Hematology and Oncology, Tufts Medical Center, Boston, MA, USA
  - Harvard T.H. Chan School of Public Health, Boston, MA, USA
- Sara M Tolaney
  - Breast Oncology Program, Dana-Farber Brigham Cancer Center, Boston, MA, USA
  - Harvard Medical School, Boston, MA, USA
  - Department of Medical Oncology, Dana-Farber Cancer Institute, Boston, MA, USA
- Leo A Celi
  - Department of Medicine, Beth Israel Deaconess Medical Center, Boston, MA, USA
  - Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, MA, USA
  - Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, USA
- Giuseppe Curigliano
  - Division of New Drugs and Early Drug Development for Innovative Therapies, European Institute of Oncology, IRCCS, Milan, Italy
  - Department of Oncology and Hematology-Oncology (DIPO), University of Milan, Milan, Italy
- Carmen Criscitiello
  - Division of New Drugs and Early Drug Development for Innovative Therapies, European Institute of Oncology, IRCCS, Milan, Italy
  - Department of Oncology and Hematology-Oncology (DIPO), University of Milan, Milan, Italy

8. Bibault JE, Giraud P. Deep learning for automated segmentation in radiotherapy: a narrative review. Br J Radiol 2024;97:13-20. PMID: 38263838. PMCID: PMC11027240. DOI: 10.1093/bjr/tqad018.
Abstract
The segmentation of organs and structures is a critical component of radiation therapy planning, and manual segmentation is a laborious and time-consuming task. Interobserver variability can also impact the outcomes of radiation therapy. Deep neural networks have recently gained attention for their ability to automate segmentation tasks, with convolutional neural networks (CNNs) being a popular approach. This article provides a descriptive review of the literature on deep learning (DL) techniques for segmentation in radiation therapy planning. The review covers five clinical sub-sites (brain, head and neck, lung, abdominal, and pelvic cancers) and finds that U-net is the most commonly used CNN architecture. The majority of DL segmentation articles in radiation therapy planning have concentrated on normal tissue structures. N-fold cross-validation was commonly employed, without external validation. This research area is expanding quickly, and standardization of metrics and independent validation are critical for benchmarking and comparing proposed methods.
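The N-fold cross-validation scheme the review flags as the common (internal-only) validation strategy splits patients, not individual images, into folds. A minimal sketch with a hypothetical 20-patient cohort and 5 folds (the model training itself is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
patients = rng.permutation(20)         # hypothetical patient IDs, shuffled
folds = np.array_split(patients, 5)    # 5 folds of 4 patients each

for k, test_ids in enumerate(folds):
    train_ids = np.concatenate([f for i, f in enumerate(folds) if i != k])
    # ... train the segmentation model on train_ids,
    #     then evaluate e.g. mean DSC on test_ids ...
    assert set(train_ids).isdisjoint(test_ids)   # no patient appears in both
```

External validation, which the review notes was generally missing, would additionally require evaluating on a held-out dataset from another institution or scanner, not just another fold of the same cohort.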
Affiliation(s)
- Jean-Emmanuel Bibault
  - Radiation Oncology Department, Georges Pompidou European Hospital, Assistance Publique-Hôpitaux de Paris, Université de Paris Cité, Paris, 75015, France
  - INSERM UMR 1138, Centre de Recherche des Cordeliers, Paris, 75006, France
- Paul Giraud
  - INSERM UMR 1138, Centre de Recherche des Cordeliers, Paris, 75006, France
  - Radiation Oncology Department, Pitié Salpêtrière Hospital, Assistance Publique-Hôpitaux de Paris, Paris Sorbonne Universités, Paris, 75013, France

9. Babaeipour R, Ouriadov A, Fox MS. Deep Learning Approaches for Quantifying Ventilation Defects in Hyperpolarized Gas Magnetic Resonance Imaging of the Lung: A Review. Bioengineering (Basel) 2023;10:1349. PMID: 38135940. PMCID: PMC10740978. DOI: 10.3390/bioengineering10121349.
Abstract
This paper provides an in-depth overview of Deep Neural Networks and their application to the segmentation and analysis of lung Magnetic Resonance Imaging (MRI) scans, specifically focusing on hyperpolarized gas MRI and the quantification of lung ventilation defects. The fundamentals of Deep Neural Networks are presented first, laying the groundwork for exploring their use in hyperpolarized gas MRI. Five distinct studies are examined, each leveraging unique deep learning architectures and data augmentation techniques to optimize model performance. These studies encompass a range of approaches, including 3D Convolutional Neural Networks, cascaded U-Net models, Generative Adversarial Networks, and nnU-net for hyperpolarized gas MRI segmentation. The findings highlight the potential of deep learning methods for the segmentation and analysis of lung MRI scans and emphasize the need for consensus on lung ventilation segmentation methods.
Affiliation(s)
- Ramtin Babaeipour
  - School of Biomedical Engineering, Faculty of Engineering, The University of Western Ontario, London, ON N6A 3K7, Canada
- Alexei Ouriadov
  - School of Biomedical Engineering, Faculty of Engineering, The University of Western Ontario, London, ON N6A 3K7, Canada
  - Department of Physics and Astronomy, The University of Western Ontario, London, ON N6A 3K7, Canada
  - Lawson Health Research Institute, London, ON N6C 2R5, Canada
- Matthew S. Fox
  - Department of Physics and Astronomy, The University of Western Ontario, London, ON N6A 3K7, Canada
  - Lawson Health Research Institute, London, ON N6C 2R5, Canada

10. Vivancos Bargalló H, Stick LB, Korreman SS, Kronborg C, Nielsen MM, Borgen AC, Offersen BV, Nørrevang O, Kallehauge JF. Classification of laterality and mastectomy/lumpectomy for breast cancer patients for improved performance of deep learning auto segmentation. Acta Oncol 2023;62:1546-1550. PMID: 37584197. DOI: 10.1080/0284186x.2023.2245965.
Affiliation(s)
- Helena Vivancos Bargalló
  - Medical Physics Department, Hospital de la Santa Creu i Sant Pau, Barcelona, Spain
  - Danish Center for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
- Stine Sofia Korreman
  - Danish Center for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
  - Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Camilla Kronborg
  - Danish Center for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
  - Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Mathias M Nielsen
  - Danish Center for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
- Birgitte Vrou Offersen
  - Department of Experimental Clinical Oncology, Aarhus University Hospital, Aarhus, Denmark
- Ole Nørrevang
  - Danish Center for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
- Jesper F Kallehauge
  - Danish Center for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
  - Department of Clinical Medicine, Aarhus University, Aarhus, Denmark

11. Mikalsen SG, Skjøtskift T, Flote VG, Hämäläinen NP, Heydari M, Rydén-Eilertsen K. Extensive clinical testing of Deep Learning Segmentation models for thorax and breast cancer radiotherapy planning. Acta Oncol 2023;62:1184-1193. PMID: 37883678. DOI: 10.1080/0284186x.2023.2270152.
Abstract
BACKGROUND The performance of deep learning segmentation (DLS) models for automatic organ extraction from CT images in the thorax and breast regions was investigated. Furthermore, the readiness and feasibility of integrating DLS into clinical practice were addressed by measuring the potential time savings and dosimetric impact. MATERIAL AND METHODS Thirty patients referred to radiotherapy for breast cancer were prospectively included. A total of 23 clinically relevant left- and right-sided organs were contoured manually on CT images according to ESTRO guidelines. Next, auto-segmentation was executed, and the geometric agreement between the auto-segmented and manually contoured organs was qualitatively assessed on a scale from 0 (not acceptable) to 3 (no corrections). A quantitative validation was carried out by calculating Dice coefficients (DSC) and the 95th percentile of Hausdorff distances (HD95). The dosimetric impact of optimizing the treatment plans on the uncorrected DLS contours was investigated through a dose coverage analysis using DVH values of the manually delineated contours as references. RESULTS The qualitative analysis showed that 93% of the DLS-generated OAR contours did not need corrections, except for the heart, where 67% of the contours needed corrections. The majority of DLS-generated CTVs needed corrections, whereas a minority were deemed not acceptable. Still, using the DLS model for CTV and heart delineation is on average 14 minutes faster. An average DSC of 0.91 and HD95 of 9.8 mm were found for the left and right breasts. Likewise, an average DSC in the range 0.66-0.76 and HD95 in the range 7.04-12.05 mm were found for the lymph nodes. CONCLUSION The validation showed that the DLS-generated OAR contours can be used clinically. Corrections were required for most of the DLS-generated CTVs, which therefore warrant more attention before the DLS models can be implemented clinically.
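Several entries in this list, including the one above, report the Dice similarity coefficient (DSC) between auto-segmented and manual contours. As a minimal illustration of how this metric is computed for binary masks (the masks below are toy data, not from the study):

```python
import numpy as np

def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy 10x10 masks: a 6x6 "auto" contour vs. a shifted 6x6 "manual" contour
auto = np.zeros((10, 10), dtype=bool)
auto[2:8, 2:8] = True                  # 36 voxels
manual = np.zeros((10, 10), dtype=bool)
manual[4:10, 4:10] = True              # 36 voxels; the overlap is 4x4 = 16
print(round(dice_coefficient(auto, manual), 3))  # → 0.444
```

A DSC of 1 means perfect overlap, so the 0.91 reported above for the breasts indicates close but not exact agreement.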
Affiliation(s)
- Mojgan Heydari
- Department of Medical Physics, Oslo University Hospital, Oslo, Norway
12
Hou Z, Gao S, Liu J, Yin Y, Zhang L, Han Y, Yan J, Li S. Clinical evaluation of deep learning-based automatic clinical target volume segmentation: a single-institution multi-site tumor experience. LA RADIOLOGIA MEDICA 2023; 128:1250-1261. [PMID: 37597126 DOI: 10.1007/s11547-023-01690-x] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/12/2023] [Accepted: 07/25/2023] [Indexed: 08/21/2023]
Abstract
PURPOSE The large variability in tumor appearance and shape makes manual delineation of the clinical target volume (CTV) time-consuming, and the results depend on the oncologists' experience. Whereas deep learning techniques have allowed oncologists to automate CTV delineation, multi-site tumor analysis is often lacking in the literature. This study aimed to evaluate deep learning models that automatically contour CTVs of tumors at various sites on computed tomography (CT) images from objective and subjective perspectives. METHODS AND MATERIALS 577 patients were selected for the present study, including nasopharyngeal (n = 34), esophageal (n = 40), breast-conserving surgery (BCS) (left-sided, n = 71; right-sided, n = 71), breast-radical mastectomy (BRM) (left-sided, n = 43; right-sided, n = 37), cervical (radical radiotherapy, n = 45; postoperative, n = 85), prostate (n = 42), and rectal (n = 109) carcinomas. CTV contours manually delineated by radiation oncologists served as the ground truth. Four models were evaluated: Flexnet, Unet, Vnet, and Segresnet, which are commercially available in the medical product "AccuLearning AI model training platform". The data were divided into training, validation, and testing sets at a ratio of 5:1:4. Geometric metrics, including the Dice Similarity Coefficient (DSC) and Hausdorff Distance (HD), were calculated for objective evaluation. For subjective assessment, oncologists visually rated the segmentation contours of the testing set. RESULTS High correlations were observed between automatic and manual contours. Based on the results of the independent test group, most of the patients achieved satisfactory quantitative results (DSC > 0.8), except for patients with esophageal carcinoma (DSC: 0.62-0.64). The subjective review indicated that 82.65% of predicted CTVs were scored either as clinically acceptable (8.68%) or as requiring minor revision (73.97%), and none were scored as rejected.
CONCLUSION This experimental work demonstrated that auto-generated contours can serve as an initial template to help oncologists save time in CTV delineation. The deep learning-based auto-segmentations achieve acceptable accuracy and show the potential to improve clinical efficiency for radiotherapy of a variety of cancers.
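The 5:1:4 train/validation/test division described above can be sketched as a shuffled proportional split; the seed and the use of a plain shuffle are illustrative assumptions, not details from the study:

```python
import random

def split_dataset(cases, ratios=(5, 1, 4), seed=0):
    """Shuffle cases, then split them into train/validation/test by ratio."""
    cases = list(cases)
    random.Random(seed).shuffle(cases)
    total = sum(ratios)
    n_train = len(cases) * ratios[0] // total
    n_val = len(cases) * ratios[1] // total
    return (cases[:n_train],
            cases[n_train:n_train + n_val],
            cases[n_train + n_val:])

train, val, test = split_dataset(range(577))  # 577 patients, as in the study
print(len(train), len(val), len(test))  # → 288 57 232
```

In practice such splits are usually stratified by tumor site so that each subset covers all nine cohorts; the sketch above ignores stratification.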
Affiliation(s)
- Zhen Hou
- The Comprehensive Cancer Centre of Nanjing Drum Tower Hospital, Affiliated Hospital of Medical School, Nanjing University, Nanjing, 210000, Jiangsu, China
- Shanbao Gao
- The Comprehensive Cancer Centre of Nanjing Drum Tower Hospital, Affiliated Hospital of Medical School, Nanjing University, Nanjing, 210000, Jiangsu, China
- Juan Liu
- The Comprehensive Cancer Centre of Nanjing Drum Tower Hospital, Affiliated Hospital of Medical School, Nanjing University, Nanjing, 210000, Jiangsu, China
- Yicai Yin
- The Comprehensive Cancer Centre of Nanjing Drum Tower Hospital, Affiliated Hospital of Medical School, Nanjing University, Nanjing, 210000, Jiangsu, China
- Ling Zhang
- The Comprehensive Cancer Centre of Nanjing Drum Tower Hospital, Affiliated Hospital of Medical School, Nanjing University, Nanjing, 210000, Jiangsu, China
- Yongchao Han
- The Comprehensive Cancer Centre of Nanjing Drum Tower Hospital, Affiliated Hospital of Medical School, Nanjing University, Nanjing, 210000, Jiangsu, China
- Jing Yan
- The Comprehensive Cancer Centre of Nanjing Drum Tower Hospital, Affiliated Hospital of Medical School, Nanjing University, Nanjing, 210000, Jiangsu, China
- Shuangshuang Li
- The Comprehensive Cancer Centre of Nanjing Drum Tower Hospital, Affiliated Hospital of Medical School, Nanjing University, Nanjing, 210000, Jiangsu, China

13
Kazemimoghadam M, Yang Z, Chen M, Rahimi A, Kim N, Alluri P, Nwachukwu C, Lu W, Gu X. A deep learning approach for automatic delineation of clinical target volume in stereotactic partial breast irradiation (S-PBI). Phys Med Biol 2023; 68:10.1088/1361-6560/accf5e. [PMID: 37084739 PMCID: PMC10325028 DOI: 10.1088/1361-6560/accf5e] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2022] [Accepted: 04/21/2023] [Indexed: 04/23/2023]
Abstract
Accurate and efficient delineation of the clinical target volume (CTV) is of utmost significance in post-operative breast cancer radiotherapy. However, CTV delineation is challenging, as the exact extent of microscopic disease encompassed by the CTV is not visualizable in radiological images and remains uncertain. We proposed to mimic physicians' contouring practice for CTV segmentation in stereotactic partial breast irradiation (S-PBI), where the CTV is derived from the tumor bed volume (TBV) via a margin expansion followed by correction of the extensions for anatomical barriers of tumor invasion (e.g. skin, chest wall). We proposed a deep learning model in which CT images and the corresponding TBV masks formed a multi-channel input for a 3D U-Net based architecture. The design guided the model to encode location-related image features and directed the network to focus on the TBV to initiate CTV segmentation. Gradient-weighted class activation map (Grad-CAM) visualizations of the model predictions revealed that the extension rules and geometric/anatomical boundaries were learnt during model training, assisting the network in limiting the expansion to a certain distance from the chest wall and the skin. We retrospectively collected 175 prone CT images from 35 post-operative breast cancer patients who received a 5-fraction partial breast irradiation regimen on GammaPod. The 35 patients were randomly split into training (25), validation (5) and test (5) sets. Our model achieved a mean (standard deviation) of 0.94 (±0.02), 2.46 (±0.5) mm, and 0.53 (±0.14) mm for the Dice similarity coefficient, 95th percentile Hausdorff distance, and average symmetric surface distance, respectively, on the test set. The results are promising for improving the efficiency and accuracy of CTV delineation during the online treatment planning procedure.
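The multi-channel input described above (CT plus TBV mask feeding a 3D U-Net) amounts to stacking the two volumes along a new channel axis before they enter the network. A minimal sketch; the array shapes are illustrative, not those of the study:

```python
import numpy as np

def make_multichannel_input(ct_volume: np.ndarray, tbv_mask: np.ndarray) -> np.ndarray:
    """Stack a CT volume and its tumor bed volume (TBV) mask as two channels.

    Returns an array of shape (channels, depth, height, width), the layout
    a 3D U-Net style network typically consumes.
    """
    assert ct_volume.shape == tbv_mask.shape, "CT and mask must be co-registered"
    return np.stack([ct_volume.astype(np.float32),
                     tbv_mask.astype(np.float32)], axis=0)

ct = np.random.rand(32, 64, 64)               # toy CT volume
tbv = np.zeros((32, 64, 64))
tbv[10:20, 20:40, 20:40] = 1.0                # toy tumor bed mask
x = make_multichannel_input(ct, tbv)
print(x.shape)  # → (2, 32, 64, 64)
```

Giving the mask its own channel is what lets the network condition its margin expansion on the TBV location rather than rediscovering it from intensities alone.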
Affiliation(s)
- Mahdieh Kazemimoghadam
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas TX, 75390 USA
- Zi Yang
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas TX, 75390 USA
- Mingli Chen
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas TX, 75390 USA
- Asal Rahimi
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas TX, 75390 USA
- Nathan Kim
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas TX, 75390 USA
- Prasanna Alluri
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas TX, 75390 USA
- Chika Nwachukwu
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas TX, 75390 USA
- Weiguo Lu
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas TX, 75390 USA
- Xuejun Gu
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas TX, 75390 USA
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305

14
Hayashi N. [15. AI-assisted MRI Examination and Analysis]. Nihon Hoshasen Gijutsu Gakkai Zasshi 2023; 79:187-192. [PMID: 36804809 DOI: 10.6009/jjrt.2023-2154] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/21/2023]
Affiliation(s)
- Norio Hayashi
- School of Radiological Technology, Gunma Prefectural College of Health Sciences
15
Boubacar Goga A. Artificial Intelligence at the Service of Medical Imaging in the Detection of Breast Tumors. ARTIF INTELL 2023. [DOI: 10.5772/intechopen.108739] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/26/2023]
Abstract
Artificial intelligence is currently capable of imitating clinical reasoning in order to make a diagnosis, in particular that of breast cancer. This is possible thanks to the exponential increase in medical images. Indeed, artificial intelligence systems are used to assist doctors, not to replace them. Breast cancer is a cancerous tumor that can invade and destroy nearby tissue, so early and reliable detection of this disease is a great asset for the medical field. Medical imaging techniques are commonly used to diagnose this disease. Given the drawbacks of these techniques and the diagnostic errors related to physician fatigue or inexperience, this work shows how artificial intelligence methods, in particular artificial neural networks (ANN), deep learning (DL), support vector machines (SVM), expert systems and fuzzy logic, can be applied to breast imaging with the aim of improving the detection of this global scourge. Finally, the proposed system is composed of two essential steps: a tumor detection phase and a diagnostic phase that decides whether the tumor is benign or malignant.
16
Ranjbarzadeh R, Dorosti S, Jafarzadeh Ghoushchi S, Caputo A, Tirkolaee EB, Ali SS, Arshadi Z, Bendechache M. Breast tumor localization and segmentation using machine learning techniques: Overview of datasets, findings, and methods. Comput Biol Med 2023; 152:106443. [PMID: 36563539 DOI: 10.1016/j.compbiomed.2022.106443] [Citation(s) in RCA: 10] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2022] [Revised: 11/24/2022] [Accepted: 12/15/2022] [Indexed: 12/23/2022]
Abstract
The Global Cancer Statistics 2020 reported breast cancer (BC) as the most commonly diagnosed cancer type. Early detection of this type of cancer would therefore reduce the risk of death from it. Breast imaging techniques are among the most frequently used techniques to detect the position of cancerous cells or suspicious lesions. Computer-aided diagnosis (CAD) is a particular generation of computer systems that assist experts in detecting medical image abnormalities. In recent decades, CAD has applied deep learning (DL) and machine learning approaches to perform complex medical tasks in the computer vision area and to improve decision-making for doctors and radiologists. The most popular and widely used image processing technique in CAD systems is segmentation, which consists of extracting the region of interest (ROI) through various techniques. This research provides a detailed description of the main categories of segmentation procedures, which are classified into three classes: supervised, unsupervised, and DL. The main aim of this work is to provide an overview of each of these techniques and discuss their pros and cons. This will help researchers better understand these techniques and assist them in choosing the appropriate method for a given use case.
Collapse
Affiliation(s)
- Ramin Ranjbarzadeh
- School of Computing, Faculty of Engineering and Computing, Dublin City University, Ireland
- Shadi Dorosti
- Department of Industrial Engineering, Urmia University of Technology, Urmia, Iran
- Annalina Caputo
- School of Computing, Faculty of Engineering and Computing, Dublin City University, Ireland
- Sadia Samar Ali
- Department of Industrial Engineering, Faculty of Engineering, King Abdulaziz University, Jeddah, Saudi Arabia
- Zahra Arshadi
- Faculty of Electronics, Telecommunications and Physics Engineering, Polytechnic University, Turin, Italy
- Malika Bendechache
- Lero & ADAPT Research Centres, School of Computer Science, University of Galway, Ireland

17
Xie H, Chen Z, Deng J, Zhang J, Duan H, Li Q. Automatic segmentation of the gross target volume in radiotherapy for lung cancer using transresSEUnet 2.5D Network. J Transl Med 2022; 20:524. [PMID: 36371220 PMCID: PMC9652981 DOI: 10.1186/s12967-022-03732-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2022] [Accepted: 10/28/2022] [Indexed: 11/15/2022] Open
Abstract
Objective This paper proposes a method using the TransResSEUnet2.5D network for accurate automatic segmentation of the gross target volume (GTV) in radiotherapy for lung cancer. Methods A total of 11,370 computed tomography (CT) images from 137 cases of lung cancer patients undergoing radiotherapy, with GTVs delineated by radiotherapists, were used as the training set; 1642 CT images from 20 cases were used as the validation set, and 1685 CT images from 20 cases were used as the test set. The proposed network was tuned and trained to obtain the best segmentation model, and its performance was measured by the Dice Similarity Coefficient (DSC) and the 95% Hausdorff distance (HD95). Lastly, to demonstrate the accuracy of the automatic segmentation of the proposed network, all possible mirrors of the input images were put into Unet2D, Unet2.5D, Unet3D, ResSEUnet3D, ResSEUnet2.5D, and TransResUnet2.5D, and their respective segmentation performances were compared and assessed. Results The segmentation results of the test set showed that TransResSEUnet2.5D performed best on the DSC (84.08 ± 0.04) %, HD95 (8.11 ± 3.43) mm and time (6.50 ± 1.31) s metrics compared to the other networks. Conclusions The TransResSEUnet 2.5D proposed in this study can automatically segment the GTV of radiotherapy for lung cancer patients with greater accuracy.
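The HD95 metric used here (and in most entries of this list) replaces the maximum in the classical Hausdorff distance with a 95th percentile, making the value robust to a few outlier surface points. A minimal sketch over two point sets; the coordinates are toy data:

```python
import numpy as np

def hd95(points_a, points_b) -> float:
    """Symmetric 95th-percentile Hausdorff distance between two point sets."""
    a = np.asarray(points_a, dtype=float)
    b = np.asarray(points_b, dtype=float)
    # Pairwise Euclidean distances, shape (len(a), len(b))
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    d_ab = d.min(axis=1)   # each point of A to its nearest point of B
    d_ba = d.min(axis=0)   # each point of B to its nearest point of A
    return float(np.percentile(np.concatenate([d_ab, d_ba]), 95))

contour = [(x, y) for x in range(10) for y in range(10)]
shifted = [(x + 2.0, y) for x, y in contour]   # same contour, shifted 2 mm in x
print(round(hd95(contour, shifted), 2))  # → 2.0
```

For real volumes this would be computed on surface voxels only; the all-pairs distance matrix above is O(|A||B|) and meant purely for illustration.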
18
Hussain S, Xi X, Ullah I, Inam SA, Naz F, Shaheed K, Ali SA, Tian C. A Discriminative Level Set Method with Deep Supervision for Breast Tumor Segmentation. Comput Biol Med 2022; 149:105995. [DOI: 10.1016/j.compbiomed.2022.105995] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2022] [Revised: 08/05/2022] [Accepted: 08/14/2022] [Indexed: 11/03/2022]
19
Breast MRI Tumor Automatic Segmentation and Triple-Negative Breast Cancer Discrimination Algorithm Based on Deep Learning. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2022; 2022:2541358. [PMID: 36092784 PMCID: PMC9453096 DOI: 10.1155/2022/2541358] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/20/2022] [Revised: 07/19/2022] [Accepted: 08/20/2022] [Indexed: 01/23/2023]
Abstract
Background Breast cancer is a kind of cancer that starts in the epithelial tissue of the breast. Breast cancer has been on the rise in recent years, with a younger generation developing the disease. Magnetic resonance imaging (MRI) plays an important role in breast tumor detection and treatment planning in today's clinical practice. As manual segmentation grows more time-consuming and the observed subject matter becomes more diversified, automated segmentation becomes more appealing. Methodology For MRI breast tumor segmentation, we propose a CNN-SVM network in which a support vector machine classifies the label output of the trained convolutional neural network. During the testing phase, the convolutional neural network's labeled output, together with the test grayscale image, is passed to the SVM classifier for accurate segmentation. Results We tested on the collected breast tumor dataset and found that our proposed combined CNN-SVM network achieved 0.93, 0.95, and 0.92 on the DSC, PPV, and sensitivity indices, respectively. We also compared against the segmentation frameworks of other papers, and the comparison results show that our CNN-SVM network performs better and can accurately segment breast tumors. Conclusion Our proposed CNN-SVM combined network achieves good segmentation results on the breast tumor dataset. The method can adapt to differences between breast tumors and segment them accurately and efficiently. It is of great significance for identifying triple-negative breast cancer in the future.
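The two-stage idea above (a network produces per-pixel scores, and a conventional classifier makes the final call) can be sketched as follows. To keep the example self-contained, a ridge-regression linear classifier stands in for the SVM and random score maps stand in for the CNN output; none of this is the paper's actual model:

```python
import numpy as np

def fit_linear_classifier(X: np.ndarray, y: np.ndarray, lam: float = 1e-3) -> np.ndarray:
    """Least-squares linear classifier with ±1 targets (a stand-in for the SVM stage)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])        # append a bias column
    return np.linalg.solve(Xb.T @ Xb + lam * np.eye(Xb.shape[1]), Xb.T @ y)

def predict(X: np.ndarray, w: np.ndarray) -> np.ndarray:
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.sign(Xb @ w)

rng = np.random.default_rng(0)
n = 200
# Per-pixel features: column 0 ~ a "CNN score", column 1 ~ grayscale intensity
tumor = rng.normal(loc=[0.8, 0.7], scale=0.1, size=(n, 2))
background = rng.normal(loc=[0.2, 0.3], scale=0.1, size=(n, 2))
X = np.vstack([tumor, background])
y = np.concatenate([np.ones(n), -np.ones(n)])

w = fit_linear_classifier(X, y)
accuracy = float(np.mean(predict(X, w) == y))
print(f"training accuracy: {accuracy:.2f}")
```

The appeal of the hybrid design is that the second-stage classifier can sharpen decisions near tumor boundaries where the raw network scores are ambiguous.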
20
Wang J, Chen Y, Xie H, Luo L, Tang Q. Evaluation of auto-segmentation for EBRT planning structures using deep learning-based workflow on cervical cancer. Sci Rep 2022; 12:13650. [PMID: 35953516 PMCID: PMC9372087 DOI: 10.1038/s41598-022-18084-0] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2022] [Accepted: 08/04/2022] [Indexed: 11/12/2022] Open
Abstract
A deep learning (DL) based approach aims to construct a full workflow solution for cervical cancer with external beam radiation therapy (EBRT) and brachytherapy (BT). The purpose of this study was to evaluate the accuracy of EBRT planning structures derived from DL based auto-segmentation compared with standard manual delineation. An auto-segmentation model based on convolutional neural networks (CNN) was developed to delineate clinical target volumes (CTVs) and organs at risk (OARs) in cervical cancer radiotherapy. A total of 300 retrospective patients from multiple cancer centers were used to train and validate the model, and 75 independent cases were selected as testing data. The accuracy of auto-segmented contours was evaluated using geometric and dosimetric metrics including the Dice similarity coefficient (DSC), 95% Hausdorff distance (95%HD), Jaccard coefficient (JC) and dose-volume index (DVI). The correlation between geometric metrics and dosimetric differences was assessed by Spearman's correlation analysis. The right and left kidneys, bladder, and right and left femoral heads showed superior geometric accuracy (DSC: 0.88–0.93; 95%HD: 1.03 mm–2.96 mm; JC: 0.78–0.88), and the Bland–Altman test showed dose agreement for these contours (P > 0.05) between the manual and DL based methods. Wilcoxon's signed-rank test indicated significant dosimetric differences in the CTV, spinal cord and pelvic bone (P < 0.001). A strong correlation between the mean dose of the pelvic bone and its 95%HD (R = 0.843, P < 0.001) was found in the Spearman's correlation analysis, and the remaining structures showed a weak link between dosimetric differences and all of the geometric metrics. Our auto-segmentation achieved satisfactory agreement for most EBRT planning structures, although the clinical acceptance of the CTV remains a concern. DL based auto-segmentation is an essential component of the cervical cancer workflow that can generate accurate contours.
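Spearman's correlation, used above to link geometric error to dose deviation, is Pearson correlation computed on ranks. A minimal sketch without tie handling; the per-patient values below are invented for illustration:

```python
import numpy as np

def spearman_rho(x, y) -> float:
    """Spearman rank correlation coefficient (no tie correction)."""
    rx = np.argsort(np.argsort(x)).astype(float)   # ranks of x
    ry = np.argsort(np.argsort(y)).astype(float)   # ranks of y
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx**2).sum() * (ry**2).sum()))

# Toy per-patient values: HD95 of a structure vs. its mean-dose difference
hd95_vals = [1.2, 3.4, 2.2, 5.0, 4.1, 2.8]
dose_diff = [0.3, 1.1, 0.7, 2.4, 1.9, 0.8]
print(round(spearman_rho(hd95_vals, dose_diff), 3))  # → 1.0
```

Because the two toy series are perfectly rank-ordered, the coefficient is exactly 1; real data, as in the pelvic-bone result above (R = 0.843), sit below that.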
Affiliation(s)
- Jiahao Wang
- Department of Radiation Oncology, Women's Hospital, School of Medicine, Zhejiang University, Hangzhou, 310006, Zhejiang, China
- Yuanyuan Chen
- Department of Radiation Oncology, Women's Hospital, School of Medicine, Zhejiang University, Hangzhou, 310006, Zhejiang, China
- Hongling Xie
- Department of Radiation Oncology, Women's Hospital, School of Medicine, Zhejiang University, Hangzhou, 310006, Zhejiang, China
- Lumeng Luo
- Department of Radiation Oncology, Women's Hospital, School of Medicine, Zhejiang University, Hangzhou, 310006, Zhejiang, China
- Qiu Tang
- Department of Radiation Oncology, Women's Hospital, School of Medicine, Zhejiang University, Hangzhou, 310006, Zhejiang, China

21
Jin L, Chen Q, Shi A, Wang X, Ren R, Zheng A, Song P, Zhang Y, Wang N, Wang C, Wang N, Cheng X, Wang S, Ge H. Deep Learning for Automated Contouring of Gross Tumor Volumes in Esophageal Cancer. Front Oncol 2022; 12:892171. [PMID: 35924169 PMCID: PMC9339638 DOI: 10.3389/fonc.2022.892171] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2022] [Accepted: 06/21/2022] [Indexed: 12/03/2022] Open
Abstract
Purpose The aim of this study was to propose and evaluate a novel mixed three-dimensional (3D) V-Net and two-dimensional (2D) U-Net architecture (VUMix-Net) for fully automatic and accurate delineation of gross tumor volume (GTV) contours in esophageal cancer (EC). Methods We collected the computed tomography (CT) scans of 215 EC patients. 3D V-Net, 2D U-Net, and VUMix-Net were developed and applied simultaneously to delineate GTVs. The Dice similarity coefficient (DSC) and 95th-percentile Hausdorff distance (95HD) were used as quantitative metrics to evaluate the performance of the three models on ECs from different segments. The CT data of 20 patients were randomly selected as the ground truth (GT) masks, and the corresponding delineation results were generated by artificial intelligence (AI). Score differences between the two groups (GT versus AI) and the evaluation consistency were compared. Results In all patients, there was a significant difference in the 2D DSCs from U-Net, V-Net, and VUMix-Net (p=0.01). In addition, VUMix-Net achieved better 3D-DSC and 95HD values. There was a significant difference among the 3D-DSC (mean ± STD) and 95HD values for upper-, middle-, and lower-segment EC (p<0.001), with the middle-segment values being the best. In middle-segment EC, VUMix-Net achieved the highest 2D-DSC values (p<0.001) and the lowest 95HD values (p=0.044). Conclusion The new model (VUMix-Net) showed certain advantages in delineating the GTVs of EC. Additionally, it can generate EC GTVs that meet clinical requirements and have the same quality as human-generated contours. The system demonstrated the best performance for ECs of the middle segment.
Collapse
Affiliation(s)
- Linzhi Jin
- Department of Radiation Oncology, The Affiliated Cancer Hospital of Zhengzhou University & Henan Cancer Hospital, Zhengzhou, China
- Department of Radiation Oncology, Anyang Tumor Hospital, The Fourth Affiliated Hospital of Henan University of Science and Technology, Anyang, China
- Qi Chen
- Department of Research and Development, MedMind Technology Co, Ltd., Beijing, China
- Aiwei Shi
- Department of Research and Development, MedMind Technology Co, Ltd., Beijing, China
- Xiaomin Wang
- Department of Radiation Oncology, Anyang Tumor Hospital, The Fourth Affiliated Hospital of Henan University of Science and Technology, Anyang, China
- Runchuan Ren
- Department of Radiation Oncology, Anyang Tumor Hospital, The Fourth Affiliated Hospital of Henan University of Science and Technology, Anyang, China
- Anping Zheng
- Department of Radiation Oncology, Anyang Tumor Hospital, The Fourth Affiliated Hospital of Henan University of Science and Technology, Anyang, China
- Ping Song
- Department of Radiation Oncology, Anyang Tumor Hospital, The Fourth Affiliated Hospital of Henan University of Science and Technology, Anyang, China
- Yaowen Zhang
- Department of Radiation Oncology, Anyang Tumor Hospital, The Fourth Affiliated Hospital of Henan University of Science and Technology, Anyang, China
- Nan Wang
- Department of Radiation Oncology, The Affiliated Cancer Hospital of Zhengzhou University & Henan Cancer Hospital, Zhengzhou, China
- Chenyu Wang
- Department of Radiation Oncology, Anyang Tumor Hospital, The Fourth Affiliated Hospital of Henan University of Science and Technology, Anyang, China
- Nengchao Wang
- Department of Radiation Oncology, Anyang Tumor Hospital, The Fourth Affiliated Hospital of Henan University of Science and Technology, Anyang, China
- Xinyu Cheng
- Department of Radiation Oncology, Anyang Tumor Hospital, The Fourth Affiliated Hospital of Henan University of Science and Technology, Anyang, China
- Shaobin Wang
- Department of Research and Development, MedMind Technology Co, Ltd., Beijing, China
- Hong Ge
- Department of Radiation Oncology, The Affiliated Cancer Hospital of Zhengzhou University & Henan Cancer Hospital, Zhengzhou, China
- *Correspondence: Hong Ge

22
Khalal DM, Behouch A, Azizi H, Maalej N. Automatic segmentation of thoracic CT images using three deep learning models. Cancer Radiother 2022; 26:1008-1015. [PMID: 35803861 DOI: 10.1016/j.canrad.2022.02.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2021] [Revised: 01/10/2022] [Accepted: 02/09/2022] [Indexed: 11/18/2022]
Abstract
PURPOSE Deep learning (DL) techniques are widely used in medical imaging, in particular for segmentation. Indeed, manual segmentation of organs at risk (OARs) is time-consuming and suffers from inter- and intra-observer variability. Image segmentation using DL has given very promising results. In this work, we present and compare the results of segmentation of OARs and a clinical target volume (CTV) in thoracic CT images using three DL models. MATERIALS AND METHODS We used CT images of 52 patients with breast cancer from a public dataset. Automatic segmentation of the lungs, the heart and a CTV was performed using three models based on the U-Net architecture. Three metrics were used to quantify and compare the segmentation results obtained with these models: the Dice similarity coefficient (DSC), the Jaccard coefficient (J) and the Hausdorff distance (HD). RESULTS The obtained values of DSC, J and HD are presented for each segmented organ and for the three models. Examples of automatic segmentation are presented and compared to the corresponding ground truth delineations. Our values were also compared to recent results obtained by other authors. CONCLUSION The performance of three DL models was evaluated for the delineation of the lungs, the heart and a CTV. This study showed clearly that these 2D models based on the U-Net architecture can be used to delineate organs in CT images with good performance compared to other models. Generally, the three models present similar performance. With a dataset containing more CT images, the three models should give better results.
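The Jaccard coefficient J reported alongside DSC above carries the same information as the Dice coefficient: the two are related by DSC = 2J / (1 + J). A minimal sketch on toy binary masks:

```python
import numpy as np

def jaccard(a: np.ndarray, b: np.ndarray) -> float:
    """Jaccard coefficient |A∩B| / |A∪B| for binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

def dice_from_jaccard(j: float) -> float:
    """Convert a Jaccard coefficient to the equivalent Dice coefficient."""
    return 2 * j / (1 + j)

a = np.zeros((8, 8), dtype=bool); a[1:5, 1:5] = True   # 16 voxels
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True   # 16 voxels; overlap 2x2 = 4
j = jaccard(a, b)                                       # 4 / 28
print(round(j, 3), round(dice_from_jaccard(j), 3))      # → 0.143 0.25
```

Since one value determines the other, reporting both mainly eases comparison with papers that standardize on either metric.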
Affiliation(s)
- D M Khalal
- Department of Physics, Faculty of Sciences, Laboratory of dosing, analysis and characterization in high resolution, Ferhat Abbas Sétif 1 University, El Baz campus, 19137 Sétif, Algeria
- A Behouch
- Department of Physics, Faculty of Sciences, Laboratory of dosing, analysis and characterization in high resolution, Ferhat Abbas Sétif 1 University, El Baz campus, 19137 Sétif, Algeria
- H Azizi
- Department of Physics, Faculty of Sciences, Laboratory of dosing, analysis and characterization in high resolution, Ferhat Abbas Sétif 1 University, El Baz campus, 19137 Sétif, Algeria
- N Maalej
- Department of Physics, Khalifa University, Abu Dhabi, United Arab Emirates

23
Variability of Target Volumes and Organs at Risk Delineation in Breast Cancer Radiation Therapy: Quality Assurance Results of the Pretrial Benchmark Case for the POTENTIAL Trial. Pract Radiat Oncol 2022; 12:397-408. [DOI: 10.1016/j.prro.2021.12.018] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2021] [Revised: 12/23/2021] [Accepted: 12/29/2021] [Indexed: 11/17/2022]
24
Almberg SS, Lervåg C, Frengen J, Eidem M, Abramova T, Nordstrand C, Alsaker M, Tøndel H, Raj SX, Wanderås AD. Training, validation, and clinical implementation of a deep-learning segmentation model for radiotherapy of loco-regional breast cancer. Radiother Oncol 2022; 173:62-68. [DOI: 10.1016/j.radonc.2022.05.018] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2022] [Revised: 05/07/2022] [Accepted: 05/18/2022] [Indexed: 11/29/2022]
25
Qi X, Hu J, Zhang L, Bai S, Yi Z. Automated Segmentation of the Clinical Target Volume in the Planning CT for Breast Cancer Using Deep Neural Networks. IEEE TRANSACTIONS ON CYBERNETICS 2022; 52:3446-3456. [PMID: 32833659 DOI: 10.1109/tcyb.2020.3012186] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
3-D radiotherapy is an effective treatment modality for breast cancer. In 3-D radiotherapy, delineation of the clinical target volume (CTV) is an essential step in the establishment of treatment plans. However, manual delineation is subjective and time-consuming. In this study, we propose an automated segmentation model based on deep neural networks for the breast cancer CTV in planning computed tomography (CT). Our model is composed of three stages that work in a cascaded manner, making it applicable to real-world scenarios. The first stage determines which slices contain CTVs, as not all CT slices include breast lesions. The second stage detects the region of the human body in an entire CT slice, eliminating boundary areas, which may adversely affect segmentation of the CTV. The third stage delineates the CTV. To permit the network to focus on the breast mass in the slice, a novel dynamically strided convolution operation, which shows better performance than standard convolution, is proposed. To train and evaluate the model, a large dataset containing 455 cases and 50,425 CT slices was constructed. The proposed model achieves an average Dice similarity coefficient (DSC) of 0.802 and 0.801 for the right- and left-sided breast, respectively. Our method shows superior performance to that of previous state-of-the-art approaches.
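The three-stage cascade described above can be sketched as plain control flow, with simple thresholds standing in for each of the paper's three networks (slice selection, body-region detection, CTV delineation); the thresholds and toy volume below are illustrative only:

```python
import numpy as np

def contains_ctv(slice_2d: np.ndarray) -> bool:
    """Stage 1 stand-in: decide whether a slice plausibly contains a target."""
    return bool(slice_2d.max() > 0.5)

def crop_body(slice_2d: np.ndarray) -> np.ndarray:
    """Stage 2 stand-in: crop to the bounding box of non-background pixels."""
    rows = np.any(slice_2d > 0.1, axis=1)
    cols = np.any(slice_2d > 0.1, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return slice_2d[r0:r1 + 1, c0:c1 + 1]

def segment_ctv(cropped: np.ndarray) -> np.ndarray:
    """Stage 3 stand-in: threshold in place of the segmentation network."""
    return cropped > 0.5

def cascade(volume: np.ndarray) -> list:
    """Run the three stages in sequence, skipping slices without a target."""
    return [segment_ctv(crop_body(s)) for s in volume if contains_ctv(s)]

volume = np.zeros((4, 16, 16))
volume[1, 4:10, 5:11] = 0.9    # a "lesion" on slice 1
volume[2, 6:12, 6:12] = 0.8    # and another on slice 2
masks = cascade(volume)
print(len(masks), masks[0].shape)  # → 2 (6, 6)
```

The benefit of the cascade is that the expensive final stage only ever sees cropped, target-bearing slices.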
|
26
|
Buelens P, Willems S, Vandewinckele L, Crijns W, Maes F, Weltens CG. Clinical Evaluation of a Deep Learning Model for Segmentation of Target Volumes in Breast Cancer Radiotherapy. Radiother Oncol 2022; 171:84-90. [PMID: 35447286] [DOI: 10.1016/j.radonc.2022.04.015]
Abstract
PURPOSE/OBJECTIVE(S) Precise segmentation of clinical target volumes (CTV) in breast cancer is indispensable for state-of-the-art radiotherapy. Despite international guidelines, significant intra- and interobserver variability exists, negatively impacting treatment outcomes. The aim of this study was to evaluate the performance and efficiency of CTV segmentation in planning CT images of breast cancer patients using a 3D convolutional neural network (CNN) compared to the manual process. MATERIALS/METHODS An expert radiation oncologist (RO) segmented all CTVs separately according to international guidelines in 150 breast cancer patients. These data were used to create, train and validate a 3D CNN. The network's performance was additionally evaluated on a test set of 20 patients. Primary endpoints were quantitative and qualitative analysis of the segmentation data generated by the CNN for each level specifically as well as for the total PTV to be irradiated. The secondary endpoint was the evaluation of time efficiency. RESULTS In the test set, segmentation performance was best for the contralateral breast and the breast CTV and worst for Rotter's space and the internal mammary nodal (IMN) level. Analysis of the impact on the PTV showed non-significant over-segmentation of the primary PTV and significant under-segmentation of the nodal PTV, resulting in slight variations in overlap with OARs. Guideline consistency improved from 77.14% to 90.71% in favor of CNN segmentation, while saving on average 24 minutes per patient against a median of 35 minutes for purely manual segmentation. CONCLUSION 3D CNN-based delineation for breast cancer radiotherapy is feasible and performs well, as scored by quantitative and qualitative metrics.
Affiliation(s)
- P Buelens
- KU Leuven - University of Leuven, Department of Oncology, Experimental Radiation Oncology, B-3000 Leuven, Belgium; University Hospitals Leuven, Department of Radiation Oncology, B-3000 Leuven, Belgium
| | - S Willems
- Medical Imaging Research Center, University Hospitals Leuven, Leuven, Belgium; Department of Electrical Engineering, ESAT/PSI, KU Leuven, Leuven, Belgium
| | - L Vandewinckele
- KU Leuven - University of Leuven, Department of Oncology, Experimental Radiation Oncology, B-3000 Leuven, Belgium; University Hospitals Leuven, Department of Radiation Oncology, B-3000 Leuven, Belgium
| | - W Crijns
- KU Leuven - University of Leuven, Department of Oncology, Experimental Radiation Oncology, B-3000 Leuven, Belgium; University Hospitals Leuven, Department of Radiation Oncology, B-3000 Leuven, Belgium
| | - F Maes
- Medical Imaging Research Center, University Hospitals Leuven, Leuven, Belgium; Department of Electrical Engineering, ESAT/PSI, KU Leuven, Leuven, Belgium
| | - C G Weltens
- KU Leuven - University of Leuven, Department of Oncology, Experimental Radiation Oncology, B-3000 Leuven, Belgium; University Hospitals Leuven, Department of Radiation Oncology, B-3000 Leuven, Belgium.
|
27
|
Lim VT, Gacasan AC, Tuan JKL, Tan TWK, Li Y, Nei WL, Looi WS, Lin X, Tan HQ, Chua ECP, Pang EPP. Evaluation of inter- and intra-observer variations in prostate gland delineation using CT-alone versus CT/TPUS. Rep Pract Oncol Radiother 2022; 27:97-103. [PMID: 35402019] [PMCID: PMC8989460] [DOI: 10.5603/rpor.a2022.0004]
Abstract
Background This study aims to explore the role of four-dimensional (4D) transperineal ultrasound (TPUS) in the contouring of the prostate gland with planning computed tomography (CT) images, in the absence of magnetic resonance imaging (MRI). Materials and methods Five radiation oncologists (ROs) performed two rounds of prostate gland contouring (single-blinded) on CT-alone and CT/TPUS datasets obtained from 10 patients who underwent TPUS-guided external beam radiotherapy. Parameters included prostate volume, DICE similarity coefficient (DSC) and centroid position. The Wilcoxon signed-rank test assessed the significance of inter-modality differences, and the intraclass correlation coefficient (ICC) reflected inter- and intra-observer reliability of the parameters. Results Inter-modality analysis revealed high agreement (based on DSC and centroid position) of prostate gland contours between CT-alone and CT/TPUS. A statistically significant difference was observed in the superior-inferior direction of the prostate centroid position (p = 0.011). All modalities yielded excellent inter-observer reliability of delineated prostate volume with ICC > 0.9, mean DSC > 0.8 and centroid position: CT-alone (ICC = 1.000) and CT/TPUS (ICC = 0.999) left-right (L/R); CT-alone (ICC = 0.999) and CT/TPUS (ICC = 0.998) anterior-posterior (A/P); CT-alone (ICC = 0.999) and CT/TPUS (ICC = 1.000) superior-inferior (S/I). Similarly, all modalities yielded excellent intra-observer reliability of delineated prostate volume, ICC > 0.9 and mean DSC > 0.8. Lastly, intra-observer reliability was excellent on both imaging modalities for the prostate centroid position, ICC > 0.9. Conclusion TPUS does not add significantly to the amount of anatomical information provided by CT images. However, TPUS can supplement planning CT to achieve higher positional accuracy in the S/I direction if access to CT/MRI fusion is limited.
Affiliation(s)
- Valerie Ting Lim
- Health and Social Sciences, Singapore Institute of Technology, Singapore
| | | | - Jeffrey Kit Loong Tuan
- Division of Radiation Oncology, National Cancer Centre Singapore, Singapore; Duke-NUS Graduate Medical School, Singapore
| | - Terence Wee Kiat Tan
- Division of Radiation Oncology, National Cancer Centre Singapore, Singapore; Duke-NUS Graduate Medical School, Singapore
| | - Youquan Li
- Division of Radiation Oncology, National Cancer Centre Singapore, Singapore; Duke-NUS Graduate Medical School, Singapore
| | - Wen Long Nei
- Division of Radiation Oncology, National Cancer Centre Singapore, Singapore; Duke-NUS Graduate Medical School, Singapore
| | - Wen Shen Looi
- Division of Radiation Oncology, National Cancer Centre Singapore, Singapore; Duke-NUS Graduate Medical School, Singapore
| | - Xinying Lin
- Division of Radiation Oncology, National Cancer Centre Singapore, Singapore
| | - Hong Qi Tan
- Division of Radiation Oncology, National Cancer Centre Singapore, Singapore
| | | | - Eric Pei Ping Pang
- Division of Radiation Oncology, National Cancer Centre Singapore, Singapore; Health and Social Sciences, Singapore Institute of Technology, Singapore
|
28
|
Yang G, Dai Z, Zhang Y, Zhu L, Tan J, Chen Z, Zhang B, Cai C, He Q, Li F, Wang X, Yang W. Multiscale Local Enhancement Deep Convolutional Networks for the Automated 3D Segmentation of Gross Tumor Volumes in Nasopharyngeal Carcinoma: A Multi-Institutional Dataset Study. Front Oncol 2022; 12:827991. [PMID: 35387126] [PMCID: PMC8979212] [DOI: 10.3389/fonc.2022.827991]
Abstract
Purpose Accurate segmentation of the gross target volume (GTV) from computed tomography (CT) images is a prerequisite in radiotherapy for nasopharyngeal carcinoma (NPC). However, this task is very challenging due to the low contrast at the tumor boundary and the great variety of tumor sizes and morphologies across stages. Meanwhile, the data source also seriously affects the segmentation results. In this paper, we propose a novel three-dimensional (3D) automatic segmentation algorithm that adopts cascaded multiscale local enhancement of convolutional neural networks (CNNs) and conduct experiments on multi-institutional datasets to address the above problems. Materials and Methods In this study, we retrospectively collected CT images of 257 NPC patients to test the performance of the proposed automatic segmentation model, and conducted experiments on two additional multi-institutional datasets. Our novel segmentation framework consists of three parts. First, the framework is based on a 3D Res-UNet backbone model with excellent segmentation performance. Then, we adopt a multiscale dilated convolution block to enlarge the receptive field and focus on the target area and boundary for segmentation improvement. Finally, a central localization cascade model for local enhancement is designed to concentrate on the GTV region for fine segmentation to improve robustness. The Dice similarity coefficient (DSC), positive predictive value (PPV), sensitivity (SEN), average symmetric surface distance (ASSD) and 95% Hausdorff distance (HD95) are utilized as quantitative evaluation criteria to estimate the performance of our automated segmentation algorithm.
Results The experimental results show that, compared with other state-of-the-art methods, our modified 3D Res-UNet backbone has excellent performance, achieving the best results in terms of the quantitative metrics DSC, PPV, ASSD and HD95, which reached 74.49 ± 7.81%, 79.97 ± 13.90%, 1.49 ± 0.65 mm and 5.06 ± 3.30 mm, respectively. Notably, the receptive-field enhancement mechanism and cascade architecture have a great impact on the stable output of automatic segmentation results with high accuracy, which is critical for an algorithm. The final DSC, SEN, ASSD and HD95 values increased to 76.23 ± 6.45%, 79.14 ± 12.48%, 1.39 ± 5.44 mm and 4.72 ± 3.04 mm. In addition, the outcomes of multi-institution experiments demonstrate that our model is robust and generalizable and can achieve good performance through transfer learning. Conclusions The proposed algorithm can accurately segment NPC in CT images from multi-institutional datasets and thereby may improve and facilitate clinical applications.
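The receptive-field argument in this abstract can be illustrated with simple arithmetic: for stride-1 convolutions, each layer with kernel size k and dilation d adds (k−1)·d input positions to the effective receptive field, so dilated layers widen coverage without extra parameters. A small sketch under assumed, illustrative dilation rates (not the paper's actual configuration):

```python
def receptive_field(layers):
    """Effective receptive field of a stack of convolutions.
    Each layer is a (kernel_size, dilation, stride) tuple."""
    rf, jump = 1, 1
    for k, d, s in layers:
        rf += (k - 1) * d * jump  # a dilated kernel spans (k-1)*d + 1 inputs
        jump *= s                 # stride compounds the spacing of later taps
    return rf

# Plain 3x3 stack vs. a multiscale dilated block (stride 1 throughout)
plain = [(3, 1, 1)] * 3
dilated = [(3, 1, 1), (3, 2, 1), (3, 4, 1)]
print(receptive_field(plain), receptive_field(dilated))  # → 7 15
```

With the same parameter count, the dilated stack covers more than twice the context, which is the intuition behind using such blocks to capture tumor boundaries.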
Affiliation(s)
- Geng Yang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China
- Department of Radiation Therapy, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Zhenhui Dai
- Department of Radiation Therapy, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Yiwen Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China
| | - Lin Zhu
- Department of Radiation Therapy, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Junwen Tan
- Department of Oncology, The Fourth Affiliated Hospital of Guangxi Medical University, Liuzhou, China
| | - Zefeiyun Chen
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China
| | - Bailin Zhang
- Department of Radiation Therapy, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Chunya Cai
- Department of Radiation Therapy, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Qiang He
- Department of Radiation Therapy, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Fei Li
- Department of Radiation Therapy, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Xuetao Wang
- Department of Radiation Therapy, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
- *Correspondence: Wei Yang; Xuetao Wang
| | - Wei Yang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China
- *Correspondence: Wei Yang; Xuetao Wang
|
29
|
Yang B, Chen X, Li J, Zhu J, Men K, Dai J. A feasible method to evaluate deformable image registration with deep learning–based segmentation. Phys Med 2022; 95:50-56. [DOI: 10.1016/j.ejmp.2022.01.006]
|
30
|
Machine Learning in Medical Imaging – Clinical Applications and Challenges in Computer Vision. Artif Intell Med 2022. [DOI: 10.1007/978-981-19-1223-8_4]
|
31
|
Yu X, Jin F, Luo H, Lei Q, Wu Y. Gross Tumor Volume Segmentation for Stage III NSCLC Radiotherapy Using 3D ResSE-Unet. Technol Cancer Res Treat 2022; 21:15330338221090847. [PMID: 35443832] [PMCID: PMC9047806] [DOI: 10.1177/15330338221090847]
Abstract
INTRODUCTION Radiotherapy is one of the most effective ways to treat lung cancer. Accurately delineating the gross target volume is a key step in the radiotherapy process. In current clinical practice, the target area is still delineated manually by radiologists, which is time-consuming and laborious. These problems can be better addressed by deep learning-assisted automatic segmentation methods. METHODS In this paper, a 3D CNN model named 3D ResSE-Unet is proposed for gross tumor volume segmentation in stage III NSCLC radiotherapy. The model is based on 3D Unet and combines residual connections and a channel attention mechanism. Three-dimensional convolution and an encoding-decoding structure are used to mine the three-dimensional spatial information of tumors from computed tomography data. Inspired by ResNet and SE-Net, residual connections and channel attention are used to improve segmentation performance. A total of 214 patients with stage III NSCLC were collected; 148 cases were randomly selected as the training set, 30 as the validation set, and 36 as the testing set. Segmentation performance was evaluated on the testing set. In addition, the segmentation results of 3D Unet at different depths were analyzed, and the performance of 3D ResSE-Unet was compared with 3D Unet, 3D Res-Unet, and 3D SE-Unet. RESULTS Compared with other depths, 3D Unet with four downsampling steps is more suitable for our task. Compared with 3D Unet, 3D Res-Unet, and 3D SE-Unet, 3D ResSE-Unet obtains superior results: its dice similarity coefficient, 95th-percentile Hausdorff distance, and average surface distance reach 0.7367, 21.39 mm and 4.962 mm, respectively, and the average time to segment a patient is only about 10 s. CONCLUSION The method proposed in this study provides a new tool for GTV auto-segmentation and may be useful for lung cancer radiotherapy.
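The SE-Net-style channel attention mentioned here can be summarised as squeeze (global average pooling per channel), excite (a small bottleneck MLP ending in a sigmoid gate), and scale (per-channel reweighting). A NumPy sketch under illustrative shapes and random weights; this is a generic illustration of the mechanism, not the paper's implementation:

```python
import numpy as np

def squeeze_excite(feature_map, w1, w2):
    """Squeeze-and-excitation channel attention (NumPy sketch).
    feature_map: (C, H, W); w1: (C//r, C); w2: (C, C//r)."""
    # Squeeze: global average pool each channel to a scalar descriptor
    z = feature_map.mean(axis=(1, 2))        # (C,)
    # Excite: bottleneck MLP with ReLU, then a sigmoid gate in (0, 1)
    s = np.maximum(w1 @ z, 0.0)              # (C//r,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ s)))   # (C,)
    # Scale: reweight each channel by its gate value
    return feature_map * gate[:, None, None]

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2                      # illustrative sizes, reduction r
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
y = squeeze_excite(x, w1, w2)
print(y.shape)  # (8, 4, 4)
```

Because the gate lies strictly in (0, 1), the block can only attenuate channels, letting the network learn which feature channels to emphasise.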
Affiliation(s)
- Xinhao Yu
- College of Bioengineering, Chongqing University, Chongqing, China; Department of Radiation Oncology, Chongqing University Cancer Hospital, Chongqing, China
| | - Fu Jin
- Department of Radiation Oncology, Chongqing University Cancer Hospital, Chongqing, China
| | - HuanLi Luo
- Department of Radiation Oncology, Chongqing University Cancer Hospital, Chongqing, China
| | - Qianqian Lei
- Department of Radiation Oncology, Chongqing University Cancer Hospital, Chongqing, China
| | - Yongzhong Wu
- Department of Radiation Oncology, Chongqing University Cancer Hospital, Chongqing, China
|
32
|
Dai Z, Zhang Y, Zhu L, Tan J, Yang G, Zhang B, Cai C, Jin H, Meng H, Tan X, Jian W, Yang W, Wang X. Geometric and Dosimetric Evaluation of Deep Learning-Based Automatic Delineation on CBCT-Synthesized CT and Planning CT for Breast Cancer Adaptive Radiotherapy: A Multi-Institutional Study. Front Oncol 2021; 11:725507. [PMID: 34858813] [PMCID: PMC8630628] [DOI: 10.3389/fonc.2021.725507]
Abstract
Purpose We developed a deep learning model to achieve automatic multitarget delineation on planning CT (pCT) and synthetic CT (sCT) images generated from cone-beam CT (CBCT) images. The geometric and dosimetric impact of the model was evaluated for breast cancer adaptive radiation therapy. Methods We retrospectively analyzed 1,127 patients treated with radiotherapy after breast-conserving surgery at two medical institutions. The CBCT images for patient setup, acquired under breath-hold guided by an optical surface monitoring system, were used to generate sCT with a generative adversarial network. Organs at risk (OARs), clinical target volume (CTV), and tumor bed (TB) were delineated automatically with a 3D U-Net model on pCT and sCT images. The geometric accuracy of the model was evaluated with metrics including the Dice similarity coefficient (DSC) and 95% Hausdorff distance (HD95). Dosimetric evaluation was performed by quick dose recalculation on sCT images relying on gamma analysis and dose-volume histogram (DVH) parameters. The relationship between ΔD95, ΔV95 and DSC-CTV was assessed to quantify the clinical impact of geometric changes of the CTV. Results The ranges of DSC and HD95 were 0.73–0.97 and 2.22–9.36 mm for pCT and 0.63–0.95 and 2.30–19.57 mm for sCT from institution A, and 0.70–0.97 and 2.10–11.43 mm for pCT from institution B. The quality of sCT was excellent, with an average mean absolute error (MAE) of 71.58 ± 8.78 HU. The mean gamma pass rate (3%/3 mm criterion) was 91.46 ± 4.63%. A DSC-CTV down to 0.65 accounted for a variation of more than 6% of V95 and 3 Gy of D95, while a DSC-CTV up to 0.80 accounted for a variation of less than 4% of V95 and 2 Gy of D95. The mean ΔD90/ΔD95 of CTV and TB were less than 2 Gy/4 Gy and 4 Gy/5 Gy, respectively, for all patients. The cardiac dose difference was larger in left-sided than in right-sided breast cancer cases. Conclusions Accurate multitarget delineation is achievable on pCT and sCT via deep learning. The results show that dose distribution needs to be considered when evaluating the clinical impact of geometric variations during breast cancer radiotherapy.
Affiliation(s)
- Zhenhui Dai
- Department of Radiation Therapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Yiwen Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
| | - Lin Zhu
- Department of Radiation Therapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Junwen Tan
- Department of Oncology, The Fourth Affiliated Hospital, Guangxi Medical University, Liuzhou, China
| | - Geng Yang
- Department of Radiation Therapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Bailin Zhang
- Department of Radiation Therapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Chunya Cai
- Department of Radiation Therapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Huaizhi Jin
- Department of Radiation Therapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Haoyu Meng
- Department of Radiation Therapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Xiang Tan
- Department of Radiation Therapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Wanwei Jian
- Department of Radiation Therapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
| | - Wei Yang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
| | - Xuetao Wang
- Department of Radiation Therapy, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
|
33
|
Lin H, Xiao H, Dong L, Teo KBK, Zou W, Cai J, Li T. Deep learning for automatic target volume segmentation in radiation therapy: a review. Quant Imaging Med Surg 2021; 11:4847-4858. [PMID: 34888194] [PMCID: PMC8611469] [DOI: 10.21037/qims-21-168]
Abstract
Deep learning, a branch of machine learning, has emerged as a fast-growing trend in medical imaging and has become the state-of-the-art method in various clinical applications such as radiology, histopathology and radiation oncology. Specifically in radiation oncology, deep learning has shown its power in performing automatic segmentation of organs at risk (OARs) in radiation therapy, given its potential to improve the efficiency of OAR contouring and reduce inter- and intra-observer variability. Similar interest has been shown in target volume segmentation, an essential step of radiation therapy treatment planning, where the gross tumor volume is defined and microscopic spread is encompassed. Deep learning-based automatic segmentation has recently been extended to target volumes. In this paper, the authors summarize the major supervised deep learning architectures related to target volume segmentation, review the mechanism of each architecture, survey the use of these models in various imaging domains (including computed tomography with and without contrast, magnetic resonance imaging and positron emission tomography) and multiple clinical sites, and compare the performance of different models using standard geometric evaluation metrics. The paper concludes with a discussion of open challenges and potential paths of future research in target volume automatic segmentation and how it may benefit clinical practice.
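Among the standard geometric evaluation metrics this review surveys, several entries in this list also report a 95th-percentile Hausdorff distance (HD95), which discards the worst 5% of surface distances to reduce sensitivity to single outlier points. A minimal NumPy sketch over two illustrative point sets; the coordinates are invented for the example:

```python
import numpy as np

def hd95(points_a, points_b):
    """95th-percentile symmetric Hausdorff distance between two point sets
    (e.g. surface voxel coordinates of two contours)."""
    a = np.asarray(points_a, dtype=float)
    b = np.asarray(points_b, dtype=float)
    # Pairwise Euclidean distances between all points of A and B
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    d_ab = d.min(axis=1)  # each point in A to its nearest point in B
    d_ba = d.min(axis=0)  # each point in B to its nearest point in A
    return np.percentile(np.concatenate([d_ab, d_ba]), 95)

# Two nearly parallel toy contours; the second has one outlier point
a = [(0, 0), (1, 0), (2, 0), (3, 0)]
b = [(0, 1), (1, 1), (2, 1), (3, 6)]
print(hd95(a, b))
```

Unlike the maximum (classical) Hausdorff distance, which here would be dominated by the outlier at (3, 6), the 95th percentile reflects the typical boundary disagreement.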
Affiliation(s)
- Hui Lin
- Department of Radiation Oncology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiation Oncology, University of California, San Francisco, CA, USA
| | - Haonan Xiao
- Department of Health Technology & Informatics, The Hong Kong Polytechnic University, Hong Kong, China
| | - Lei Dong
- Department of Radiation Oncology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | - Kevin Boon-Keng Teo
- Department of Radiation Oncology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | - Wei Zou
- Department of Radiation Oncology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | - Jing Cai
- Department of Health Technology & Informatics, The Hong Kong Polytechnic University, Hong Kong, China
| | - Taoran Li
- Department of Radiation Oncology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
|
34
|
Chen X, Yang B, Li J, Zhu J, Ma X, Chen D, Hu Z, Men K, Dai J. A deep-learning method for generating synthetic kV-CT and improving tumor segmentation for helical tomotherapy of nasopharyngeal carcinoma. Phys Med Biol 2021; 66. [PMID: 34700300] [DOI: 10.1088/1361-6560/ac3345]
Abstract
Objective: Megavoltage computed tomography (MV-CT) is used for setup verification and adaptive radiotherapy in tomotherapy. However, its low contrast and high noise lead to poor image quality. This study aimed to develop a deep-learning-based method to generate synthetic kilovoltage CT (skV-CT) and then evaluate its ability to improve image quality and tumor segmentation. Approach: The planning kV-CT and MV-CT images of 270 patients with nasopharyngeal carcinoma (NPC) treated on an Accuray TomoHD system were used. An improved cycle-consistent adversarial network, which used residual blocks in its generator, was adopted to learn the mapping between MV-CT and kV-CT and then generate skV-CT from MV-CT. A Catphan 700 phantom and 30 patients with NPC were used to evaluate image quality. The quantitative indices included contrast-to-noise ratio (CNR), uniformity and signal-to-noise ratio (SNR) for the phantom, and the structural similarity index measure (SSIM), mean absolute error (MAE) and peak signal-to-noise ratio (PSNR) for patients. Next, we trained three models for segmentation of the clinical target volume (CTV): MV-CT, skV-CT, and MV-CT combined with skV-CT. Segmentation accuracy was compared using the dice similarity coefficient (DSC) and mean distance agreement (MDA). Main results: Compared with MV-CT, skV-CT showed significant improvement in CNR (184.0%), image uniformity (34.7%) and SNR (199.0%) in the phantom study, and improved SSIM (1.7%), MAE (24.7%) and PSNR (7.5%) in the patient study. For CTV segmentation with only MV-CT, only skV-CT, and MV-CT combined with skV-CT, the DSCs were 0.75 ± 0.04, 0.78 ± 0.04, and 0.79 ± 0.03, respectively, and the MDAs (in mm) were 3.69 ± 0.81, 3.14 ± 0.80, and 2.90 ± 0.62, respectively. Significance: The proposed method improved the image quality of MV-CT and thus tumor segmentation in helical tomotherapy. The method can potentially benefit adaptive radiotherapy.
Affiliation(s)
- Xinyuan Chen
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
| | - Bining Yang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
| | - Jingwen Li
- Cloud Computing and Big Data Research Institute, China Academy of Information and Communications Technology, People's Republic of China
| | - Ji Zhu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
| | - Xiangyu Ma
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
| | - Deqi Chen
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
| | - Zhihui Hu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
| | - Kuo Men
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
| | - Jianrong Dai
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
|
35
|
Liu Z, Liu F, Chen W, Tao Y, Liu X, Zhang F, Shen J, Guan H, Zhen H, Wang S, Chen Q, Chen Y, Hou X. Automatic Segmentation of Clinical Target Volume and Organs-at-Risk for Breast Conservative Radiotherapy Using a Convolutional Neural Network. Cancer Manag Res 2021; 13:8209-8217. [PMID: 34754241] [PMCID: PMC8572021] [DOI: 10.2147/cmar.s330249]
Abstract
Objective Delineation of the clinical target volume (CTV) and organs at risk (OARs) is important for radiotherapy but is time-consuming. We trained and evaluated a U-ResNet model to provide fast and consistent auto-segmentation. Methods We collected CT scans of 160 patients with breast cancer who underwent breast-conserving surgery (BCS) and were treated with radiotherapy. CTV and OARs were delineated manually and used for model training. The dice similarity coefficient (DSC) and 95th-percentile Hausdorff distance (95HD) were used to assess the performance of our model. CTV and OARs were randomly selected as ground truth (GT) masks, and artificial intelligence (AI) masks were generated by the proposed model. Two clinicians scored randomly presented CTV contours and the score differences were compared; the consistency between the two clinicians was also tested. The time cost of auto-delineation was evaluated. Results The mean DSC values of the proposed method were 0.94, 0.95, 0.94, 0.96, 0.96 and 0.93 for breast CTV, contralateral breast, heart, right lung, left lung and spinal cord, respectively. The mean 95HD values were 4.31 mm, 3.59 mm, 4.86 mm, 3.18 mm, 2.79 mm and 4.37 mm for the above structures, respectively. The average CTV scores for AI and GT were 2.89 versus 2.92 when evaluated by oncologist A (P=0.612), and 2.75 versus 2.83 by oncologist B (P=0.213), with no statistically significant differences. The consistency between the two clinicians was poor (kappa=0.282). The time for auto-segmentation of CTV and OARs was 10.03 s. Conclusion Our proposed model (U-ResNet) can improve the efficiency and accuracy of delineation compared with U-Net, performing on par with segmentations generated by oncologists.
Affiliation(s)
- Zhikai Liu
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, People's Republic of China
- Fangjie Liu
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, 510060, People's Republic of China
- Wanqi Chen
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, People's Republic of China
- Yinjie Tao
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, People's Republic of China
- Xia Liu
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, People's Republic of China
- Fuquan Zhang
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, People's Republic of China
- Jing Shen
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, People's Republic of China
- Hui Guan
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, People's Republic of China
- Hongnan Zhen
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, People's Republic of China
- Shaobin Wang
- MedMind Technology Co., Ltd., Beijing, 100055, People's Republic of China
- Qi Chen
- MedMind Technology Co., Ltd., Beijing, 100055, People's Republic of China
- Yu Chen
- MedMind Technology Co., Ltd., Beijing, 100055, People's Republic of China
- Xiaorong Hou
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, People's Republic of China
36
Chen Z, Lin L, Wu C, Li C, Xu R, Sun Y. Artificial intelligence for assisting cancer diagnosis and treatment in the era of precision medicine. Cancer Commun (Lond) 2021; 41:1100-1115. [PMID: 34613667 PMCID: PMC8626610 DOI: 10.1002/cac2.12215] [Citation(s) in RCA: 58] [Impact Index Per Article: 19.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2021] [Revised: 07/10/2021] [Accepted: 09/01/2021] [Indexed: 12/12/2022] Open
Abstract
Over the past decade, artificial intelligence (AI) has contributed substantially to the resolution of various medical problems, including cancer. Deep learning (DL), a subfield of AI, is characterized by its ability to perform automated feature extraction and has great power in the assimilation and evaluation of large amounts of complicated data. On the basis of large quantities of medical data and novel computational technologies, AI, especially DL, has been applied in various aspects of oncology research and has the potential to enhance cancer diagnosis and treatment. These applications include early cancer detection, diagnosis, classification and grading, molecular characterization of tumors, prediction of patient outcomes and treatment responses, personalized treatment, automatic radiotherapy workflows, novel anti-cancer drug discovery, and clinical trials. In this review, we introduce the general principles of AI, summarize major areas of its application for cancer diagnosis and treatment, and discuss its future directions and remaining challenges. As the adoption of AI in clinical use increases, we anticipate the arrival of AI-powered cancer care.
Affiliation(s)
- Zi‐Hang Chen
- Department of Radiation Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat‐sen University Cancer Center, Guangzhou, Guangdong 510060, P. R. China
- Zhongshan School of Medicine, Sun Yat‐sen University, Guangzhou, Guangdong 510080, P. R. China
- Li Lin
- Department of Radiation Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat‐sen University Cancer Center, Guangzhou, Guangdong 510060, P. R. China
- Chen‐Fei Wu
- Department of Radiation Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat‐sen University Cancer Center, Guangzhou, Guangdong 510060, P. R. China
- Chao‐Feng Li
- Artificial Intelligence Laboratory, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat‐sen University Cancer Center, Guangzhou, Guangdong 510060, P. R. China
- Rui‐Hua Xu
- Department of Medical Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat‐sen University Cancer Center, Guangzhou, Guangdong 510060, P. R. China
- Ying Sun
- Department of Radiation Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat‐sen University Cancer Center, Guangzhou, Guangdong 510060, P. R. China
37
Chen M, Wu S, Zhao W, Zhou Y, Zhou Y, Wang G. Application of deep learning to auto-delineation of target volumes and organs at risk in radiotherapy. Cancer Radiother 2021; 26:494-501. [PMID: 34711488 DOI: 10.1016/j.canrad.2021.08.020] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2021] [Revised: 07/30/2021] [Accepted: 08/04/2021] [Indexed: 11/28/2022]
Abstract
Technological advancement has heralded the arrival of precision radiotherapy (RT), increasing the therapeutic ratio and decreasing side effects from treatment. Contouring of target volumes (TV) and organs at risk (OARs) in RT is a complicated process. In recent years, automatic contouring of TV and OARs has developed rapidly owing to advances in deep learning (DL). This technology has the potential to save time and to reduce intra- or inter-observer variability. In this paper, the authors provide an overview of RT, introduce the concept of DL, summarize the data characteristics of the included literature, summarize possible future challenges for DL, and discuss possible research directions.
Affiliation(s)
- M Chen
- Department of Radiation Oncology, First Affiliated Hospital, Bengbu Medical College, Bengbu, Anhui 233004, China
- S Wu
- Department of Radiation Oncology, First Affiliated Hospital, Bengbu Medical College, Bengbu, Anhui 233004, China
- W Zhao
- Bengbu Medical College, Bengbu, Anhui 233030, China
- Y Zhou
- Department of Radiation Oncology, First Affiliated Hospital, Bengbu Medical College, Bengbu, Anhui 233004, China
- Y Zhou
- Department of Radiation Oncology, First Affiliated Hospital, Bengbu Medical College, Bengbu, Anhui 233004, China
- G Wang
- Department of Radiation Oncology, First Affiliated Hospital, Bengbu Medical College, Bengbu, Anhui 233004, China
38
Byun HK, Chang JS, Choi MS, Chun J, Jung J, Jeong C, Kim JS, Chang Y, Chung SY, Lee S, Kim YB. Evaluation of deep learning-based autosegmentation in breast cancer radiotherapy. Radiat Oncol 2021; 16:203. [PMID: 34649569 PMCID: PMC8518257 DOI: 10.1186/s13014-021-01923-1] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2021] [Accepted: 09/27/2021] [Indexed: 12/22/2022] Open
Abstract
Purpose To study the performance of a proposed deep learning-based autocontouring system in delineating organs at risk (OARs) in breast radiotherapy against a group of experts. Methods Eleven experts from two institutions delineated nine OARs in 10 cases of adjuvant radiotherapy after breast-conserving surgery. Autocontours were then provided to the experts for correction. Overall, 110 manual contours, 110 corrected autocontours, and 10 autocontours of each type of OAR were analyzed. The Dice similarity coefficient (DSC) and Hausdorff distance (HD) were used to compare the degree of agreement between the best manual contour (chosen by an independent expert committee) and each autocontour, corrected autocontour, and manual contour. Higher DSCs and lower HDs indicated better geometric overlap. The amount of time saved by the autocontouring system was examined, and user satisfaction was evaluated using a survey. Results Manual contours, corrected autocontours, and autocontours had similar accuracy in terms of average DSC (0.88 vs. 0.90 vs. 0.90). Among all contours, autocontours ranked second in accuracy based on DSCs and first based on HDs. Interphysician variation among the experts was reduced in corrected autocontours compared with manual contours (DSC: 0.89–0.90 vs. 0.87–0.90; HD: 4.3–5.8 mm vs. 5.3–7.6 mm). Among the manual delineations, the breast contours had the largest variation, which improved most markedly with the autocontouring system. The total mean contouring times for nine OARs were 37 min for manual contours and 6 min for corrected autocontours. The survey revealed good user satisfaction. Conclusions The autocontouring system delineated OARs with performance similar to the experts' manual contouring. This system can be valuable in improving the quality of breast radiotherapy and reducing interphysician variability in clinical practice.
Supplementary Information The online version contains supplementary material available at 10.1186/s13014-021-01923-1.
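The Hausdorff distance used in the study above is often reported as its 95th percentile (95HD), which is robust to small outlier protrusions on a contour. A sketch under the assumption that contours are given as point sets (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np
from scipy.spatial.distance import cdist

def percentile_hausdorff(points_a: np.ndarray, points_b: np.ndarray, q: float = 95.0) -> float:
    """Symmetric q-th percentile Hausdorff distance between two point sets.

    Taking the q-th percentile of nearest-neighbour distances instead of the
    maximum (classic HD) keeps a few stray surface points from dominating.
    """
    d = cdist(points_a, points_b)   # all pairwise Euclidean distances
    d_ab = d.min(axis=1)            # each point in A to its nearest point in B
    d_ba = d.min(axis=0)            # each point in B to its nearest point in A
    return max(np.percentile(d_ab, q), np.percentile(d_ba, q))

# Toy example: two 10-point contours, one shifted by 1 unit along x
a = np.array([[x, 0.0] for x in range(10)])
b = a + np.array([1.0, 0.0])
print(percentile_hausdorff(a, b))
```

Here most points coincide after the shift, so the 95th-percentile distance is well below the worst-case (maximum) distance of 1.0.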
Affiliation(s)
- Hwa Kyung Byun
- Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, Seoul, 03722, South Korea
- Jee Suk Chang
- Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, Seoul, 03722, South Korea
- Min Seo Choi
- Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, Seoul, 03722, South Korea
- Jaehee Chun
- Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, Seoul, 03722, South Korea
- Jinhong Jung
- Department of Radiation Oncology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, 05505, South Korea
- Chiyoung Jeong
- Department of Radiation Oncology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, 05505, South Korea
- Jin Sung Kim
- Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, Seoul, 03722, South Korea
- Seung Yeun Chung
- Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, Seoul, 03722, South Korea
- Department of Radiation Oncology, Ajou University School of Medicine, Suwon, South Korea
- Seungryul Lee
- Yonsei University College of Medicine, Seoul, South Korea
- Yong Bae Kim
- Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, Seoul, 03722, South Korea
39
Feng R, Zheng X, Gao T, Chen J, Wang W, Chen DZ, Wu J. Interactive Few-Shot Learning: Limited Supervision, Better Medical Image Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:2575-2588. [PMID: 33606628 DOI: 10.1109/tmi.2021.3060551] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Many known supervised deep learning methods for medical image segmentation suffer from the expensive burden of data annotation for model training. Recently, few-shot segmentation methods were proposed to alleviate this burden, but such methods often show poor adaptability to the target tasks. By prudently introducing interactive learning into the few-shot learning strategy, we develop a novel few-shot segmentation approach called Interactive Few-shot Learning (IFSL), which not only addresses the annotation burden of medical image segmentation models but also tackles common issues of known few-shot segmentation methods. First, we design a new few-shot segmentation structure, called the Medical Prior-based Few-shot Learning Network (MPrNet), which uses only a few annotated samples (e.g., 10 samples) as support images to guide the segmentation of query images without any pre-training. Then, we propose an Interactive Learning-based Test Time Optimization Algorithm (IL-TTOA) to strengthen MPrNet on the fly for the target task in an interactive fashion. To the best of our knowledge, our IFSL approach is the first to allow few-shot segmentation models to be optimized and strengthened on target tasks in an interactive and controllable manner. Experiments on four few-shot segmentation tasks show that our IFSL approach outperforms state-of-the-art methods by more than 20% in the DSC metric. Specifically, the interactive optimization algorithm (IL-TTOA) contributes a further ~10% DSC improvement for the few-shot segmentation models.
40
Park J, Choi B, Ko J, Chun J, Park I, Lee J, Kim J, Kim J, Eom K, Kim JS. Deep-Learning-Based Automatic Segmentation of Head and Neck Organs for Radiation Therapy in Dogs. Front Vet Sci 2021; 8:721612. [PMID: 34552975 PMCID: PMC8450455 DOI: 10.3389/fvets.2021.721612] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2021] [Accepted: 08/09/2021] [Indexed: 11/24/2022] Open
Abstract
Purpose: This study was conducted to develop a deep learning-based automatic segmentation (DLBAS) model of head and neck organs for radiotherapy (RT) in dogs and to evaluate its feasibility for RT planning. Materials and Methods: Fifteen organs at risk (OARs) in the head and neck of dogs were identified for segmentation. Post-contrast computed tomography (CT) was performed in 90 dogs. The training and validation sets comprised 80 CT data sets, including 20 test sets. The accuracy of the segmentation was assessed using both the Dice similarity coefficient (DSC) and the Hausdorff distance (HD), referencing the expert contours as the ground truth. An additional 10 clinical test sets with relatively large displacement or deformation of organs were selected for verification in cancer patients. To evaluate applicability in cancer patients and the impact of expert intervention, three methods were compared: HA, DLBAS, and readjustment of the DLBAS predictions for the clinical test sets (HA_DLBAS). Results: The DLBAS model (in the 20 test sets) showed reliable DSC and HD values and a short contouring time of ~3 s. The average (mean ± standard deviation) DSC (0.83 ± 0.04) and HD (2.71 ± 1.01 mm) values were similar to those of previous human studies. The DLBAS was highly accurate for cases without large displacement of head and neck organs. However, in the 10 clinical test sets, the DLBAS showed lower DSC (0.78 ± 0.11) and higher HD (4.30 ± 3.69 mm) values than in the test sets. The HA_DLBAS results were comparable to those of HA (DSC: 0.85 ± 0.06; HD: 2.74 ± 1.18 mm) and presented better comparison metrics with smaller statistical deviations (DSC: 0.94 ± 0.03; HD: 2.30 ± 0.41 mm). In addition, the contouring time of HA_DLBAS (30 min) was less than that of HA (80 min). Conclusion: The HA_DLBAS method and the proposed DLBAS were highly consistent and robust in their performance. Thus, DLBAS has great potential as a stand-alone or supportive tool for key processes in RT planning.
Affiliation(s)
- Jeongsu Park
- Department of Veterinary Medical Imaging, College of Veterinary Medicine, Konkuk University, Seoul, South Korea
- Byoungsu Choi
- Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
- Jaeeun Ko
- Department of Veterinary Medical Imaging, College of Veterinary Medicine, Konkuk University, Seoul, South Korea
- Jaehee Chun
- Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
- Inkyung Park
- Department of Integrative Medicine, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
- Juyoung Lee
- Department of Integrative Medicine, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
- Jayon Kim
- Department of Veterinary Medical Imaging, College of Veterinary Medicine, Konkuk University, Seoul, South Korea
- Jaehwan Kim
- Department of Veterinary Medical Imaging, College of Veterinary Medicine, Konkuk University, Seoul, South Korea
- Kidong Eom
- Department of Veterinary Medical Imaging, College of Veterinary Medicine, Konkuk University, Seoul, South Korea
- Jin Sung Kim
- Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
41
Tsang YM, Routsis DS. Adapting for Adaptive Radiotherapy (ART): The need to evolve our roles as Therapeutic Radiographers. Radiography (Lond) 2021; 27 Suppl 1:S39-S42. [PMID: 34535353 DOI: 10.1016/j.radi.2021.08.004] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2021] [Revised: 08/16/2021] [Accepted: 08/16/2021] [Indexed: 11/30/2022]
Abstract
OBJECTIVES 4D Adaptive Radiotherapy (4D-ART) has been described as the future baseline standard of care for technical radiotherapy. Its goal is to optimise the radiation dose delivered by 'adapting' to changes 'seen' in each individual patient, throughout each treatment delivery (fraction). The drive for technological developments to achieve this is ongoing. To realise the potential benefits, we should consider other aspects of the processes involved: how do changes in clinical practices and processes affect the role of the Therapeutic Radiographer? The aim is to raise the need to explore questions of Therapeutic Radiographers' roles and responsibilities within 4D-ART. KEY FINDINGS Moving from current predictive strategies (such as plan-of-the-day) to dynamically adapting (real-time/4D-ART) to patient changes requires rapid clinical judgements. The question becomes: who makes these decisions? Currently, Therapeutic Radiographers may be ideally placed for this. Dynamically adaptive radiotherapy requires Radiographers to have clinical decision-making skills and authority within the multi-professional team (MPT). It is not sufficient to train radiographers in 'how' to use 4D-ART techniques and technologies; the ability to make good clinical judgements comes from understanding the principles supporting the concept, that is, understanding the 'why'. CONCLUSION To support future service needs and ongoing developments within ART, Radiographers' roles need to adapt and evolve, as does the way their role is perceived within the MPT. We need to provide Radiographers with the required education, abilities and authority to act. IMPLICATIONS FOR PRACTICE Role revision is required to include greater responsibility for clinical decision making when implementing 4D-ART practices.
Affiliation(s)
- Y M Tsang
- East and North Hertfordshire NHS Trust, Radiotherapy, Mount Vernon Cancer Centre, Northwood, Middlesex, HA6 2RN, UK
- D S Routsis
- Cambridge University Hospitals NHS Foundation Trust, Addenbrooke's Hospital, Radiotherapy Department, Hill's Road, Cambridge, UK
42
Liu Z, Chen W, Guan H, Zhen H, Shen J, Liu X, Liu A, Li R, Geng J, You J, Wang W, Li Z, Zhang Y, Chen Y, Du J, Chen Q, Chen Y, Wang S, Zhang F, Qiu J. An Adversarial Deep-Learning-Based Model for Cervical Cancer CTV Segmentation With Multicenter Blinded Randomized Controlled Validation. Front Oncol 2021; 11:702270. [PMID: 34490103 PMCID: PMC8417437 DOI: 10.3389/fonc.2021.702270] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2021] [Accepted: 07/29/2021] [Indexed: 12/31/2022] Open
Abstract
Purpose To propose a novel deep-learning-based auto-segmentation model for CTV delineation in cervical cancer and to evaluate whether it performs comparably to manual delineation using a three-stage multicenter evaluation framework. Methods An adversarial deep-learning-based auto-segmentation model was trained and configured for cervical cancer CTV contouring using CT data from 237 patients. CT scans of an additional 20 consecutive patients with locally advanced cervical cancer were then collected for a three-stage multicenter randomized controlled evaluation involving nine oncologists from six medical centers. This evaluation system combines objective performance metrics, radiation oncologist assessment, and finally a head-to-head Turing imitation test. Accuracy and effectiveness were evaluated step by step. The intra-observer consistency of each oncologist was also tested. Results In the stage-1 evaluation, the mean DSC and 95HD values of the proposed model were 0.88 and 3.46 mm, respectively. In stage 2, the oncologist grading evaluation showed that the majority of AI contours were comparable to the GT contours. The average CTV scores for AI and GT were 2.68 vs. 2.71 in week 0 (P = .206) and 2.62 vs. 2.63 in week 2 (P = .552), with no statistically significant differences. In stage 3, the Turing imitation test showed that the percentage of AI contours judged to be better than GT contours by ≥5 oncologists was 60.0% in week 0 and 42.5% in week 2. Most oncologists demonstrated good consistency between the two weeks (P > 0.05). Conclusions The tested AI model was demonstrated to be accurate and comparable to manual CTV segmentation in cervical cancer patients when assessed by our three-stage evaluation framework.
Affiliation(s)
- Zhikai Liu
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Wanqi Chen
- Department of Nuclear Medicine, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Hui Guan
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Hongnan Zhen
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Jing Shen
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Xia Liu
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- An Liu
- Department of Radiation Oncology, City of Hope National Medical Center, Duarte, CA, United States
- Richard Li
- Department of Radiation Oncology, City of Hope National Medical Center, Duarte, CA, United States
- Jianhao Geng
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Peking University Cancer Hospital and Institute, Beijing, China
- Jing You
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Peking University Cancer Hospital and Institute, Beijing, China
- Weihu Wang
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Peking University Cancer Hospital and Institute, Beijing, China
- Zhouyu Li
- Department of Radiation Oncology, Affiliated Cancer Hospital & Institute of Guangzhou Medical University, Guangzhou, China
- Yongfeng Zhang
- Department of Radiation Oncology, The Fourth Hospital of Jilin University (FAW General Hospital), Jilin, China
- Yuanyuan Chen
- Oncology Department, Cangzhou Hospital of Integrated Traditional Chinese and Western Medicine, Hebei, China
- Junjie Du
- Department of Radiation Oncology, Yangquan First People's Hospital, Shanxi, China
- Qi Chen
- Research and Development Department, MedMind Technology Co., Ltd., Beijing, China
- Yu Chen
- Research and Development Department, MedMind Technology Co., Ltd., Beijing, China
- Shaobin Wang
- Research and Development Department, MedMind Technology Co., Ltd., Beijing, China
- Fuquan Zhang
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Jie Qiu
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
43
Kazemimoghadam M, Chi W, Rahimi A, Kim N, Alluri P, Nwachukwu C, Lu W, Gu X. Saliency-guided deep learning network for automatic tumor bed volume delineation in post-operative breast irradiation. Phys Med Biol 2021; 66:10.1088/1361-6560/ac176d. [PMID: 34298539 PMCID: PMC8639319 DOI: 10.1088/1361-6560/ac176d] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2021] [Accepted: 07/23/2021] [Indexed: 11/12/2022]
Abstract
Efficient, reliable and reproducible target volume delineation is a key step in the effective planning of breast radiotherapy. However, post-operative breast target delineation is challenging because the contrast between the tumor bed volume (TBV) and normal breast tissue is relatively low in CT images. In this study, we propose to mimic the marker-guidance procedure used in manual target delineation. We developed a saliency-based deep learning segmentation (SDL-Seg) algorithm for accurate TBV segmentation in post-operative breast irradiation. The SDL-Seg algorithm incorporates saliency information, in the form of marker-location cues, into a U-Net model. This design forces the model to encode location-related features, underscoring regions with high saliency and suppressing low-saliency regions. The saliency maps were generated by identifying markers on CT images. Marker locations were then converted to probability maps using a distance transformation coupled with a Gaussian filter. Subsequently, the CT images and the corresponding saliency maps formed a multi-channel input for the SDL-Seg network. Our in-house dataset comprised 145 prone CT images from 29 post-operative breast cancer patients who received a 5-fraction partial breast irradiation (PBI) regimen on GammaPod. The 29 patients were randomly split into training (19), validation (5) and test (5) sets. The performance of the proposed method was compared against a basic U-Net. Our model achieved mean (standard deviation) values of 76.4 (±2.7)%, 6.76 (±1.83) mm, and 1.9 (±0.66) mm for the Dice similarity coefficient, 95th percentile Hausdorff distance, and average symmetric surface distance, respectively, on the test set, with a computation time below 11 seconds per CT volume. SDL-Seg showed superior performance relative to the basic U-Net for all evaluation metrics while preserving low computation cost. The findings demonstrate that SDL-Seg is a promising approach for improving the efficiency and accuracy of on-line treatment planning for PBI, such as GammaPod-based PBI.
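The marker-to-saliency-map step described above (a distance transformation coupled with a Gaussian filter) can be sketched with SciPy. This is an illustrative reconstruction, not the authors' code; the function name, the inverse-distance proximity formula and the parameter values are assumptions:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, gaussian_filter

def marker_saliency_map(shape, marker_voxels, sigma=2.0):
    """Convert marker locations into a smooth saliency map in [0, 1].

    Distance-transform the marker mask, invert distance into a proximity
    score, then smooth with a Gaussian so saliency decays gradually
    around each marker rather than dropping off abruptly.
    """
    markers = np.zeros(shape, dtype=bool)
    for idx in marker_voxels:
        markers[idx] = True
    # Euclidean distance from every voxel to its nearest marker
    dist = distance_transform_edt(~markers)
    prox = 1.0 / (1.0 + dist)          # 1 at markers, falling off with distance
    sal = gaussian_filter(prox, sigma=sigma)
    return sal / sal.max()             # normalise to [0, 1]

# Illustrative 2D slice with two surgical-clip markers
sal = marker_saliency_map((64, 64), [(20, 20), (40, 45)], sigma=3.0)
# sal peaks near the markers and decays smoothly away from them
```

In the paper's pipeline, such a map would be stacked with the CT slice as an extra input channel for the segmentation network.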
Affiliation(s)
- Mahdieh Kazemimoghadam
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Weicheng Chi
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- School of Software Engineering, South China University of Technology, Guangzhou, Guangdong 510006, People's Republic of China
- Asal Rahimi
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Nathan Kim
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Prasanna Alluri
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Chika Nwachukwu
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Xuejun Gu
- Stanford University, Palo Alto, CA, United States of America
44
Samarasinghe G, Jameson M, Vinod S, Field M, Dowling J, Sowmya A, Holloway L. Deep learning for segmentation in radiation therapy planning: a review. J Med Imaging Radiat Oncol 2021; 65:578-595. [PMID: 34313006 DOI: 10.1111/1754-9485.13286] [Citation(s) in RCA: 32] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2021] [Accepted: 06/29/2021] [Indexed: 12/21/2022]
Abstract
Segmentation of organs and structures, as either targets or organs-at-risk, has a significant influence on the success of radiation therapy. Manual segmentation is a tedious and time-consuming task for clinicians, and inter-observer variability can affect the outcomes of radiation therapy. The recent surge of interest in deep neural networks has added many powerful auto-segmentation methods as variations of convolutional neural networks (CNN). This paper presents a descriptive review of the literature on deep learning techniques for segmentation in radiation therapy planning. The most common CNN architecture across the four clinical subsites considered was U-net, with the majority of deep learning segmentation articles focused on head and neck normal tissue structures. The most common data sets were CT images from in-house sources, along with some public data sets. N-fold cross-validation was commonly employed; however, not all work separated training, test and validation data sets. This area of research is expanding rapidly. To facilitate comparisons of proposed methods and benchmarking, consistent use of appropriate metrics and independent validation should be carefully considered.
Affiliation(s)
- Gihan Samarasinghe
- School of Computer Science and Engineering, University of New South Wales, Sydney, New South Wales, Australia; Ingham Institute for Applied Medical Research and South Western Sydney Clinical School, UNSW, Liverpool, New South Wales, Australia
- Michael Jameson
- Genesiscare, Sydney, New South Wales, Australia; St Vincent's Clinical School, University of New South Wales, Sydney, New South Wales, Australia
- Shalini Vinod
- Ingham Institute for Applied Medical Research and South Western Sydney Clinical School, UNSW, Liverpool, New South Wales, Australia; Liverpool Cancer Therapy Centre, Liverpool Hospital, Liverpool, New South Wales, Australia
- Matthew Field
- Ingham Institute for Applied Medical Research and South Western Sydney Clinical School, UNSW, Liverpool, New South Wales, Australia; Liverpool Cancer Therapy Centre, Liverpool Hospital, Liverpool, New South Wales, Australia
- Jason Dowling
- Commonwealth Scientific and Industrial Research Organisation, Australian E-Health Research Centre, Herston, Queensland, Australia
- Arcot Sowmya
- School of Computer Science and Engineering, University of New South Wales, Sydney, New South Wales, Australia
- Lois Holloway
- Ingham Institute for Applied Medical Research and South Western Sydney Clinical School, UNSW, Liverpool, New South Wales, Australia; Liverpool Cancer Therapy Centre, Liverpool Hospital, Liverpool, New South Wales, Australia
45
Shusharina N, Söderberg J, Lidberg D, Niyazi M, Shih HA, Bortfeld T. Accounting for uncertainties in the position of anatomical barriers used to define the clinical target volume. Phys Med Biol 2021; 66. [PMID: 34171846 DOI: 10.1088/1361-6560/ac0ea3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2021] [Accepted: 06/25/2021] [Indexed: 11/11/2022]
Abstract
The definition of the clinical target volume (CTV) is becoming the weakest link in the radiotherapy chain. CTV definition consensus guidelines prescribe a geometric expansion beyond the visible gross tumor volume while avoiding anatomical barriers. In a previous publication we described how to implement these consensus guidelines in a computerized CTV auto-delineation process using deep learning and graph search techniques. In this paper we address the remaining problem of how to deal with uncertainties in the positions of the anatomical barriers. The objective was to develop an algorithm that implements the consensus guidelines while accounting for barrier uncertainties. Our approach is to perform multiple expansions using the fast marching method, with barriers in place or removed at different stages of the expansion. We validate the algorithm in a computational phantom and compare manually generated with automated CTV contours, both taking barrier uncertainties into account.
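The core idea of the expansion step can be illustrated with a toy model. The sketch below is not the authors' implementation: it substitutes a Dijkstra-style geodesic expansion on a coarse 2D grid for the fast marching method, and the grid, seed, barrier, and margin are all illustrative assumptions. Comparing a run with the barrier in place against a run with it removed shows how barrier uncertainty changes the expanded volume.

```python
# Simplified stand-in for fast-marching CTV expansion: grow a region
# from seed cells up to a geodesic distance `margin`, treating barrier
# cells as impassable, using Dijkstra on a 4-connected grid.
import heapq

def geodesic_expand(shape, seeds, barrier, margin):
    """Return the set of cells within geodesic distance `margin` of `seeds`,
    never stepping onto a cell in `barrier`."""
    rows, cols = shape
    dist = {s: 0.0 for s in seeds}
    heap = [(0.0, s) for s in seeds]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in barrier:
                nd = d + 1.0
                if nd <= margin and nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return set(dist)

seeds = {(2, 2)}                              # "gross tumor" seed cell
barrier = {(r, 4) for r in range(5)}          # a wall at column 4
with_barrier = geodesic_expand((5, 8), seeds, barrier, margin=4)
no_barrier = geodesic_expand((5, 8), seeds, set(), margin=4)
print(len(no_barrier) - len(with_barrier))    # cells gained if the barrier is removed
```

Running both variants and combining the resulting regions (e.g. by union, intersection, or staged expansion as in the paper) is one way to encode uncertainty about whether the barrier actually blocks tumor spread.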
Affiliation(s)
- Nadya Shusharina
- Division of Radiation Biophysics, Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, United States of America
- Maximilian Niyazi
- Department of Radiation Oncology, University Hospital, LMU Munich, Munich, Germany; German Cancer Consortium (DKTK), Partner Site Munich, Munich, Germany
- Helen A Shih
- Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, United States of America
- Thomas Bortfeld
- Division of Radiation Biophysics, Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, United States of America
46
Liu X, Li KW, Yang R, Geng LS. Review of Deep Learning Based Automatic Segmentation for Lung Cancer Radiotherapy. Front Oncol 2021; 11:717039. [PMID: 34336704 PMCID: PMC8323481 DOI: 10.3389/fonc.2021.717039] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2021] [Accepted: 06/21/2021] [Indexed: 12/14/2022] Open
Abstract
Lung cancer is the leading cause of cancer-related mortality in both males and females. Radiation therapy (RT) is one of the primary treatment modalities for lung cancer. While delivering the prescribed dose to tumor targets, it is essential to spare the nearby tissues, the so-called organs-at-risk (OARs). Optimal RT planning benefits from accurate segmentation of the gross tumor volume and surrounding OARs. Manual segmentation is a time-consuming and tedious task for radiation oncologists, so it is crucial to develop automatic image segmentation to relieve them of this contouring work. Currently, atlas-based automatic segmentation is commonly used in clinical routine; however, this technique depends heavily on the similarity between the atlas and the image being segmented. With significant advances in computer vision, deep learning, as a part of artificial intelligence, is attracting increasing attention in automatic medical image segmentation. In this article, we reviewed deep learning based automatic segmentation techniques related to lung cancer and compared them with the atlas-based technique. At present, auto-segmentation of OARs with relatively large volumes, such as the lung and heart, outperforms that of small organs such as the esophagus: the average Dice similarity coefficients (DSCs) of the lung, heart and liver are over 0.9, and the best DSC for the spinal cord reaches 0.9, whereas the DSC of the esophagus ranges from 0.71 to 0.87 with inconsistent performance. For the gross tumor volume, the average DSC is below 0.8. Although deep learning based automatic segmentation shows clear advantages over manual segmentation in many respects, various issues remain to be solved. We discussed potential issues in deep learning based automatic segmentation, including low contrast, dataset size, consensus guidelines, and network design, as well as clinical limitations and future research directions.
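The DSC values quoted throughout this entry follow the standard volumetric definition, DSC = 2|A∩B| / (|A|+|B|). A minimal sketch, with flat boolean lists standing in for 3D voxel masks:

```python
# Volumetric Dice similarity coefficient (DSC) for two binary masks,
# the metric used above to benchmark auto-segmentation quality.

def dice(mask_a, mask_b):
    """Return the Dice similarity coefficient of two binary voxel masks."""
    if len(mask_a) != len(mask_b):
        raise ValueError("masks must have the same number of voxels")
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Example: two masks that overlap on 3 of the 4 voxels each labels.
a = [1, 1, 1, 1, 0, 0]
b = [0, 1, 1, 1, 1, 0]
print(dice(a, b))  # 0.75
```

A DSC of 1.0 means perfect overlap and 0.0 means none; the thresholds cited above (e.g. over 0.9 for lung and heart) are on this scale.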
Affiliation(s)
- Xi Liu
- School of Physics, Beihang University, Beijing, China
- Kai-Wen Li
- School of Physics, Beihang University, Beijing, China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Medicine and Engineering, Key Laboratory of Big Data-Based Precision Medicine, Ministry of Industry and Information Technology, Beihang University, Beijing, China
- Ruijie Yang
- Department of Radiation Oncology, Peking University Third Hospital, Beijing, China
- Li-Sheng Geng
- School of Physics, Beihang University, Beijing, China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Medicine and Engineering, Key Laboratory of Big Data-Based Precision Medicine, Ministry of Industry and Information Technology, Beihang University, Beijing, China
- Beijing Key Laboratory of Advanced Nuclear Materials and Physics, Beihang University, Beijing, China
- School of Physics and Microelectronics, Zhengzhou University, Zhengzhou, China
47
Field M, Hardcastle N, Jameson M, Aherne N, Holloway L. Machine learning applications in radiation oncology. Phys Imaging Radiat Oncol 2021; 19:13-24. [PMID: 34307915 PMCID: PMC8295850 DOI: 10.1016/j.phro.2021.05.007] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/06/2020] [Revised: 05/19/2021] [Accepted: 05/22/2021] [Indexed: 12/23/2022]
Abstract
Machine learning technology has a growing impact on radiation oncology, with an increasing presence in research and industry. The prevalence of diverse data, including 3D imaging and 3D radiation dose delivery, presents potential for future automation and scope for treatment improvements for cancer patients. Harnessing this potential requires standardization of tools and data, and focused collaboration between fields of expertise. The rapid advancement of radiation oncology treatment technologies presents opportunities for machine learning integration, with investments targeted towards data quality, data extraction, software, and engagement with clinical expertise. In this review, we provide an overview of machine learning concepts before reviewing advances in applying machine learning to radiation oncology and integrating these techniques into radiation oncology workflows. Several key areas are outlined in the radiation oncology workflow where machine learning has been applied and where it can have a significant impact in terms of efficiency, consistency in treatment, and overall treatment outcomes. This review highlights that machine learning has key early applications in radiation oncology due to the repetitive nature of many tasks that also currently have human review. Standardized management of routinely collected imaging and radiation dose data is also highlighted as enabling engagement in research utilizing machine learning and the ability to integrate these technologies into clinical workflows to benefit patients. Physicists need to be part of the conversation to facilitate this technical integration.
Affiliation(s)
- Matthew Field
- South Western Sydney Clinical School, Faculty of Medicine, University of New South Wales, Sydney, NSW, Australia; Ingham Institute for Applied Medical Research, Sydney, NSW, Australia
- Nicholas Hardcastle
- Physical Sciences, Peter MacCallum Cancer Centre, Melbourne, VIC, Australia; Centre for Medical Radiation Physics, University of Wollongong, Wollongong, NSW, Australia
- Michael Jameson
- GenesisCare, Alexandria, NSW, Australia; St Vincent's Clinical School, Faculty of Medicine, University of New South Wales, Australia
- Noel Aherne
- Mid North Coast Cancer Institute, NSW, Australia; Rural Clinical School, Faculty of Medicine, University of New South Wales, Sydney, NSW, Australia
- Lois Holloway
- South Western Sydney Clinical School, Faculty of Medicine, University of New South Wales, Sydney, NSW, Australia; Ingham Institute for Applied Medical Research, Sydney, NSW, Australia; Cancer Therapy Centre, Liverpool Hospital, Sydney, NSW, Australia; Centre for Medical Radiation Physics, University of Wollongong, Wollongong, NSW, Australia
48
Oya M, Sugimoto S, Sasai K, Yokoyama K. Investigation of clinical target volume segmentation for whole breast irradiation using three-dimensional convolutional neural networks with gradient-weighted class activation mapping. Radiol Phys Technol 2021; 14:238-247. [PMID: 34132994 DOI: 10.1007/s12194-021-00620-8] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2020] [Revised: 05/11/2021] [Accepted: 05/25/2021] [Indexed: 12/22/2022]
Abstract
This study aims to implement three-dimensional convolutional neural networks (3D-CNN) for clinical target volume (CTV) segmentation for whole breast irradiation and investigate the focus of 3D-CNNs during decision-making using gradient-weighted class activation mapping (Grad-CAM). A 3D-UNet CNN was adopted to conduct automatic segmentation of the CTV for breast cancer. The 3D-UNet was trained using three datasets of left-, right-, and both left- and right-sided breast cancer patients. Segmentation accuracy was evaluated using the Dice similarity coefficient (DSC). Grad-CAM was applied to trained CNNs. The DSCs for the datasets of the left-, right-, and both left- and right-sided breasts were on an average 0.88, 0.89, and 0.85, respectively. The Grad-CAM heatmaps showed that the 3D-UNet used for segmentation determined the CTV region from the target-side breast tissue and by referring to the opposite-side breast. Although the size of the dataset was limited, DSC ≥ 0.85 was achieved for the segmentation of breast CTV using the 3D-UNet. Grad-CAM indicates the applicable scope and limitations of using a CNN by indicating the focus of such networks during decision-making.
Affiliation(s)
- Megumi Oya
- Department of Epidemiology and Environmental Health, Juntendo University Graduate School of Medicine, 2-1-1 Hongo, Bunkyo-ku, Tokyo, 113-8421, Japan
- Satoru Sugimoto
- Department of Radiation Oncology, Juntendo University Graduate School of Medicine, 2-1-1 Hongo, Bunkyo-ku, Tokyo, 113-8421, Japan
- Keisuke Sasai
- Department of Radiation Oncology, Juntendo University Graduate School of Medicine, 2-1-1 Hongo, Bunkyo-ku, Tokyo, 113-8421, Japan
- Kazuhito Yokoyama
- Department of Epidemiology and Environmental Health, Juntendo University Graduate School of Medicine, 2-1-1 Hongo, Bunkyo-ku, Tokyo, 113-8421, Japan; Department of Epidemiology and Social Medicine, International University of Health and Welfare Graduate School of Public Health, 4-1-26 Akasaka, Minato-ku, Tokyo, 107-8402, Japan
49
Kiser KJ, Barman A, Stieb S, Fuller CD, Giancardo L. Novel Autosegmentation Spatial Similarity Metrics Capture the Time Required to Correct Segmentations Better Than Traditional Metrics in a Thoracic Cavity Segmentation Workflow. J Digit Imaging 2021; 34:541-553. [PMID: 34027588 PMCID: PMC8329111 DOI: 10.1007/s10278-021-00460-3] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2020] [Revised: 03/28/2021] [Accepted: 05/04/2021] [Indexed: 12/20/2022] Open
Abstract
Automated segmentation templates can save clinicians time compared to de novo segmentation but may still take substantial time to review and correct. It has not been thoroughly investigated which spatial similarity metrics between automated and corrected segmentations best predict clinician correction time. Bilateral thoracic cavity volumes in 329 CT scans were segmented by a UNet-inspired deep learning segmentation tool and subsequently corrected by a fourth-year medical student. Eight spatial similarity metrics were calculated between the automated and corrected segmentations and associated with correction times using Spearman's rank correlation coefficients. Nine clinical variables were also associated with the metrics and correction times using Spearman's rank correlation coefficients or Mann–Whitney U tests. The added path length, false negative path length, and surface Dice similarity coefficient correlated better with correction time than traditional metrics, including the popular volumetric Dice similarity coefficient (ρ = 0.69, ρ = 0.65, and ρ = −0.48, respectively, versus ρ = −0.25; correlation p values < 0.001). Clinical variables poorly represented in the autosegmentation tool's training data were often associated with decreased accuracy but not necessarily with prolonged correction time. Metrics used to develop and evaluate autosegmentation tools should correlate with clinical time saved; to our knowledge, this is only the second investigation of which metrics correlate with time saved. Validation of our findings in other anatomic sites and clinical workflows is indicated. Novel spatial similarity metrics may be preferable to traditional metrics for developing and evaluating autosegmentation tools that are intended to save clinicians time.
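The ρ values above are Spearman rank correlations between each metric and correction time. A pure-Python sketch of that computation (rho as the Pearson correlation of rank vectors, with tied values sharing their average rank); the input values are illustrative, not data from the study:

```python
# Spearman's rank correlation, as used to associate similarity metrics
# with segmentation correction times.

def _ranks(values):
    """1-based average ranks; tied values share the mean of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1  # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the two rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# A metric that grows monotonically with correction time gives rho near +1.
added_path_length = [10, 25, 40, 55, 80]      # hypothetical metric values
correction_minutes = [2, 4, 7, 9, 15]         # hypothetical correction times
print(round(spearman_rho(added_path_length, correction_minutes), 6))  # 1.0
```

Because it is rank based, rho rewards any monotone relationship, which is why a metric like added path length can predict correction time well even if the relationship is not linear.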
Affiliation(s)
- Kendall J. Kiser
- Department of Radiation Oncology, Washington University School of Medicine in St. Louis, St. Louis, MO, USA
- Arko Barman
- Center for Precision Health, UT Health School of Biomedical Informatics, Houston, TX, USA
- Sonja Stieb
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Clifton D. Fuller
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Luca Giancardo
- Center for Precision Health, UT Health School of Biomedical Informatics, Houston, TX, USA
50
Yakar M, Etiz D. Artificial intelligence in radiation oncology. Artif Intell Med Imaging 2021; 2:13-31. [DOI: 10.35711/aimi.v2.i2.13] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/04/2021] [Revised: 03/30/2021] [Accepted: 04/20/2021] [Indexed: 02/06/2023] Open
Abstract
Artificial intelligence (AI) is a branch of computer science that tries to mimic human-like intelligence in machines, using computer software and algorithms to perform specific tasks without direct human input. Machine learning (ML) is a subfield of AI that uses data-driven algorithms which learn to imitate human behavior based on previous examples or experience. Deep learning is an ML technique that uses deep neural networks to create a model. The growth and sharing of data, increasing computing power, and developments in AI have initiated a transformation in healthcare. Advances in radiation oncology have produced a significant amount of data that must be integrated with computed tomography imaging, dosimetry, and imaging performed before each fraction. Each of the many algorithms used in radiation oncology has its own advantages and limitations, with different computational power requirements. The aim of this review is to summarize the radiotherapy (RT) process in workflow order, identifying specific areas in which quality and efficiency can be improved by ML. The RT workflow is divided into seven stages: patient evaluation, simulation, contouring, planning, quality control, treatment delivery, and patient follow-up. A systematic evaluation of the applicability, limitations, and advantages of AI algorithms has been done for each stage.
Affiliation(s)
- Melek Yakar
- Department of Radiation Oncology, Eskisehir Osmangazi University Faculty of Medicine, Eskisehir 26040, Turkey
- Center of Research and Application for Computer Aided Diagnosis and Treatment in Health, Eskisehir Osmangazi University, Eskisehir 26040, Turkey
- Durmus Etiz
- Department of Radiation Oncology, Eskisehir Osmangazi University Faculty of Medicine, Eskisehir 26040, Turkey
- Center of Research and Application for Computer Aided Diagnosis and Treatment in Health, Eskisehir Osmangazi University, Eskisehir 26040, Turkey