1
Erdur AC, Rusche D, Scholz D, Kiechle J, Fischer S, Llorián-Salvador Ó, Buchner JA, Nguyen MQ, Etzel L, Weidner J, Metz MC, Wiestler B, Schnabel J, Rueckert D, Combs SE, Peeken JC. Deep learning for autosegmentation for radiotherapy treatment planning: State-of-the-art and novel perspectives. Strahlenther Onkol 2024. PMID: 39105745. DOI: 10.1007/s00066-024-02262-2.
Abstract
Artificial intelligence (AI) has developed rapidly and gained importance, with many tools already entering our daily lives. The medical field of radiation oncology is also subject to this development, with AI entering all steps of the patient journey. In this review article, we summarize contemporary AI techniques and explore the clinical applications of AI-based automated segmentation models in radiotherapy planning, focusing on delineation of organs at risk (OARs), the gross tumor volume (GTV), and the clinical target volume (CTV). Emphasizing the need for precise and individualized plans, we review various commercial and freeware segmentation tools as well as state-of-the-art approaches. Through our own findings and based on the literature, we demonstrate improved efficiency and consistency as well as time savings in different clinical scenarios. Despite challenges in clinical implementation, such as domain shifts, the potential benefits for personalized treatment planning are substantial. The integration of mathematical tumor growth models and AI-based tumor detection further enhances the possibilities for refining target volumes. As advancements continue, the prospect of one-stop-shop segmentation and radiotherapy planning represents an exciting frontier in radiotherapy, potentially enabling fast treatment with enhanced precision and individualization.
Affiliation(s)
Ayhan Can Erdur
- Institute for Artificial Intelligence and Informatics in Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany.
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany.
Daniel Rusche
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
Daniel Scholz
- Institute for Artificial Intelligence and Informatics in Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Department of Neuroradiology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
Johannes Kiechle
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute for Computational Imaging and AI in Medicine, Technical University of Munich, Lichtenberg Str. 2a, 85748, Garching, Bavaria, Germany
- Munich Center for Machine Learning (MCML), Technical University of Munich, Arcisstraße 21, 80333, Munich, Bavaria, Germany
- Konrad Zuse School of Excellence in Reliable AI (relAI), Technical University of Munich, Walther-von-Dyck-Straße 10, 85748, Garching, Bavaria, Germany
Stefan Fischer
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute for Computational Imaging and AI in Medicine, Technical University of Munich, Lichtenberg Str. 2a, 85748, Garching, Bavaria, Germany
- Munich Center for Machine Learning (MCML), Technical University of Munich, Arcisstraße 21, 80333, Munich, Bavaria, Germany
Óscar Llorián-Salvador
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Department for Bioinformatics and Computational Biology - i12, Technical University of Munich, Boltzmannstraße 3, 85748, Garching, Bavaria, Germany
- Institute of Organismic and Molecular Evolution, Johannes Gutenberg University Mainz (JGU), Hüsch-Weg 15, 55128, Mainz, Rhineland-Palatinate, Germany
Josef A Buchner
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
Mai Q Nguyen
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
Lucas Etzel
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute of Radiation Medicine (IRM), Helmholtz Zentrum, Ingolstädter Landstraße 1, 85764, Oberschleißheim, Bavaria, Germany
Jonas Weidner
- Institute for Artificial Intelligence and Informatics in Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Department of Neuroradiology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
Marie-Christin Metz
- Department of Neuroradiology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
Benedikt Wiestler
- Department of Neuroradiology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
Julia Schnabel
- Institute for Computational Imaging and AI in Medicine, Technical University of Munich, Lichtenberg Str. 2a, 85748, Garching, Bavaria, Germany
- Munich Center for Machine Learning (MCML), Technical University of Munich, Arcisstraße 21, 80333, Munich, Bavaria, Germany
- Konrad Zuse School of Excellence in Reliable AI (relAI), Technical University of Munich, Walther-von-Dyck-Straße 10, 85748, Garching, Bavaria, Germany
- Institute of Machine Learning in Biomedical Imaging, Helmholtz Munich, Ingolstädter Landstraße 1, 85764, Neuherberg, Bavaria, Germany
- School of Biomedical Engineering & Imaging Sciences, King's College London, Strand, WC2R 2LS, London, UK
Daniel Rueckert
- Institute for Artificial Intelligence and Informatics in Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Faculty of Engineering, Department of Computing, Imperial College London, Exhibition Rd, SW7 2BX, London, UK
Stephanie E Combs
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute of Radiation Medicine (IRM), Helmholtz Zentrum, Ingolstädter Landstraße 1, 85764, Oberschleißheim, Bavaria, Germany
- Partner Site Munich, German Consortium for Translational Cancer Research (DKTK), Munich, Bavaria, Germany
Jan C Peeken
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute of Radiation Medicine (IRM), Helmholtz Zentrum, Ingolstädter Landstraße 1, 85764, Oberschleißheim, Bavaria, Germany
- Partner Site Munich, German Consortium for Translational Cancer Research (DKTK), Munich, Bavaria, Germany
2
Yu H, Yang Z, Zhang Z, Wang T, Ran M, Wang Z, Liu L, Liu Y, Zhang Y. Multiple organ segmentation framework for brain metastasis radiotherapy. Comput Biol Med 2024;177:108637. PMID: 38824789. DOI: 10.1016/j.compbiomed.2024.108637.
Abstract
Radiotherapy is a preferred treatment for brain metastases: it kills cancer cells with high doses of radiation but can hardly avoid damaging surrounding healthy cells. Therefore, the delineation of organs at risk (OARs) is vital in treatment planning to minimize radiation-induced toxicity. However, the following aspects make OAR delineation a challenging task: extremely imbalanced organ sizes, ambiguous boundaries, and complex anatomical structures. To alleviate these challenges, we imitate how specialized clinicians delineate OARs and present a novel cascaded multi-OAR segmentation framework, called OAR-SegNet. OAR-SegNet comprises two distinct levels of segmentation networks: an Anatomical-Prior-Guided network (APG-Net) and a Point-Cloud-Guided network (PCG-Net). Specifically, APG-Net handles segmentation for all organs, with multi-view segmentation modules and a deep prior loss designed under the guidance of prior knowledge. After APG-Net, PCG-Net refines small organs through the mini-segmentation and point-cloud alignment heads. The mini-segmentation head is further equipped with the deep prior feature. Extensive experiments were conducted to demonstrate the superior performance of the proposed method compared to other state-of-the-art medical segmentation methods.
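Editor's note: the coarse-to-fine refinement of small organs described here (a second network operating on a region around a coarse prediction) typically starts by cropping a padded bounding box. A generic numpy sketch of that cropping step, not the authors' code; the function name and the margin parameter are illustrative:

```python
import numpy as np


def crop_roi(volume: np.ndarray, coarse_mask: np.ndarray, margin: int = 4):
    """Crop a padded bounding box around a coarse small-organ prediction.

    Returns the cropped sub-volume and the slice tuple used, so a refined
    mask can later be pasted back into the full volume.
    """
    idx = np.argwhere(coarse_mask)                      # voxel coordinates of the mask
    lo = np.maximum(idx.min(axis=0) - margin, 0)        # clamp to volume bounds
    hi = np.minimum(idx.max(axis=0) + 1 + margin, volume.shape)
    sl = tuple(slice(l, h) for l, h in zip(lo, hi))
    return volume[sl], sl
```

The returned slice tuple lets the refinement stage write its output back into the original coordinate frame.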
Affiliation(s)
Hui Yu
- College of Computer Science, Sichuan University, China
Ziyuan Yang
- College of Computer Science, Sichuan University, China
Tao Wang
- College of Computer Science, Sichuan University, China
Maoson Ran
- College of Computer Science, Sichuan University, China
Zhiwen Wang
- College of Computer Science, Sichuan University, China
Lunxin Liu
- Department of Neurosurgery, West China Hospital of Sichuan University, China
Yan Liu
- College of Electrical Engineering, Sichuan University, China.
Yi Zhang
- School of Cyber Science and Engineering, Sichuan University, China
3
Kakkos I, Vagenas TP, Zygogianni A, Matsopoulos GK. Towards Automation in Radiotherapy Planning: A Deep Learning Approach for the Delineation of Parotid Glands in Head and Neck Cancer. Bioengineering (Basel) 2024;11:214. PMID: 38534488. DOI: 10.3390/bioengineering11030214.
Abstract
The delineation of the parotid glands in head and neck (HN) carcinoma is critical for radiotherapy (RT) planning. Accurate segmentation ensures precise target positioning and treatment precision, facilitates monitoring of anatomical changes, enables plan adaptation, and enhances overall patient safety. In this context, artificial intelligence (AI) and deep learning (DL) have proven exceedingly effective in precisely outlining tumor tissues and, by extension, the organs at risk. This paper introduces a DL framework using the AttentionUNet neural network for automatic parotid gland segmentation in HN cancer. Extensive evaluation of the model is performed on two public datasets and one private dataset, and segmentation accuracy is compared with other state-of-the-art DL segmentation schemas. To assess the need for replanning during treatment, an additional registration method is applied to the segmentation output, aligning images of different modalities (computed tomography (CT) and cone-beam CT (CBCT)). AttentionUNet outperforms similar DL methods (Dice similarity coefficient: 82.65% ± 1.03, Hausdorff distance: 6.24 mm ± 2.47), confirming its effectiveness. Moreover, the subsequent registration procedure displays increased similarity, providing insights into the effects of RT procedures on treatment planning adaptations. These results indicate the effectiveness of DL not only for automatic delineation of anatomical structures but also for providing information to support adaptive RT.
Affiliation(s)
Ioannis Kakkos
- Biomedical Engineering Laboratory, National Technical University of Athens, 15773 Athens, Greece
Theodoros P Vagenas
- Biomedical Engineering Laboratory, National Technical University of Athens, 15773 Athens, Greece
Anna Zygogianni
- Radiation Oncology Unit, 1st Department of Radiology, ARETAIEION University Hospital, 11528 Athens, Greece
George K Matsopoulos
- Biomedical Engineering Laboratory, National Technical University of Athens, 15773 Athens, Greece
4
Doolan PJ, Charalambous S, Roussakis Y, Leczynski A, Peratikou M, Benjamin M, Ferentinos K, Strouthos I, Zamboglou C, Karagiannis E. A clinical evaluation of the performance of five commercial artificial intelligence contouring systems for radiotherapy. Front Oncol 2023;13:1213068. PMID: 37601695; PMCID: PMC10436522. DOI: 10.3389/fonc.2023.1213068.
Abstract
Purpose/objectives Auto-segmentation with artificial intelligence (AI) offers an opportunity to reduce inter- and intra-observer variability in contouring, to improve the quality of contours, and to reduce the time taken for this manual task. In this work we benchmark the AI auto-segmentation contours produced by five commercial vendors against a common dataset. Methods and materials The organ-at-risk (OAR) contours generated by five commercial AI auto-segmentation solutions (Mirada (Mir), MVision (MV), Radformation (Rad), RayStation (Ray) and TheraPanacea (Ther)) were compared to manually drawn expert contours from 20 breast, 20 head and neck, 20 lung and 20 prostate patients. Comparisons were made using geometric similarity metrics, including the volumetric and surface Dice similarity coefficients (vDSC and sDSC), Hausdorff distance (HD) and added path length (APL). To assess the time saved, the time taken to manually draw the expert contours, as well as the time to correct the AI contours, were recorded. Results There were differences in the number of CT contours offered by each AI auto-segmentation solution at the time of the study (Mir 99; MV 143; Rad 83; Ray 67; Ther 86), with all offering contours of some lymph node levels as well as OARs. Averaged across all structures, the median vDSCs were good for all systems and compared favorably with the existing literature: Mir 0.82; MV 0.88; Rad 0.86; Ray 0.87; Ther 0.88. All systems offer substantial time savings, ranging between: breast 14-20 mins; head and neck 74-93 mins; lung 20-26 mins; prostate 35-42 mins. The time saved, averaged across all structures, was similar for all systems: Mir 39.8 mins; MV 43.6 mins; Rad 36.6 mins; Ray 43.2 mins; Ther 45.2 mins.
Conclusions All five commercial AI auto-segmentation solutions evaluated in this work offer high quality contours in significantly reduced time compared to manual contouring, and could be used to render the radiotherapy workflow more efficient and standardized.
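Editor's note: the geometric metrics this benchmark relies on (volumetric Dice, Hausdorff distance) are simple to compute from binary masks. A minimal, self-contained numpy sketch on a toy 2D example; function names are illustrative, and production work would use voxel spacing and an optimized implementation:

```python
import numpy as np


def volumetric_dice(a: np.ndarray, b: np.ndarray) -> float:
    """Volumetric Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom


def hausdorff(a_pts: np.ndarray, b_pts: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two (N, d) point sets."""
    d = np.linalg.norm(a_pts[:, None, :] - b_pts[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())


# Toy example: two overlapping 5x5 squares standing in for contours
expert = np.zeros((10, 10), dtype=bool)
expert[2:7, 2:7] = True
auto = np.zeros((10, 10), dtype=bool)
auto[3:8, 3:8] = True

print(volumetric_dice(expert, auto))                      # 0.64
print(hausdorff(np.argwhere(expert), np.argwhere(auto)))  # sqrt(2)
```

The all-pairs distance matrix makes the Hausdorff computation O(N·M); surface-only point sets keep it tractable on real contours.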
Affiliation(s)
Paul J. Doolan
- Department of Medical Physics, German Oncology Center, Limassol, Cyprus
Yiannis Roussakis
- Department of Medical Physics, German Oncology Center, Limassol, Cyprus
Agnes Leczynski
- Department of Radiation Oncology, German Oncology Center, Limassol, Cyprus
Mary Peratikou
- Department of Radiation Oncology, German Oncology Center, Limassol, Cyprus
Melka Benjamin
- Department of Radiation Oncology, German Oncology Center, Limassol, Cyprus
Konstantinos Ferentinos
- Department of Radiation Oncology, German Oncology Center, Limassol, Cyprus
- School of Medicine, European University Cyprus, Nicosia, Cyprus
Iosif Strouthos
- Department of Radiation Oncology, German Oncology Center, Limassol, Cyprus
- School of Medicine, European University Cyprus, Nicosia, Cyprus
Constantinos Zamboglou
- Department of Radiation Oncology, German Oncology Center, Limassol, Cyprus
- School of Medicine, European University Cyprus, Nicosia, Cyprus
- Department of Radiation Oncology, Medical Center – University of Freiburg, Freiburg, Germany
Efstratios Karagiannis
- Department of Radiation Oncology, German Oncology Center, Limassol, Cyprus
- School of Medicine, European University Cyprus, Nicosia, Cyprus
5
Franzese C, Dei D, Lambri N, Teriaca MA, Badalamenti M, Crespi L, Tomatis S, Loiacono D, Mancosu P, Scorsetti M. Enhancing Radiotherapy Workflow for Head and Neck Cancer with Artificial Intelligence: A Systematic Review. J Pers Med 2023;13:946. PMID: 37373935. DOI: 10.3390/jpm13060946.
Abstract
BACKGROUND Head and neck cancer (HNC) is characterized by complex-shaped tumors and numerous organs at risk (OARs), making radiotherapy (RT) planning, optimization, and delivery challenging. In this review, we provide a thorough description of the applications of artificial intelligence (AI) tools in the HNC RT process. METHODS The PubMed database was queried, and a total of 168 articles (2016-2022) were screened by a group of experts in radiation oncology. The group selected 62 articles, which were subdivided into three categories representing the whole RT workflow: (i) target and OAR contouring, (ii) planning, and (iii) delivery. RESULTS The majority of the selected studies focused on the OAR segmentation process. Overall, the performance of AI models was evaluated using standard metrics, while limited research was found on how the introduction of AI could impact clinical outcomes. Additionally, papers usually lacked information about the confidence level associated with the predictions made by the AI models. CONCLUSIONS AI represents a promising tool for automating the RT workflow in the complex field of HNC treatment. To ensure that the development of AI technologies in RT is effectively aligned with clinical needs, we suggest conducting future studies within interdisciplinary groups that include clinicians and computer scientists.
Affiliation(s)
Ciro Franzese
- Department of Biomedical Sciences, Humanitas University, via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
Damiano Dei
- Department of Biomedical Sciences, Humanitas University, via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
Nicola Lambri
- Department of Biomedical Sciences, Humanitas University, via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
Maria Ausilia Teriaca
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
Marco Badalamenti
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
Leonardo Crespi
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, 20133 Milan, Italy
- Centre for Health Data Science, Human Technopole, 20157 Milan, Italy
Stefano Tomatis
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
Daniele Loiacono
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, 20133 Milan, Italy
Pietro Mancosu
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
Marta Scorsetti
- Department of Biomedical Sciences, Humanitas University, via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
- IRCCS Humanitas Research Hospital, Radiotherapy and Radiosurgery Department, via Manzoni 56, Rozzano, 20089 Milan, Italy
6
Qiu Z, Olberg S, den Hertog D, Ajdari A, Bortfeld T, Pursley J. Online adaptive planning methods for intensity-modulated radiotherapy. Phys Med Biol 2023;68. PMID: 37068488; PMCID: PMC10637515. DOI: 10.1088/1361-6560/accdb2.
Abstract
Online adaptive radiation therapy aims at adapting a patient's treatment plan to their current anatomy to account for inter-fraction variations before daily treatment delivery. As this process needs to be accomplished while the patient is immobilized on the treatment couch, it requires time-efficient adaptive planning methods to generate a quality daily treatment plan rapidly. The conventional planning methods do not meet the time requirement of online adaptive radiation therapy because they often involve excessive human intervention, significantly prolonging the planning phase. This article reviews the planning strategies employed by current commercial online adaptive radiation therapy systems, research on online adaptive planning, and artificial intelligence's potential application to online adaptive planning.
Affiliation(s)
Zihang Qiu
- Department of Business Analytics, University of Amsterdam, The Netherlands
- Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, United States of America
Sven Olberg
- Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, United States of America
Dick den Hertog
- Department of Business Analytics, University of Amsterdam, The Netherlands
Ali Ajdari
- Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, United States of America
Thomas Bortfeld
- Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, United States of America
Jennifer Pursley
- Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, United States of America
7
Gao H, Lyu M, Zhao X, Yang F, Bai X. Contour-aware network with class-wise convolutions for 3D abdominal multi-organ segmentation. Med Image Anal 2023;87:102838. PMID: 37196536. DOI: 10.1016/j.media.2023.102838.
Abstract
Accurate delineation of multiple organs is a critical process for various medical procedures, which can be operator-dependent and time-consuming. Existing organ segmentation methods, mainly inspired by natural image analysis techniques, might not fully exploit the traits of the multi-organ segmentation task and cannot accurately segment organs with various shapes and sizes simultaneously. In this work, the characteristics of multi-organ segmentation are considered: the global count, position, and scale of organs are generally predictable, while their local shape and appearance are volatile. Thus, we supplement the region segmentation backbone with a contour localization task to increase the certainty along delicate boundaries. Meanwhile, each organ has exclusive anatomical traits, which motivates us to deal with class variability through class-wise convolutions to highlight organ-specific features and suppress irrelevant responses at different fields of view. To validate our method with adequate amounts of patients and organs, we constructed a multi-center dataset containing 110 3D CT scans with 24,528 axial slices, and provided voxel-level manual segmentations of 14 abdominal organs, adding up to 1,532 3D structures in total. Extensive ablation and visualization studies validate the effectiveness of the proposed method. Quantitative analysis shows that we achieve state-of-the-art performance for most abdominal organs, obtaining a 95% Hausdorff distance of 3.63 mm and a Dice similarity coefficient of 83.32% on average.
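Editor's note: a class-wise convolution can be read as a grouped convolution with one group per class, so each class's feature map gets its own filter and no features are mixed across classes. A minimal numpy sketch (2D, "valid" padding; the function name, 3x3 kernel size, and shapes are illustrative, not the authors' implementation):

```python
import numpy as np


def class_wise_conv(feats: np.ndarray, kernels: np.ndarray) -> np.ndarray:
    """Apply one 3x3 kernel per class channel (groups == number of classes).

    feats:   (C, H, W) per-class feature maps
    kernels: (C, 3, 3) one kernel per class
    Returns (C, H-2, W-2) 'valid' convolution outputs.
    """
    c, h, w = feats.shape
    out = np.empty((c, h - 2, w - 2))
    for ci in range(c):  # each class keeps its own filter; no cross-class mixing
        k = kernels[ci]
        for i in range(h - 2):
            for j in range(w - 2):
                out[ci, i, j] = (feats[ci, i:i + 3, j:j + 3] * k).sum()
    return out
```

In a deep learning framework this corresponds to setting the convolution's `groups` argument equal to the number of classes.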
Affiliation(s)
Hongjian Gao
- Image Processing Center, Beihang University, Beijing 102206, China
Mengyao Lyu
- School of Software, Tsinghua University, Beijing 100084, China; Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China
Xinyue Zhao
- School of Medical Imaging, Xuzhou Medical University, Xuzhou 221004, China
Fan Yang
- Image Processing Center, Beihang University, Beijing 102206, China
Xiangzhi Bai
- Image Processing Center, Beihang University, Beijing 102206, China; State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100191, China; Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China.
8
Systematic Review of Tumor Segmentation Strategies for Bone Metastases. Cancers (Basel) 2023;15:1750. PMID: 36980636; PMCID: PMC10046265. DOI: 10.3390/cancers15061750.
Abstract
Purpose: To investigate segmentation approaches for bone metastases in differentiating benign from malignant bone lesions and characterizing malignant bone lesions. Method: The literature search was conducted in the Scopus, PubMed, IEEE, Medline, and Web of Science electronic databases following the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). A total of 77 original articles, 24 review articles, and 1 comparison paper published between January 2010 and March 2022 were included in the review. Results: Most of the 77 original studies used neural network-based approaches (58.44%) and CT-based imaging (50.65%). However, the review highlights the lack of a gold standard for tumor boundaries and the need for manual correction of the segmentation output, which largely explains the absence of clinical translation studies. Moreover, only 19 studies (24.67%) specifically mentioned the feasibility of their proposed methods for use in clinical practice. Conclusion: The development of tumor segmentation techniques that combine anatomical information and metabolic activities is encouraging, even though no segmentation method is yet optimal for all applications or able to compensate for all the difficulties imposed by data limitations.
9
Peng Y, Liu Y, Shen G, Chen Z, Chen M, Miao J, Zhao C, Deng J, Qi Z, Deng X. Improved accuracy of auto-segmentation of organs at risk in radiotherapy planning for nasopharyngeal carcinoma based on fully convolutional neural network deep learning. Oral Oncol 2023;136:106261. PMID: 36446186. DOI: 10.1016/j.oraloncology.2022.106261.
Abstract
OBJECTIVE We examined a modified encoder-decoder architecture-based fully convolutional neural network, OrganNet, for simultaneous auto-segmentation of 24 organs at risk (OARs) in the head and neck, followed by validation tests and evaluation of clinical application. MATERIALS AND METHODS Computed tomography (CT) images from 310 radiotherapy plans were used as the experimental data set, of which 260 and 50 were used as the training and test sets, respectively. An improved U-Net architecture was established by introducing a batch normalization layer, a residual squeeze-and-excitation layer, and a unique organ-specific loss function for deep learning training. The performance of the trained network model was evaluated by comparing the manual delineations with the STAPLE contour of 10 physicians from different centers. RESULTS Our model achieved good segmentation of all 24 OARs in nasopharyngeal cancer radiotherapy planning CT images, with an average Dice similarity coefficient of 83.75%. Specifically, the mean Dice coefficients for large-volume organs (brainstem, spinal cord, left/right parotid glands, left/right temporal lobes, and left/right mandibles) were 84.97%-95.00%, and for small-volume organs (pituitary, lens, optic nerve, and optic chiasm) were 55.46%-91.56%, respectively. Using the STAPLE contours as the standard, OrganNet achieved comparable or better Dice scores in organ segmentation than manual delineation. CONCLUSION The established OrganNet enables simultaneous automatic segmentation of multiple targets on CT images for head and neck radiotherapy plans and effectively improves the accuracy of U-Net-based segmentation of OARs, especially for small-volume organs.
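Editor's note: the squeeze-and-excitation layer mentioned above recalibrates channels with a learned gate: global average pooling ("squeeze"), a small bottleneck, then a sigmoid that rescales each channel. A minimal numpy sketch of the forward pass (shapes, weight names, and the reduction ratio are illustrative; the paper's residual variant adds a skip connection around this):

```python
import numpy as np


def squeeze_excite(x: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """Squeeze-and-excitation over channel-first features x of shape (C, H, W).

    w1: (C//r, C) squeeze projection; w2: (C, C//r) excitation projection.
    """
    z = x.mean(axis=(1, 2))                    # squeeze: global average pool -> (C,)
    s = np.maximum(w1 @ z, 0.0)                # bottleneck + ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ s)))     # sigmoid channel gate -> (C,)
    return x * gate[:, None, None]             # reweight each channel
```

With all-zero weights the gate is sigmoid(0) = 0.5, i.e. every channel is halved; training learns which channels to amplify or suppress.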
Affiliation(s)
Yinglin Peng
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, China; School of Biomedical Engineering, Sun Yat-sen University, Guangzhou, China
Yimei Liu
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, China
Guanzhu Shen
- Department of Radiation Oncology, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
Zijie Chen
- Shenying Medical Technology (Shenzhen) Co., Ltd., Shenzhen, Guangdong, China
Meining Chen
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, China
Jingjing Miao
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, China
Chong Zhao
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, China
Jincheng Deng
- Shenying Medical Technology (Shenzhen) Co., Ltd., Shenzhen, Guangdong, China
Zhenyu Qi
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, China.
Xiaowu Deng
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, China.
10
Khan N, Peterson AC, Aubert B, Morris A, Atkins PR, Lenz AL, Anderson AE, Elhabian SY. Statistical multi-level shape models for scalable modeling of multi-organ anatomies. Front Bioeng Biotechnol 2023; 11:1089113. [PMID: 36873362 PMCID: PMC9978224 DOI: 10.3389/fbioe.2023.1089113] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2022] [Accepted: 02/06/2023] [Indexed: 02/18/2023] Open
Abstract
Statistical shape modeling is an indispensable tool in the quantitative analysis of anatomies. Particle-based shape modeling (PSM) is a state-of-the-art approach that enables the learning of population-level shape representations from medical imaging data (e.g., CT, MRI) and the associated 3D models of anatomy generated from them. PSM optimizes the placement of a dense set of landmarks (i.e., correspondence points) on a given shape cohort. PSM supports multi-organ modeling as a particular case of the conventional single-organ framework via a global statistical model, where the multi-structure anatomy is treated as a single structure. However, global multi-organ models do not scale to many organs, induce anatomical inconsistencies, and result in entangled shape statistics, where modes of shape variation reflect both within- and between-organ variations. Hence, there is a need for an efficient modeling approach that can capture the inter-organ relations (i.e., pose variations) of complex anatomy while simultaneously optimizing the morphological changes of each organ and capturing population-level statistics. This paper leverages the PSM approach and proposes a new approach for correspondence-point optimization of multiple organs that overcomes these limitations. The central idea of multilevel component analysis is that the shape statistics consist of two mutually orthogonal subspaces: the within-organ subspace and the between-organ subspace. We formulate the correspondence optimization objective using this generative model. We evaluate the proposed method using synthetic shape data and clinical data for articulated joint structures of the spine, foot and ankle, and hip joint.
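The within-/between-organ split can be pictured with a toy decomposition (an illustrative NumPy sketch under assumed array shapes; the paper's actual method optimizes correspondence points, which this does not attempt):

```python
import numpy as np

# Hypothetical cohort: n_subjects × n_organs × n_points × 3 correspondence points.
# Split each shape into a between-organ part (per-organ centroids, carrying
# relative pose) and a within-organ part (centered shapes, carrying morphology).
# By construction the two components are mutually orthogonal.
rng = np.random.default_rng(0)
shapes = rng.normal(size=(20, 2, 64, 3))

centroids = shapes.mean(axis=2, keepdims=True)   # between-organ component
residuals = shapes - centroids                   # within-organ component

# Residuals sum to zero around each centroid, and the two components
# reconstruct the data exactly.
assert np.allclose(residuals.mean(axis=2), 0.0)
assert np.allclose(centroids + residuals, shapes)
```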
Affiliation(s)
- Nawazish Khan
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, UT, United States
- School of Computing, University of Utah, Salt Lake City, UT, United States
- Correspondence: Nawazish Khan
- Andrew C. Peterson
- Department of Orthopaedics, School of Medicine, University of Utah, Salt Lake City, UT, United States
- Alan Morris
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, UT, United States
- Penny R. Atkins
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, UT, United States
- Department of Orthopaedics, School of Medicine, University of Utah, Salt Lake City, UT, United States
- Amy L. Lenz
- Department of Orthopaedics, School of Medicine, University of Utah, Salt Lake City, UT, United States
- Andrew E. Anderson
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, UT, United States
- Department of Orthopaedics, School of Medicine, University of Utah, Salt Lake City, UT, United States
- Shireen Y. Elhabian
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, UT, United States
- School of Computing, University of Utah, Salt Lake City, UT, United States
11
Costea M, Zlate A, Durand M, Baudier T, Grégoire V, Sarrut D, Biston MC. Comparison of atlas-based and deep learning methods for organs at risk delineation on head-and-neck CT images using an automated treatment planning system. Radiother Oncol 2022; 177:61-70. [PMID: 36328093 DOI: 10.1016/j.radonc.2022.10.029] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2022] [Revised: 10/21/2022] [Accepted: 10/23/2022] [Indexed: 11/06/2022]
Abstract
BACKGROUND AND PURPOSE To investigate the performance of head-and-neck (HN) organs-at-risk (OAR) automatic segmentation (AS) using four atlas-based (ABAS) and two deep learning (DL) solutions. MATERIAL AND METHODS All patients underwent iodine contrast-enhanced planning CT. Fourteen OARs were manually delineated. The DL.1 and DL.2 solutions were trained with 63 mono-centric patients and >1000 multi-centric patients, respectively. Ten and 15 patients with varied anatomies were selected for the atlas library and for testing, respectively. The evaluation was based on geometric indices (Dice coefficient and 95th-percentile Hausdorff distance (HD95%)), the time needed for manual corrections, and clinical dosimetric endpoints obtained using automated treatment planning. RESULTS Both Dice and HD95% results indicated that DL algorithms generally performed better than ABAS algorithms for automatic segmentation of HN OARs. However, the hybrid-ABAS (ABAS.3) algorithm sometimes provided higher agreement with the reference contours than the two DL solutions. Compared with DL.2 and ABAS.3, DL.1 contours were the fastest to correct. For the three solutions, the differences in dose distributions obtained using AS contours and AS + manually corrected contours were not statistically significant. High dose differences could be observed when OAR contours were at short distances from the targets, although this was not systematic. CONCLUSION DL methods generally showed higher delineation accuracy than ABAS methods for AS of HN OARs. Most ABAS contours had high conformity to the reference but were more time-consuming than DL algorithms when considering both the computing time and the time spent on manual corrections.
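The HD95% metric used here is the 95th percentile of surface distances, which blunts the sensitivity of the classic Hausdorff distance to single outlier points. A brute-force sketch for small contour point sets (assumed NumPy arrays; not the study's code, and too slow for dense meshes):

```python
import numpy as np

def hd95(points_a: np.ndarray, points_b: np.ndarray) -> float:
    """95th-percentile symmetric Hausdorff distance between two contour point
    sets (N×2 or N×3 arrays), via an all-pairs distance matrix."""
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    a_to_b = d.min(axis=1)  # distance from each point of A to its nearest in B
    b_to_a = d.min(axis=0)  # and vice versa
    return float(np.percentile(np.concatenate([a_to_b, b_to_a]), 95))

# A unit square and the same square translated by 0.1: every point-to-set
# distance is 0.1, so HD95 is 0.1.
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
shifted = square + [0.1, 0.0]
print(hd95(square, shifted))
```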
Affiliation(s)
- Madalina Costea
- Centre Léon Bérard, 28 rue Laennec, 69373 LYON Cedex 08, France; CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, Villeurbanne, France
- Morgane Durand
- Centre Léon Bérard, 28 rue Laennec, 69373 LYON Cedex 08, France
- Thomas Baudier
- Centre Léon Bérard, 28 rue Laennec, 69373 LYON Cedex 08, France; CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, Villeurbanne, France
- David Sarrut
- Centre Léon Bérard, 28 rue Laennec, 69373 LYON Cedex 08, France; CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, Villeurbanne, France
- Marie-Claude Biston
- Centre Léon Bérard, 28 rue Laennec, 69373 LYON Cedex 08, France; CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, Villeurbanne, France.
12
Yan C, Guo B, Tendulkar R, Xia P. Contour similarity and its implication on inverse prostate SBRT treatment planning. J Appl Clin Med Phys 2022; 24:e13809. [PMID: 36300837 PMCID: PMC9924104 DOI: 10.1002/acm2.13809] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2020] [Revised: 08/01/2022] [Accepted: 09/13/2022] [Indexed: 11/11/2022] Open
Abstract
PURPOSE Success of auto-segmentation is measured by the similarity between auto and manual contours, often quantified by the Dice coefficient (DC). The dosimetric impact of contour variability on inverse planning has rarely been reported. The main aim of this study was to investigate whether automatically generated organs-at-risk (OARs) could be used in inverse prostate stereotactic body radiation therapy (SBRT) planning and whether the dosimetric parameters remain clinically acceptable after radiation oncologists modify the OARs. METHODS AND MATERIALS Planning computed tomography images from 10 patients treated with SBRT for prostate cancer were selected and automatically segmented by commercially available atlas-based software. The automatically generated OAR contours were compared with the manually drawn contours. Two volumetric modulated arc therapy (VMAT) plans were generated: autoRec-VMAT (where only automatically generated rectums were used in optimization) and autoAll-VMAT (where all automatically generated OARs were used in inverse optimization). Dosimetric parameters based on the manually drawn PTV and OARs were compared with the clinically approved plans (manu-VMAT). RESULTS The DCs for the rectum contours varied from 0.55 to 0.74 with a mean value of 0.665. Differences in D95 of the PTV between autoRec-VMAT and manu-VMAT plans varied from 0.03% to -2.85% with a mean value of -0.64%. Differences in D0.03cc of the manual rectum between the two plans varied from -0.86% to 9.94% with a mean value of 2.71%. Differences in D95 of the PTV between autoAll-VMAT and manu-VMAT plans varied from 0.28% to -2.9% with a mean value of -0.83%. Differences in D0.03cc of the manual rectum between the two plans varied from -0.76% to 6.72% with a mean value of 2.62%. CONCLUSION Our study implies that it is possible to use unedited automatically generated OARs to perform initial inverse prostate SBRT planning. After radiation oncologists modify/approve the OARs, the plan quality based on the manually drawn OARs remains clinically acceptable, and re-optimization may not be needed.
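Endpoints such as D95 are quantiles of a structure's voxel-dose distribution (D95 is the minimum dose covering the hottest 95% of the volume, i.e. the 5th percentile of voxel doses). A hypothetical sketch; the dose values below are invented for illustration:

```python
import numpy as np

def dose_at_volume(structure_doses: np.ndarray, volume_fraction: float) -> float:
    """D_x: the dose received by at least `volume_fraction` of the structure,
    i.e. the (1 - volume_fraction) quantile of the voxel-dose distribution."""
    return float(np.quantile(structure_doses, 1.0 - volume_fraction))

# Hypothetical PTV voxel doses in Gy (uniform volume per voxel assumed)
ptv = np.array([36.0, 36.5, 37.0, 37.5, 38.0, 38.5, 39.0, 39.5, 40.0, 40.5])
d95 = dose_at_volume(ptv, 0.95)  # dose covering 95% of the voxels
print(d95)
```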
Affiliation(s)
- Chenyu Yan
- Department of Radiation Oncology, Cleveland Clinic Foundation, Cleveland, Ohio, USA
- Bingqi Guo
- Department of Radiation Oncology, Cleveland Clinic Foundation, Cleveland, Ohio, USA
- Rahul Tendulkar
- Department of Radiation Oncology, Cleveland Clinic Foundation, Cleveland, Ohio, USA
- Ping Xia
- Department of Radiation Oncology, Cleveland Clinic Foundation, Cleveland, Ohio, USA
13
Watkins WT, Qing K, Han C, Hui S, Liu A. Auto-segmentation for total marrow irradiation. Front Oncol 2022; 12:970425. [PMID: 36110933 PMCID: PMC9468379 DOI: 10.3389/fonc.2022.970425] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2022] [Accepted: 07/21/2022] [Indexed: 11/13/2022] Open
Abstract
Purpose To evaluate the accuracy and efficiency of artificial intelligence (AI) segmentation in total marrow irradiation (TMI), including contours throughout the head and neck (H&N), thorax, abdomen, and pelvis. Methods An AI segmentation software was clinically introduced for total body contouring in TMI, covering 27 organs at risk (OARs) and 4 planning target volumes (PTVs). This work compares the clinically utilized contours to the AI-TMI contours for 21 patients. Structure and image DICOM data were used to generate comparisons, including volumetric, spatial, and dosimetric variations between the AI and human-edited contour sets. Conventional volume and surface measures, including the Sørensen-Dice coefficient (Dice) and the 95th-percentile Hausdorff distance (HD95), were used, and novel efficiency metrics were introduced. The clinical efficiency gain was estimated as the percentage of the AI contour surface within 1 mm of the clinical contour surface: an unedited AI contour has an efficiency gain of 100%, while an AI contour with 70% of its surface within 1 mm of a clinical contour has an efficiency gain of 70%. Dosimetric deviations were estimated from the clinical dose distribution by computing the dose-volume histogram (DVH) for all structures. Results A total of 467 contours were compared in the 21 patients. In PTVs, contour surfaces deviated by >1 mm in 38.6% ± 23.1% of structures, an average efficiency gain of 61.4%. Deviations >5 mm were detected in 12.0% ± 21.3% of the PTV contours. In OARs, deviations >1 mm were detected in 24.4% ± 27.1% of the structure surfaces and >5 mm in 7.2% ± 18.0%, an average clinical efficiency gain of 75.6%. In H&N OARs, efficiency gains ranged from 42% in the optic chiasm to 100% in the eyes (unedited in all cases). In the thorax, average efficiency gains were >80% in the spinal cord, heart, and both lungs. Efficiency gains ranged from 60-70% in the spleen, stomach, rectum, and bowel, and 75-84% in the liver, kidney, and bladder. DVH differences exceeded 0.05 at some dose level in 109/467 curves. The most common 5%-DVH variations were in the esophagus (86%), rectum (48%), and PTVs (22%). Conclusions AI auto-segmentation software offers a powerful solution for enhanced efficiency in TMI treatment planning. Whole-body segmentation including PTVs and normal organs was successful based on spatial and dosimetric comparison.
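The efficiency-gain metric defined above can be approximated from contour surface points. A sketch under the assumption that nearest-point distance stands in for true point-to-surface distance (not the study's implementation):

```python
import numpy as np

def efficiency_gain(ai_surface: np.ndarray, clinical_surface: np.ndarray,
                    tol_mm: float = 1.0) -> float:
    """Percentage of AI surface points lying within `tol_mm` of the clinical
    surface (100% would correspond to an effectively unedited AI contour).
    Nearest-point distance is used as a surface-distance approximation."""
    d = np.linalg.norm(ai_surface[:, None, :] - clinical_surface[None, :, :], axis=-1)
    nearest = d.min(axis=1)  # AI point -> nearest clinical point, in mm
    return 100.0 * np.mean(nearest <= tol_mm)

# Hypothetical 2D surface samples, in mm: 3 of 4 AI points are within 1 mm
ai = np.array([[0.0, 0.0], [10.0, 0.0], [20.0, 0.0], [30.0, 5.0]])
clinical = np.array([[0.0, 0.5], [10.0, 0.5], [20.0, 0.5], [30.0, 0.5]])
print(efficiency_gain(ai, clinical))  # 75.0
```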
Affiliation(s)
- William Tyler Watkins
- Department of Radiation Oncology, City of Hope National Medical Center, Duarte, CA, United States
14
Lowther N, Louwe R, Yuen J, Hardcastle N, Yeo A, Jameson M. MIRSIG position paper: the use of image registration and fusion algorithms in radiotherapy. Phys Eng Sci Med 2022; 45:421-428. [PMID: 35522369 PMCID: PMC9239966 DOI: 10.1007/s13246-022-01125-3] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/28/2022] [Indexed: 12/12/2022]
Abstract
The report of the American Association of Physicists in Medicine (AAPM) Task Group No. 132 published in 2017 reviewed rigid image registration and deformable image registration (DIR) approaches and solutions to provide recommendations for quality assurance and quality control of clinical image registration and fusion techniques in radiotherapy. However, that report did not include the use of DIR for advanced applications such as dose warping or warping of other matrices of interest. Considering that DIR warping tools are now readily available, discussions were hosted by the Medical Image Registration Special Interest Group (MIRSIG) of the Australasian College of Physical Scientists & Engineers in Medicine in 2018 to form a consensus on best practice guidelines. This position statement authored by MIRSIG endorses the recommendations of the report of AAPM task group 132 and expands on the best practice advice from the 'Deforming to Best Practice' MIRSIG publication to provide guidelines on the use of DIR for advanced applications.
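Dose warping, the advanced DIR application this position paper addresses, amounts to resampling a dose grid through a deformation vector field. A minimal sketch assuming SciPy's `map_coordinates` and a pull-back DVF given in voxel units (illustrative only; clinical dose warping also needs careful validation, as the guidelines stress):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_dose(dose: np.ndarray, dvf: np.ndarray) -> np.ndarray:
    """Warp a dose grid with a deformation vector field (DVF).
    `dvf` has shape (3, *dose.shape) and gives, for each voxel of the target
    grid, the displacement (in voxels) pointing into the source dose grid."""
    grid = np.indices(dose.shape).astype(float)  # target voxel coordinates
    sample_at = grid + dvf                       # pull-back sampling locations
    return map_coordinates(dose, sample_at, order=1, mode="nearest")

dose = np.zeros((4, 4, 4))
dose[2, 2, 2] = 10.0                 # a single hot voxel
shift = np.ones((3, 4, 4, 4))        # uniform +1 voxel pull-back
warped = warp_dose(dose, shift)
print(warped[1, 1, 1])               # the hot spot now appears at (1, 1, 1)
```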
Affiliation(s)
- Nicholas Lowther
- Department of Radiation Oncology, Wellington Blood and Cancer Centre, Wellington, New Zealand
- Rob Louwe
- Holland Proton Therapy Centre, Delft, Netherlands
- Johnson Yuen
- St George Hospital Cancer Care Centre, Kogarah, New South Wales, 2217, Australia
- South Western Clinical School, University of New South Wales, Sydney, Australia
- Ingham Institute for Applied Medical Research, Sydney, NSW, Australia
- Nicholas Hardcastle
- Physical Sciences, Peter MacCallum Cancer Centre, Melbourne, VIC, Australia
- Centre for Medical Radiation Physics, University of Wollongong, Wollongong, NSW, Australia
- The Sir Peter MacCallum Department of Oncology, The University of Melbourne, Melbourne, VIC, Australia
- Adam Yeo
- Physical Sciences, Peter MacCallum Cancer Centre, Melbourne, VIC, Australia
- School of Applied Sciences, RMIT University, Melbourne, VIC, Australia
- Michael Jameson
- GenesisCare, Sydney, NSW, 2015, Australia.
- St Vincent's Clinical School, University of New South Wales, Sydney, Australia.
15
Rao D, Prakashini K, Singh R, Vijayananda J. Automated segmentation of the larynx on computed tomography images: a review. Biomed Eng Lett 2022; 12:175-183. [PMID: 35529346 PMCID: PMC9046475 DOI: 10.1007/s13534-022-00221-3] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2021] [Revised: 01/29/2022] [Accepted: 02/15/2022] [Indexed: 11/03/2022] Open
Abstract
The larynx, or voice box, is a common site of head and neck cancers, yet automated segmentation of the larynx has received very little attention. Segmentation of organs is an essential step in cancer treatment planning. Computed tomography (CT) scans are routinely used to assess the extent of tumor spread in the head and neck, as they are fast to acquire and tolerant of some movement. This paper reviews automated detection and segmentation methods applied to the larynx on CT images. Image registration and deep learning approaches to segmenting the laryngeal anatomy are compared, highlighting their strengths and shortcomings. A list of available annotated laryngeal CT datasets is compiled to encourage further research, and commercial software currently available for larynx contouring is briefly reviewed. We conclude that the lack of standardisation of larynx boundaries and the complexity of this relatively small structure make automated segmentation of the larynx on CT images a challenge. Reliable computer-aided intervention in the contouring and segmentation process will help clinicians easily verify their findings and look for oversights in diagnosis. This review is useful for research applying artificial intelligence to head and neck cancer, specifically work dealing with the segmentation of laryngeal anatomy. Supplementary Information The online version contains supplementary material available at 10.1007/s13534-022-00221-3.
Affiliation(s)
- Divya Rao
- Department of Information and Communication Technology, Manipal Institute of Technology, Manipal Academy of Higher Education, 576104 Manipal, India
- Department of Otorhinolaryngology, Kasturba Medical College, Manipal Academy of Higher Education, 576104 Manipal, India
- Prakashini K
- Department of Radiodiagnosis and Imaging, Kasturba Medical College, Manipal Academy of Higher Education, 576104 Manipal, India
- Rohit Singh
- Department of Otorhinolaryngology, Kasturba Medical College, Manipal Academy of Higher Education, 576104 Manipal, India
- Vijayananda J
- Data Science and Artificial Intelligence, Philips, Bangalore 560045, India
16
Huang B, Ye Y, Xu Z, Cai Z, He Y, Zhong Z, Liu L, Chen X, Chen H, Huang B. 3D Lightweight Network for Simultaneous Registration and Segmentation of Organs-at-Risk in CT Images of Head and Neck Cancer. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:951-964. [PMID: 34784272 DOI: 10.1109/tmi.2021.3128408] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Image-guided radiation therapy (IGRT) is the most effective treatment for head and neck cancer. The successful implementation of IGRT requires accurate delineation of organs at risk (OARs) in computed tomography (CT) images. In routine clinical practice, OARs are manually segmented by oncologists, which is time-consuming, laborious, and subjective. To assist oncologists in OAR contouring, we proposed a three-dimensional (3D) lightweight framework for simultaneous OAR registration and segmentation. The registration network was designed to align a selected OAR template to a new image volume for OAR localization. A region-of-interest (ROI) selection layer then generated ROIs of OARs from the registration results, which were fed into a multiview segmentation network for accurate OAR segmentation. To improve the performance of the registration and segmentation networks, a centre-distance loss was designed for the registration network, an ROI classification branch was employed for the segmentation network, and context information was incorporated to iteratively promote the performance of both networks. The segmentation results were further refined with shape information for final delineation. We evaluated the registration and segmentation performance of the proposed framework using three datasets. On the internal dataset, the Dice similarity coefficients (DSC) of registration and segmentation were 69.7% and 79.6%, respectively. In addition, our framework was evaluated on two external datasets and achieved satisfactory performance. These results showed that the 3D lightweight framework achieves fast, accurate, and robust registration and segmentation of OARs in head and neck cancer. The proposed framework has the potential to assist oncologists in OAR delineation.
17
Kawahara D, Tsuneda M, Ozawa S, Okamoto H, Nakamura M, Nishio T, Saito A, Nagata Y. Stepwise deep neural network (stepwise-net) for head and neck auto-segmentation on CT images. Comput Biol Med 2022; 143:105295. [PMID: 35168082 DOI: 10.1016/j.compbiomed.2022.105295] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2021] [Revised: 01/08/2022] [Accepted: 02/02/2022] [Indexed: 11/20/2022]
Abstract
OBJECTIVE The current study proposes an auto-segmentation model for head and neck cancer on CT images using a stepwise deep neural network (stepwise-net). MATERIAL AND METHODS Six normal tissue structures in the head and neck region of 3D CT images were segmented with deep learning: the brainstem, optic nerve, parotid glands (left and right), and submandibular glands (left and right). In addition to a conventional convolutional neural network (CNN) based on U-Net, a stepwise network built on a 3D fully convolutional network (FCN) was developed. The stepwise network comprises two sub-networks: the first identifies the target region for segmentation on low-resolution images; the target region is then cropped and used as the input to the second network, which predicts the segmentation. Both were compared with a clinically used atlas-based segmentation. RESULTS The DSCs and JSCs of the stepwise-net were significantly higher, and the Hausdorff distances (HD) significantly smaller, than those of the atlas-based method for all organ-at-risk structures. Compared with the conventional U-Net, the stepwise-net achieved higher DSC and JSC and smaller HD. CONCLUSIONS The stepwise network is superior to conventional U-Net-based and atlas-based segmentation and is a potentially valuable method for improving the efficiency of head and neck radiotherapy treatment planning.
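The stepwise idea of localizing first and then segmenting a cropped region can be sketched as the cropping step alone (illustrative NumPy code; in the paper the localization mask would come from the first, low-resolution network):

```python
import numpy as np

def crop_roi(image: np.ndarray, coarse_mask: np.ndarray, margin: int = 8):
    """Stage-2 input of a stepwise pipeline: crop the full-resolution image to
    the bounding box of a coarse (stage-1) localization mask plus a safety
    margin. Returns the crop and the slices to paste a prediction back."""
    idx = np.argwhere(coarse_mask)
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + 1 + margin, image.shape)
    slices = tuple(slice(a, b) for a, b in zip(lo, hi))
    return image[slices], slices

image = np.random.rand(64, 64, 64)
coarse = np.zeros((64, 64, 64), dtype=bool)
coarse[30:34, 28:35, 31:33] = True            # hypothetical stage-1 localization
roi, slices = crop_roi(image, coarse, margin=4)
print(roi.shape)  # tight box (4, 7, 2) plus a 4-voxel margin per side
```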
Affiliation(s)
- Daisuke Kawahara
- Department of Radiation Oncology, Graduate School of Biomedical Health Sciences, Hiroshima University, Hiroshima, 734-8551, Japan.
- Masato Tsuneda
- Department of Radiation Oncology, MR Linac ART Division, Graduate School of Medicine, Chiba University, Chiba, 260-8670, Japan
- Shuichi Ozawa
- Hiroshima High-Precision Radiotherapy Cancer Center, Hiroshima, 732-0057, Japan
- Hiroyuki Okamoto
- Department of Medical Physics, National Cancer Center Hospital, Tokyo, 104-0045, Japan
- Mitsuhiro Nakamura
- Division of Medical Physics, Department of Information Technology and Medical Engineering, Human Health Sciences, Graduate School of Medicine, Kyoto University, Kyoto, 606-8507, Japan
- Teiji Nishio
- Medical Physics Laboratory, Division of Health Science, Graduate School of Medicine, Osaka University, Osaka, 565-0871, Japan
- Akito Saito
- Department of Radiation Oncology, Graduate School of Biomedical Health Sciences, Hiroshima University, Hiroshima, 734-8551, Japan
- Yasushi Nagata
- Department of Radiation Oncology, Graduate School of Biomedical Health Sciences, Hiroshima University, Hiroshima, 734-8551, Japan; Hiroshima High-Precision Radiotherapy Cancer Center, Hiroshima, 732-0057, Japan
18
Yang Y, Huang R, Lv G, Hu Z, Shan G, Zhang J, Bai X, Liu P, Li H, Chen M. Automatic segmentation of the clinical target volume and organs at risk for rectal cancer radiotherapy using structure-contextual representations based on 3D high-resolution network. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103362] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
19
Deep Learning-Based Automatic Segmentation of Mandible and Maxilla in Multi-Center CT Images. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12031358] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
Abstract
Sophisticated segmentation of the craniomaxillofacial bones (the mandible and maxilla) in computed tomography (CT) is essential for diagnosis and treatment planning in craniomaxillofacial surgery. Conventional manual segmentation is time-consuming and challenging due to intrinsic properties of craniomaxillofacial bones and head CT, such as variance in anatomical structures, low soft-tissue contrast, and artifacts caused by metal implants. Moreover, data-driven segmentation methods, including deep learning, require large consistent datasets, which creates a bottleneck for clinical application. In this study, we propose a deep learning approach for the automatic segmentation of the mandible and maxilla in CT images with enhanced compatibility across multi-center datasets. Four multi-center datasets acquired under various conditions were used to create a scenario in which the model is trained with one dataset and evaluated with the others. For the neural network, we added a hierarchical, parallel and multi-scale residual block to the U-Net (HPMR-U-Net). To evaluate performance, segmentation with the in-house dataset and with the external multi-center datasets was conducted in comparison to three other neural networks: U-Net, Res-U-Net and mU-Net. The results suggest that the segmentation performance of HPMR-U-Net is comparable to that of the other models, with superior data compatibility.
20
Iyer A, Thor M, Onochie I, Hesse J, Zakeri K, LoCastro E, Jiang J, Veeraraghavan H, Elguindi S, Lee NY, Deasy JO, Apte AP. Prospectively-validated deep learning model for segmenting swallowing and chewing structures in CT. Phys Med Biol 2022; 67:10.1088/1361-6560/ac4000. [PMID: 34874302 PMCID: PMC8911366 DOI: 10.1088/1361-6560/ac4000] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2021] [Accepted: 12/03/2021] [Indexed: 01/19/2023]
Abstract
Objective. Delineating swallowing and chewing structures aids in radiotherapy (RT) treatment planning to limit dysphagia, trismus, and speech dysfunction. We aim to develop an accurate and efficient method to automate this process. Approach. CT scans of 242 head and neck (H&N) cancer patients acquired from 2004 to 2009 at our institution were used to develop auto-segmentation models for the masseters, medial pterygoids, larynx, and pharyngeal constrictor muscle using DeepLabV3+. A cascaded framework was used, wherein models were trained sequentially to spatially constrain each structure group based on prior segmentations. Additionally, an ensemble of models combining contextual information from axial, coronal, and sagittal views was used to improve segmentation accuracy. Prospective evaluation was conducted by measuring the amount of manual editing required in 91 H&N CT scans acquired February-May 2021. Main results. Medians and inter-quartile ranges of Dice similarity coefficients (DSC) computed on the retrospective testing set (N = 24) were 0.87 (0.85-0.89) for the masseters, 0.80 (0.79-0.81) for the medial pterygoids, 0.81 (0.79-0.84) for the larynx, and 0.69 (0.67-0.71) for the constrictor. Auto-segmentations, when compared to two sets of manual segmentations in 10 randomly selected scans, showed better agreement (DSC) with each observer than the inter-observer DSC. Prospective analysis showed that most manual modifications needed for clinical use were minor, suggesting auto-contouring could increase clinical efficiency. Trained segmentation models are available for research use upon request via https://github.com/cerr/CERR/wiki/Auto-Segmentation-models. Significance. We developed deep learning-based auto-segmentation models for swallowing and chewing structures in CT and demonstrated their potential for use in treatment planning to limit complications post-RT. To the best of our knowledge, this is the only prospectively validated deep learning-based model for segmenting chewing and swallowing structures in CT. The segmentation models have been made open-source to facilitate reproducibility and multi-institutional research.
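The axial/coronal/sagittal ensemble described above can be read as simple per-voxel probability fusion (one plausible interpretation for illustration, not the authors' exact scheme):

```python
import numpy as np

def ensemble_views(axial: np.ndarray, coronal: np.ndarray,
                   sagittal: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Fuse per-voxel foreground probabilities predicted by three 2D models
    run along the axial, coronal, and sagittal slice directions, by averaging
    and thresholding. Returns a boolean segmentation mask."""
    mean_prob = (axial + coronal + sagittal) / 3.0
    return mean_prob >= threshold

# Hypothetical 2×2 probability maps from the three view-specific models
ax = np.array([[0.9, 0.2], [0.7, 0.1]])
co = np.array([[0.8, 0.3], [0.4, 0.2]])
sa = np.array([[0.7, 0.1], [0.4, 0.3]])
print(ensemble_views(ax, co, sa).astype(int))  # [[1 0] [1 0]]
```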
Affiliation(s)
- Aditi Iyer
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, United States of America
- Maria Thor
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, United States of America
- Ifeanyirochukwu Onochie
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, United States of America
- Jennifer Hesse
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, United States of America
- Kaveh Zakeri
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, United States of America
- Eve LoCastro
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, United States of America
- Jue Jiang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, United States of America
- Harini Veeraraghavan
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, United States of America
- Sharif Elguindi
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, United States of America
- Nancy Y Lee
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, United States of America
- Joseph O Deasy
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, United States of America
- Aditya P Apte
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, United States of America
21
Kalantar R, Lin G, Winfield JM, Messiou C, Lalondrelle S, Blackledge MD, Koh DM. Automatic Segmentation of Pelvic Cancers Using Deep Learning: State-of-the-Art Approaches and Challenges. Diagnostics (Basel) 2021; 11:1964. [PMID: 34829310 PMCID: PMC8625809 DOI: 10.3390/diagnostics11111964] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2021] [Revised: 10/14/2021] [Accepted: 10/19/2021] [Indexed: 12/18/2022] Open
Abstract
The recent rise of deep learning (DL) and its promising capabilities in capturing non-explicit detail from large datasets have attracted substantial research attention in the field of medical image processing. DL provides grounds for technological development of computer-aided diagnosis and segmentation in radiology and radiation oncology. Amongst the anatomical locations where recent auto-segmentation algorithms have been employed, the pelvis remains one of the most challenging due to large intra- and inter-patient soft-tissue variabilities. This review provides a comprehensive, non-systematic and clinically-oriented overview of 74 DL-based segmentation studies, published between January 2016 and December 2020, for bladder, prostate, cervical and rectal cancers on computed tomography (CT) and magnetic resonance imaging (MRI), highlighting the key findings, challenges and limitations.
Affiliation(s)
- Reza Kalantar
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Gigin Lin
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou and Chang Gung University, 5 Fuhsing St., Guishan, Taoyuan 333, Taiwan
- Jessica M. Winfield
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
- Christina Messiou
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
- Susan Lalondrelle
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
- Matthew D. Blackledge
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Dow-Mu Koh
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK

22
Ntakolia C, Kokkotis C, Moustakidis S, Tsaopoulos D. Identification of most important features based on a fuzzy ensemble technique: Evaluation on joint space narrowing progression in knee osteoarthritis patients. Int J Med Inform 2021; 156:104614. [PMID: 34662820] [DOI: 10.1016/j.ijmedinf.2021.104614]
Abstract
OBJECTIVE Feature selection (FS) is a crucial yet challenging processing step that aims to reduce the dimensionality of complex classification or regression problems. Various techniques have been proposed in the literature to address this challenge, with emphasis on medical applications. However, each of the existing FS algorithms comes with its own advantages and disadvantages, introducing a certain level of bias. MATERIALS AND METHODS To avoid bias and alleviate the shortcomings of single feature selection results, an ensemble FS methodology is proposed in this paper that aggregates the results of several FS algorithms (filter, wrapper and embedded ones). Fuzzy logic is employed to combine multiple feature importance scores, thus leading to a more robust selection of informative features. The proposed fuzzy ensemble FS methodology was applied to the problem of knee osteoarthritis (KOA) prediction, with special emphasis on the progression of joint space narrowing (JSN). The proposed FS methodology was integrated into an end-to-end machine learning pipeline, and a thorough experimental evaluation was conducted using data from the Osteoarthritis Initiative (OAI) database. Several classifiers were investigated for their suitability in the task of JSN prediction, and the best performing model was then post-hoc analyzed using the SHAP method. RESULTS The results showed that the proposed method presented better and more stable performance than other competitive feature selection methods, leading to an average accuracy of 78.14% using XGBoost on 31 selected features. The post-hoc explainability highlighted the important features that contribute to the classification of patients with JSN progression. CONCLUSIONS The proposed fuzzy feature selection approach improves the performance of the predictive models by selecting a small optimal subset of features compared to popular feature selection methods.
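The ensemble idea behind this entry can be sketched briefly: several feature-selection (FS) algorithms each score every feature, and the per-method scores are aggregated into one consensus ranking. The paper fuses the scores with fuzzy logic; the simpler rank averaging below only illustrates the aggregation step, and all data and names here are hypothetical.

```python
def rank_scores(scores):
    """Map raw importance scores to ranks in [0, 1] (1 = most important)."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0.0] * len(scores)
    for pos, idx in enumerate(order):
        ranks[idx] = pos / (len(scores) - 1)
    return ranks

def ensemble_select(score_lists, k):
    """Average the per-method ranks and return indices of the top-k features."""
    rank_lists = [rank_scores(s) for s in score_lists]
    n = len(rank_lists[0])
    consensus = [sum(r[i] for r in rank_lists) / len(rank_lists) for i in range(n)]
    return sorted(range(n), key=lambda i: -consensus[i])[:k]

# Three hypothetical FS methods (filter, wrapper, embedded) scoring five features:
filter_s   = [0.9, 0.1, 0.5, 0.3, 0.7]
wrapper_s  = [0.8, 0.2, 0.6, 0.1, 0.9]
embedded_s = [0.7, 0.3, 0.4, 0.2, 0.8]
top2 = ensemble_select([filter_s, wrapper_s, embedded_s], k=2)  # [4, 0]
```

Rank averaging is only one of many aggregation rules; the robustness claim in the abstract comes from combining methods with different biases, not from the specific fusion operator.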
Affiliation(s)
- Charis Ntakolia
- Hellenic National Center of COVID-19 Impact on Youth, University Mental Health Research Institute, Greece; School of Naval Architecture and Marine Engineering, National Technical University of Athens, 15772, Greece.
- Christos Kokkotis
- Institute for Bio-Economy and Agri-Technology, Center for Research and Technology Hellas, 38333, Greece; TEFAA, Department of Physical Education and Sport Science, University of Thessaly, 42100, Greece.
- Dimitrios Tsaopoulos
- Institute for Bio-Economy and Agri-Technology, Center for Research and Technology Hellas, 38333, Greece.

23
Zhang Z, Zhao T, Gay H, Zhang W, Sun B. Weaving attention U-net: A novel hybrid CNN and attention-based method for organs-at-risk segmentation in head and neck CT images. Med Phys 2021; 48:7052-7062. [PMID: 34655077] [DOI: 10.1002/mp.15287]
Abstract
PURPOSE In radiotherapy planning, manual contouring is labor-intensive and time-consuming. Accurate and robust automated segmentation models improve efficiency and treatment outcomes. We aim to develop a novel hybrid deep learning approach, combining convolutional neural networks (CNNs) and the self-attention mechanism, for rapid and accurate multi-organ segmentation on head and neck computed tomography (CT) images. METHODS Head and neck CT images with manual contours of 115 patients were retrospectively collected and used. We set the training/validation/testing split to 81/9/25 and used a 10-fold cross-validation strategy to select the best model parameters. The proposed hybrid model segmented 10 organs-at-risk (OARs) altogether for each case. The performance of the model was evaluated by three metrics: the Dice similarity coefficient (DSC), the 95% Hausdorff distance (HD95), and the mean surface distance (MSD). We also tested the performance of the model on the head and neck 2015 challenge dataset and compared it against several state-of-the-art automated segmentation algorithms. RESULTS The proposed method generated contours that closely resemble the ground truth for 10 OARs. On the head and neck 2015 challenge dataset, the DSC scores were 0.91 ± 0.02, 0.73 ± 0.10, 0.95 ± 0.03, 0.76 ± 0.08, 0.79 ± 0.05, 0.87 ± 0.05, 0.86 ± 0.08, 0.87 ± 0.03, and 0.87 ± 0.07 for the brain stem, chiasm, mandible, left/right optic nerve, left/right submandibular gland, and left/right parotid gland, respectively. These results demonstrate that the new weaving attention U-net (WAU-net) achieves superior or comparable performance on the segmentation of head and neck CT images. CONCLUSIONS We developed a deep learning approach that integrates the merits of CNNs and the self-attention mechanism. The proposed WAU-net can efficiently capture local and global dependencies and achieves state-of-the-art performance on the head and neck multi-organ segmentation task.
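The DSC used throughout these entries is defined as 2|A ∩ B| / (|A| + |B|) for a predicted mask A and a ground-truth mask B. A minimal sketch (not the authors' code) on binary masks stored as flat 0/1 lists; a 3D voxel grid flattens the same way:

```python
def dice(pred, truth):
    """Dice similarity coefficient of two equal-length binary masks."""
    inter = sum(p and t for p, t in zip(pred, truth))  # |A ∩ B|
    total = sum(pred) + sum(truth)                     # |A| + |B|
    return 1.0 if total == 0 else 2.0 * inter / total  # empty masks count as perfect

pred  = [0, 1, 1, 1, 0, 0]
truth = [0, 0, 1, 1, 1, 0]
score = dice(pred, truth)  # 2*2 / (3+3) ≈ 0.667
```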
Affiliation(s)
- Zhuangzhuang Zhang
- Department of Computer Science and Engineering, Washington University, St. Louis, Missouri, USA
- Tianyu Zhao
- Department of Radiation Oncology, Washington University School of Medicine, St. Louis, Missouri, USA
- Hiram Gay
- Department of Radiation Oncology, Washington University School of Medicine, St. Louis, Missouri, USA
- Weixiong Zhang
- Department of Computer Science and Engineering, Washington University, St. Louis, Missouri, USA
- Baozhou Sun
- Department of Radiation Oncology, Washington University School of Medicine, St. Louis, Missouri, USA

24
Nikolov S, Blackwell S, Zverovitch A, Mendes R, Livne M, De Fauw J, Patel Y, Meyer C, Askham H, Romera-Paredes B, Kelly C, Karthikesalingam A, Chu C, Carnell D, Boon C, D'Souza D, Moinuddin SA, Garie B, McQuinlan Y, Ireland S, Hampton K, Fuller K, Montgomery H, Rees G, Suleyman M, Back T, Hughes CO, Ledsam JR, Ronneberger O. Clinically Applicable Segmentation of Head and Neck Anatomy for Radiotherapy: Deep Learning Algorithm Development and Validation Study. J Med Internet Res 2021; 23:e26151. [PMID: 34255661] [PMCID: PMC8314151] [DOI: 10.2196/26151]
Abstract
BACKGROUND Over half a million individuals are diagnosed with head and neck cancer each year globally. Radiotherapy is an important curative treatment for this disease, but it requires time-consuming manual delineation of radiosensitive organs at risk. This planning process can delay treatment and introduces interoperator variability, resulting in downstream radiation dose differences. Although auto-segmentation algorithms offer a potentially time-saving solution, the challenges in defining, quantifying, and achieving expert performance remain. OBJECTIVE Adopting a deep learning approach, we aim to demonstrate a 3D U-Net architecture that achieves expert-level performance in delineating 21 distinct head and neck organs at risk commonly segmented in clinical practice. METHODS The model was trained on a data set of 663 deidentified computed tomography scans acquired in routine clinical practice, with segmentations taken from clinical practice as well as segmentations created by experienced radiographers as part of this research, all in accordance with consensus organ at risk definitions. RESULTS We demonstrated the model's clinical applicability by assessing its performance on a test set of 21 computed tomography scans from clinical practice, each with 21 organs at risk segmented by 2 independent experts. We also introduced the surface Dice similarity coefficient, a new metric for the comparison of organ delineation, to quantify the deviation between organ at risk surface contours rather than volumes, better reflecting the clinical task of correcting errors in automated organ segmentations. The model's generalizability was then demonstrated on 2 distinct open-source data sets, reflecting centers and countries different from those used for model training. CONCLUSIONS Deep learning is an effective and clinically applicable technique for the segmentation of the head and neck anatomy for radiotherapy. With appropriate validation studies and regulatory approvals, this system could improve the efficiency, consistency, and safety of radiotherapy pathways.
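The surface Dice similarity coefficient introduced in this study scores the fraction of each structure's surface lying within a tolerance of the other structure's surface, instead of comparing volumes. A simplified 2D sketch (not the authors' implementation): masks are sets of (x, y) pixels, boundaries use 4-connectivity, and the tolerance is Euclidean.

```python
import math

def boundary(mask):
    """Pixels of `mask` with at least one 4-neighbour outside the mask."""
    return {(x, y) for (x, y) in mask
            if {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)} - mask}

def surface_dice(a, b, tol):
    """Fraction of both boundaries lying within `tol` of the other boundary."""
    sa, sb = boundary(a), boundary(b)
    near = lambda p, surf: any(math.dist(p, q) <= tol for q in surf)
    hits = sum(near(p, sb) for p in sa) + sum(near(q, sa) for q in sb)
    return hits / (len(sa) + len(sb))

# Two 5x5 squares shifted by one pixel agree perfectly at a 1-pixel tolerance:
sq  = {(x, y) for x in range(5) for y in range(5)}
sq2 = {(x + 1, y) for (x, y) in sq}
perfect = surface_dice(sq, sq2, tol=1.0)  # 1.0
strict  = surface_dice(sq, sq2, tol=0.0)  # < 1.0: shifted edges no longer touch
```

The tolerance is what makes the metric clinically meaningful: deviations smaller than the acceptable contouring variability do not count as errors.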
Affiliation(s)
- Ruheena Mendes
- University College London Hospitals NHS Foundation Trust, London, United Kingdom
- Dawn Carnell
- University College London Hospitals NHS Foundation Trust, London, United Kingdom
- Cheng Boon
- Clatterbridge Cancer Centre NHS Foundation Trust, Liverpool, United Kingdom
- Derek D'Souza
- University College London Hospitals NHS Foundation Trust, London, United Kingdom
- Syed Ali Moinuddin
- University College London Hospitals NHS Foundation Trust, London, United Kingdom
- Geraint Rees
- University College London, London, United Kingdom

25
Lei W, Mei H, Sun Z, Ye S, Gu R, Wang H, Huang R, Zhang S, Zhang S, Wang G. Automatic segmentation of organs-at-risk from head-and-neck CT using separable convolutional neural network with hard-region-weighted loss. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.01.135]
26
Hague C, McPartlin A, Lee LW, Hughes C, Mullan D, Beasley W, Green A, Price G, Whitehurst P, Slevin N, van Herk M, West C, Chuter R. An evaluation of MR based deep learning auto-contouring for planning head and neck radiotherapy. Radiother Oncol 2021; 158:112-117. [PMID: 33636229] [DOI: 10.1016/j.radonc.2021.02.018]
Abstract
INTRODUCTION Auto-contouring models help define volumes consistently and reduce clinical workload. This study aimed to evaluate the cross-acquisition performance of a magnetic resonance (MR) deep learning auto-contouring model for organ at risk (OAR) delineation in head and neck radiotherapy. METHODS Two auto-contouring models within deep learning contouring expert (DLCExpert) were evaluated for OAR delineation: a CT model (modelCT) and an MR model (modelMRI). Models were trained to generate auto-contours for the bilateral parotid and submandibular glands. ModelMRI was trained on diagnostic images and tested on 10 diagnostic scans, 10 MR radiotherapy planning (RTP) scans and eight MR-Linac (MRL) scans; modelCT was tested on 10 CT planning scans. Goodness-of-fit scores, the Dice similarity coefficient (DSC) and the distance to agreement (DTA) were calculated for comparison. RESULTS ModelMRI contours improved the mean DSC and DTA relative to manual contours for the bilateral parotid and submandibular glands on the diagnostic and RTP MRs compared with the MRL sequence. Statistically significant differences were seen for modelMRI compared with modelCT for the left parotid (mean DTA 2.3 vs 2.8 mm), right parotid (mean DTA 1.9 vs 2.7 mm), left submandibular gland (mean DTA 2.2 vs 2.4 mm) and right submandibular gland (mean DTA 1.6 vs 3.2 mm). CONCLUSION A deep learning MR auto-contouring model shows promise for OAR auto-contouring, with statistically improved performance versus a CT-based model. Performance is affected by the method of MR acquisition, and further work is needed to improve its use with MRL images.
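The DTA figures above are surface-distance measures: for each point on one contour, the distance to the closest point on the other contour. A minimal symmetric mean-DTA sketch (not the study's implementation) on 2D contour point lists, with coordinates assumed to be in mm:

```python
import math

def mean_dta(auto_pts, manual_pts):
    """Symmetric mean distance to agreement between two contour point sets."""
    nearest = lambda p, pts: min(math.dist(p, q) for q in pts)
    fwd = [nearest(p, manual_pts) for p in auto_pts]  # auto -> manual
    bwd = [nearest(q, auto_pts) for q in manual_pts]  # manual -> auto
    return sum(fwd + bwd) / (len(fwd) + len(bwd))

auto   = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
manual = [(0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]
dta = mean_dta(auto, manual)  # every point sits 1 mm from the other contour -> 1.0
```

Unlike the overlap-based DSC, the DTA is reported in millimetres, which is why the abstract quotes both: they capture complementary aspects of contour agreement.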
Affiliation(s)
- C Hague
- Department of Head and Neck Clinical Oncology, The Christie NHS Foundation Trust, Manchester, UK.
- A McPartlin
- Department of Head and Neck Clinical Oncology, The Christie NHS Foundation Trust, Manchester, UK.
- L W Lee
- Department of Head and Neck Clinical Oncology, The Christie NHS Foundation Trust, Manchester, UK.
- C Hughes
- Department of Head and Neck Clinical Oncology, The Christie NHS Foundation Trust, Manchester, UK.
- D Mullan
- Department of Radiology, The Christie NHS Foundation Trust, Manchester, UK.
- W Beasley
- Christie Medical Physics and Engineering, The Christie NHS Foundation Trust, Manchester, UK.
- A Green
- Division of Cancer Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, The Christie NHS Foundation Trust, Manchester, UK.
- G Price
- Division of Cancer Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, The Christie NHS Foundation Trust, Manchester, UK.
- P Whitehurst
- Christie Medical Physics and Engineering, The Christie NHS Foundation Trust, Manchester, UK.
- N Slevin
- Department of Head and Neck Clinical Oncology, The Christie NHS Foundation Trust, Manchester, UK.
- M van Herk
- Christie Medical Physics and Engineering, The Christie NHS Foundation Trust, Manchester, UK; Division of Cancer Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, The Christie NHS Foundation Trust, Manchester, UK.
- C West
- Division of Cancer Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, The Christie NHS Foundation Trust, Manchester, UK.
- R Chuter
- Christie Medical Physics and Engineering, The Christie NHS Foundation Trust, Manchester, UK; Division of Cancer Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, The Christie NHS Foundation Trust, Manchester, UK.

27
Zhang J, Yang Y, Shao K, Bai X, Fang M, Shan G, Chen M. Fully convolutional network-based multi-output model for automatic segmentation of organs at risk in thorax. Sci Prog 2021; 104:368504211020161. [PMID: 34053337] [PMCID: PMC10454972] [DOI: 10.1177/00368504211020161]
Abstract
PURPOSE To propose a multi-output fully convolutional network (MOFCN) to segment the bilateral lungs, heart and spinal cord in planning thoracic computed tomography (CT) slices automatically and simultaneously. METHODS The MOFCN includes two components: one main backbone and three branches. The main backbone extracts features for the lungs, heart and spinal cord. The extracted features are transferred to three branches, each corresponding to one organ. The longest branch, which segments the spinal cord, is nine layers deep, including the input and output layers. The MOFCN was evaluated on 19,277 CT slices from 966 patients with cancer in the thorax. In these slices, the organs at risk (OARs) were delineated and validated by experienced radiation oncologists and served as ground truth for training and evaluation. The data from 61 randomly chosen patients were used for training and validation. The remaining 905 patients' slices were used for testing. The metric used to evaluate the similarity between the auto-segmented organs and their ground truth was the Dice coefficient. Besides, we compared the MOFCN with other published models. To assess the distinct output design and the impact of layer number and dilated convolution, we compared the MOFCN with a multi-label learning model and its variants. By analyzing the weaker results, we suggested possible solutions. RESULTS The MOFCN achieved a Dice of 0.95 ± 0.02 for the lungs, 0.91 ± 0.03 for the heart and 0.87 ± 0.06 for the spinal cord. Compared to other models, the MOFCN achieved comparable accuracy with the least time cost. CONCLUSION The results demonstrate the MOFCN's effectiveness. It uses fewer parameters to delineate three OARs simultaneously and automatically, and thus has a relatively low hardware requirement and potential for broad application.
Affiliation(s)
- Jie Zhang
- Institute of Cancer and Medicine, Chinese Academy of Sciences, Hangzhou, China
- Department of Radiation Physics, Cancer Hospital of the University of Chinese Academy of Sciences, Hangzhou, China
- Department of Radiation Physics, Zhejiang Cancer Hospital, Hangzhou, China
- Yiwei Yang
- Institute of Cancer and Medicine, Chinese Academy of Sciences, Hangzhou, China
- Department of Radiation Physics, Cancer Hospital of the University of Chinese Academy of Sciences, Hangzhou, China
- Department of Radiation Physics, Zhejiang Cancer Hospital, Hangzhou, China
- Kainan Shao
- Institute of Cancer and Medicine, Chinese Academy of Sciences, Hangzhou, China
- Department of Radiation Physics, Cancer Hospital of the University of Chinese Academy of Sciences, Hangzhou, China
- Department of Radiation Physics, Zhejiang Cancer Hospital, Hangzhou, China
- Xue Bai
- Institute of Cancer and Medicine, Chinese Academy of Sciences, Hangzhou, China
- Department of Radiation Physics, Cancer Hospital of the University of Chinese Academy of Sciences, Hangzhou, China
- Department of Radiation Physics, Zhejiang Cancer Hospital, Hangzhou, China
- Min Fang
- Institute of Cancer and Medicine, Chinese Academy of Sciences, Hangzhou, China
- Department of Radiation Oncology, Cancer Hospital of the University of Chinese Academy of Sciences, Hangzhou, China
- Department of Radiation Oncology, Zhejiang Cancer Hospital, Hangzhou, China
- Guoping Shan
- Institute of Cancer and Medicine, Chinese Academy of Sciences, Hangzhou, China
- Department of Radiation Physics, Cancer Hospital of the University of Chinese Academy of Sciences, Hangzhou, China
- Department of Radiation Physics, Zhejiang Cancer Hospital, Hangzhou, China
- Ming Chen
- Institute of Cancer and Medicine, Chinese Academy of Sciences, Hangzhou, China
- Department of Radiation Oncology, Cancer Hospital of the University of Chinese Academy of Sciences, Hangzhou, China
- Department of Radiation Oncology, Zhejiang Cancer Hospital, Hangzhou, China

28
Gou S, Tong N, Qi S, Yang S, Chin R, Sheng K. Self-channel-and-spatial-attention neural network for automated multi-organ segmentation on head and neck CT images. Phys Med Biol 2020; 65:245034. [PMID: 32097892] [DOI: 10.1088/1361-6560/ab79c3]
Abstract
Accurate segmentation of organs at risk (OARs) is necessary for adaptive head and neck (H&N) cancer treatment planning, but manual delineation is tedious, slow, and inconsistent. A self-channel-and-spatial-attention neural network (SCSA-Net) is developed for H&N OAR segmentation on CT images. To simultaneously ease the training and improve the segmentation performance, the proposed SCSA-Net utilizes the self-attention ability of the network. Spatial and channel-wise attention learning mechanisms are both employed to adaptively force the network to emphasize meaningful features and weaken irrelevant features simultaneously. The proposed network was first evaluated on a public dataset, which includes 48 patients, and then on a separate serial CT dataset, which contains ten patients who received weekly diagnostic fan-beam CT scans. On the second dataset, the accuracy of using SCSA-Net to track parotid and submandibular gland volume changes during radiotherapy treatment was quantified. The Dice similarity coefficient (DSC), positive predictive value (PPV), sensitivity (SEN), average surface distance (ASD), and 95% maximum surface distance (95SD) were calculated on the brainstem, optic chiasm, optic nerves, mandible, parotid glands, and submandibular glands to evaluate the proposed SCSA-Net. The proposed SCSA-Net consistently outperforms the state-of-the-art methods on the public dataset. Specifically, compared with Res-Net and SE-Net, which is built from residual blocks equipped with squeeze-and-excitation blocks, the DSC of the optic nerves and submandibular glands is improved by 0.06 and 0.03, and by 0.05 and 0.04, respectively, by the SCSA-Net. Moreover, the proposed method achieves statistically significant improvements in terms of DSC on all nine OARs over Res-Net and on eight of nine OARs over SE-Net. The trained network was able to achieve good segmentation results on the serial dataset, and the results were further improved after fine-tuning the model on the simulation CT images. For the parotid and submandibular glands, the volume changes of individual patients are highly consistent between the automated and manual segmentation (Pearson's correlation 0.97-0.99). The proposed SCSA-Net is computationally efficient, taking ~2 s per CT to perform segmentation.
Affiliation(s)
- Shuiping Gou
- Key Lab of Intelligent Perception and Image Understanding of Ministry of Education, Xidian University, Xi'an, Shaanxi 710071, People's Republic of China

29
Savadjiev P, Reinhold C, Martin D, Forghani R. Knowledge Based Versus Data Based: A Historical Perspective on a Continuum of Methodologies for Medical Image Analysis. Neuroimaging Clin N Am 2020; 30:401-415. [PMID: 33038992] [DOI: 10.1016/j.nic.2020.06.002]
Abstract
The advent of big data and deep learning algorithms has recently promoted a major shift toward data-driven methods in medical image analysis. However, the medical image analysis field has a long and rich history inclusive of both knowledge-driven and data-driven methodologies. In the present article, we provide a historical review of an illustrative sample of medical image analysis methods and locate them along a knowledge-driven versus data-driven continuum. In doing so, we highlight the historical importance as well as the current-day relevance of more traditional, knowledge-based artificial intelligence approaches and their complementarity with fully data-driven techniques such as deep learning.
Affiliation(s)
- Peter Savadjiev
- Department of Diagnostic Radiology, McGill University, Room B02 9389, 1001 Decarie Boulevard, Montreal, Quebec H4A 3J1, Canada; School of Computer Science, McGill University, Montreal, Quebec, Canada; Medical Physics Unit, Department of Oncology, McGill University, Montreal, Quebec, Canada; Augmented Intelligence & Precision Health Laboratory (AIPHL), Department of Diagnostic Radiology, Research Institute of the McGill University Health Centre, Montreal, Quebec, Canada.
- Caroline Reinhold
- Department of Diagnostic Radiology, McGill University, Room B02 9389, 1001 Decarie Boulevard, Montreal, Quebec H4A 3J1, Canada; Augmented Intelligence & Precision Health Laboratory (AIPHL), Department of Diagnostic Radiology, Research Institute of the McGill University Health Centre, Montreal, Quebec, Canada.
- Diego Martin
- Department of Diagnostic Radiology, McGill University, Room B02 9389, 1001 Decarie Boulevard, Montreal, Quebec H4A 3J1, Canada; Augmented Intelligence & Precision Health Laboratory (AIPHL), Department of Diagnostic Radiology, Research Institute of the McGill University Health Centre, Montreal, Quebec, Canada.
- Reza Forghani
- Department of Diagnostic Radiology, McGill University, Room B02 9389, 1001 Decarie Boulevard, Montreal, Quebec H4A 3J1, Canada; Augmented Intelligence & Precision Health Laboratory (AIPHL), Department of Diagnostic Radiology, Research Institute of the McGill University Health Centre, Montreal, Quebec, Canada; Segal Cancer Centre and Lady Davis Institute for Medical Research, Jewish General Hospital, Montreal, Quebec, Canada; Gerald Bronfman Department of Oncology, McGill University, Montreal, Quebec, Canada; Department of Otolaryngology-Head and Neck Surgery, McGill University, Montreal, Quebec, Canada.

30
Multi-Atlas Based Adaptive Active Contour Model with Application to Organs at Risk Segmentation in Brain MR Images. Ing Rech Biomed 2020. [DOI: 10.1016/j.irbm.2020.10.007]
31
Cai JC, Akkus Z, Philbrick KA, Boonrod A, Hoodeshenas S, Weston AD, Rouzrokh P, Conte GM, Zeinoddini A, Vogelsang DC, Huang Q, Erickson BJ. Fully Automated Segmentation of Head CT Neuroanatomy Using Deep Learning. Radiol Artif Intell 2020; 2:e190183. [PMID: 33937839] [DOI: 10.1148/ryai.2020190183]
Abstract
Purpose To develop a deep learning model that segments intracranial structures on head CT scans. Materials and Methods In this retrospective study, a primary dataset containing 62 normal noncontrast head CT scans from 62 patients (mean age, 73 years; age range, 27-95 years) acquired between August and December 2018 was used for model development. Eleven intracranial structures were manually annotated on the axial oblique series. The dataset was split into 40 scans for training, 10 for validation, and 12 for testing. After initial training, eight model configurations were evaluated on the validation dataset and the highest performing model was evaluated on the test dataset. Interobserver variability was reported using multirater consensus labels obtained from the test dataset. To ensure that the model learned generalizable features, it was further evaluated on two secondary datasets containing 12 volumes with idiopathic normal pressure hydrocephalus (iNPH) and 30 normal volumes from a publicly available source. Statistical significance was determined using categorical linear regression with P < .05. Results The overall Dice coefficient on the primary test dataset was 0.84 ± 0.05 (standard deviation). Performance ranged from 0.96 ± 0.01 (brainstem and cerebrum) to 0.74 ± 0.06 (internal capsule). Dice coefficients were comparable to expert annotations and exceeded those of existing segmentation methods. The model remained robust on external CT scans and scans demonstrating ventricular enlargement. The use of within-network normalization and class weighting facilitated learning of underrepresented classes. Conclusion Automated segmentation of CT neuroanatomy is feasible with a high degree of accuracy. The model generalized to external CT scans as well as scans demonstrating iNPH. Supplemental material is available for this article. © RSNA, 2020.
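The class weighting mentioned above gives rarer labels larger loss weights so that underrepresented structures still contribute to training. A common simple scheme, assumed here for illustration (the abstract does not specify the exact formula), is inverse-frequency weighting normalized to mean 1:

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Per-class weights proportional to 1/frequency, normalized to mean 1."""
    counts = Counter(labels)
    raw = {c: len(labels) / n for c, n in counts.items()}  # 1/frequency, scaled
    mean = sum(raw.values()) / len(raw)
    return {c: w / mean for c, w in raw.items()}

# 90% background (0), 9% a large structure (1), 1% a small structure (2):
labels = [0] * 90 + [1] * 9 + [2] * 1
weights = inverse_frequency_weights(labels)  # rare class 2 gets the largest weight
```

In practice the resulting per-class weights multiply the per-voxel loss terms, so errors on a small structure such as the internal capsule are penalized as heavily as errors on large ones.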
Collapse
Affiliation(s)
- Jason C Cai
- Departments of Radiology (J.C.C., K.A.P., S.H., P.R., G.M.C., D.C.V., Q.H., B.J.E.) and Cardiovascular Science (Z.A.), Mayo Clinic Rochester, 200 First St. SW, RO_PB_02_RIL, Rochester, MN 55905; Department of Radiology, Khon Kaen University, Khon Kaen, Thailand (A.B.); Department of Health Sciences Research, Mayo Clinic Florida, Jacksonville, Fla (A.D.W.); and Department of Internal Medicine, Ascension St. John Hospital, Detroit, Mich (A.Z.)
| | - Zeynettin Akkus
- Departments of Radiology (J.C.C., K.A.P., S.H., P.R., G.M.C., D.C.V., Q.H., B.J.E.) and Cardiovascular Science (Z.A.), Mayo Clinic Rochester, 200 First St. SW, RO_PB_02_RIL, Rochester, MN 55905; Department of Radiology, Khon Kaen University, Khon Kaen, Thailand (A.B.); Department of Health Sciences Research, Mayo Clinic Florida, Jacksonville, Fla (A.D.W.); and Department of Internal Medicine, Ascension St. John Hospital, Detroit, Mich (A.Z.)
- Kenneth A Philbrick, Arunnit Boonrod, Safa Hoodeshenas, Alexander D Weston, Pouria Rouzrokh, Gian Marco Conte, Atefeh Zeinoddini, David C Vogelsang, Qiao Huang, Bradley J Erickson
32
Mu G, Yang Y, Gao Y, Feng Q. [Multi-scale 3D convolutional neural network-based segmentation of head and neck organs at risk]. Nan Fang Yi Ke Da Xue Xue Bao 2020; 40:491-498. [PMID: 32895133] [DOI: 10.12122/j.issn.1673-4254.2020.04.07] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5]
Abstract
OBJECTIVE To establish an algorithm based on a 3D convolutional neural network to segment the organs at risk (OARs) in the head and neck on CT images. METHODS We propose an automatic segmentation algorithm for head and neck OARs based on V-Net. To enhance the feature expression ability of the 3D network, we combined the squeeze-and-excitation (SE) module with the residual convolution module in V-Net to increase the weight of the features that have greater contributions to the segmentation task. Using a multi-scale strategy, we completed organ segmentation with two cascaded models for localization and fine segmentation, and the input image was resampled to different resolutions during preprocessing so that the two models could focus on extracting global location information and local detail features, respectively. RESULTS Our experiments on segmentation of 22 OARs in the head and neck indicated that, compared with existing methods, the proposed method achieved better segmentation accuracy and efficiency: the average segmentation accuracy was improved by 9%, and the average test time was reduced from 33.82 s to 2.79 s. CONCLUSIONS The 3D convolutional neural network based on a multi-scale strategy can effectively and efficiently improve the accuracy of organ segmentation and can potentially be used in clinical settings to segment other organs and improve the efficiency of clinical treatment.
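The pipeline above cascades a coarse localization model with a fine segmentation model and inserts a squeeze-and-excitation (SE) module to recalibrate feature channels. A minimal NumPy sketch of the SE step follows; the layer weights, shapes, and reduction ratio are illustrative assumptions, not values from the paper:

```python
import numpy as np

def squeeze_excite(feat, w1, w2):
    """Squeeze-and-excitation recalibration of a (C, D, H, W) feature map.

    Squeeze: global average pooling per channel.
    Excite: two dense layers (ReLU, then sigmoid) yield one weight per
    channel, which rescales that channel's feature map.
    """
    c = feat.shape[0]
    z = feat.reshape(c, -1).mean(axis=1)            # squeeze -> (C,)
    h = np.maximum(0.0, w1 @ z)                     # bottleneck + ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))             # expand + sigmoid -> (C,)
    return feat * s[:, None, None, None]            # channel-wise rescale

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4, 4))            # toy 3D feature map, C = 8
w1 = rng.standard_normal((2, 8)) * 0.1              # reduction ratio 4
w2 = rng.standard_normal((8, 2)) * 0.1
out = squeeze_excite(feat, w1, w2)
print(out.shape)                                    # (8, 4, 4, 4)
```

In the paper's setting this recalibration sits inside V-Net's residual convolution blocks; here a two-layer dense bottleneck on the channel means is enough to show the squeeze (pooling) and excite (gating) stages.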
Affiliation(s)
- Guangrui Mu, Qianjin Feng
- School of Biomedical Engineering, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China
- Yanping Yang, Yaozong Gao
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200030, China
33
Liu Y, Lei Y, Fu Y, Wang T, Zhou J, Jiang X, McDonald M, Beitler JJ, Curran WJ, Liu T, Yang X. Head and neck multi-organ auto-segmentation on CT images aided by synthetic MRI. Med Phys 2020; 47:4294-4302. [PMID: 32648602] [DOI: 10.1002/mp.14378] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0]
Abstract
PURPOSE Because the manual contouring process is labor-intensive and time-consuming, segmentation of organs-at-risk (OARs) is a weak link in the radiotherapy treatment planning process. Our goal was to develop a synthetic MR (sMR)-aided dual pyramid network (DPN) for rapid and accurate head and neck multi-organ segmentation in order to expedite the treatment planning process. METHODS Paired CT images, MR images, and manual contours from 45 patients were included as our training dataset. Nineteen OARs were the target organs to be segmented. The proposed sMR-aided DPN method featured a deep attention strategy to effectively segment multiple organs. The performance of the sMR-aided DPN method was evaluated using five metrics: Dice similarity coefficient (DSC), 95th-percentile Hausdorff distance (HD95), mean surface distance (MSD), residual mean square distance (RMSD), and volume difference. Our method was further validated using the 2015 head and neck challenge data. RESULTS The contours generated by the proposed method closely resemble the ground-truth manual contours, as evidenced by encouraging quantitative results in terms of DSC on the 2015 head and neck challenge data. Mean DSC values of 0.91 ± 0.02, 0.73 ± 0.11, 0.96 ± 0.01, 0.78 ± 0.09/0.78 ± 0.11, 0.88 ± 0.04/0.88 ± 0.06, and 0.86 ± 0.08/0.85 ± 0.1 were achieved for the brain stem, chiasm, mandible, left/right optic nerves, left/right parotids, and left/right submandibular glands, respectively. CONCLUSIONS We demonstrated the feasibility of sMR-aided DPN for head and neck multi-organ delineation on CT images. Our method outperformed the other methods on the 2015 head and neck challenge data. The proposed method could significantly expedite the treatment planning process by rapidly segmenting multiple OARs.
Affiliation(s)
- Yingzi Liu, Yang Lei, Yabo Fu, Tonghe Wang, Jun Zhou, Xiaojun Jiang, Mark McDonald, Jonathan J Beitler, Walter J Curran, Tian Liu, Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
34
Tseng M, Ho F, Leong YH, Wong LC, Tham IW, Cheo T, Lee AW. Emerging radiotherapy technologies and trends in nasopharyngeal cancer. Cancer Commun (Lond) 2020; 40:395-405. [PMID: 32745354] [PMCID: PMC7494066] [DOI: 10.1002/cac2.12082] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3]
Abstract
Technology has always driven advances in radiotherapy treatment. In this review, we describe the main technological advances in radiotherapy over the past decades for the treatment of nasopharyngeal cancer (NPC) and highlight some of the pressing issues and challenges that remain. We aim to identify emerging trends in radiation medicine. These include advances in personalized medicine and advanced imaging modalities, standardization of planning and delineation, assessment of treatment response and adaptive re‐planning, impact of particle therapy, and role of artificial intelligence or automation in clinical care. In conclusion, we expect significant improvement in the therapeutic ratio of radiotherapy treatment for NPC over the next decade.
Affiliation(s)
- Michelle Tseng, Francis Ho, Yiat Horng Leong, Lea Choung Wong, Ivan Wk Tham, Timothy Cheo
- Radiation Oncology Centre, Mt Elizabeth Novena Hospital, Singapore, 329563, Singapore
- Anne Wm Lee
- Department of Clinical Oncology, the University of Hong Kong-Shenzhen Hospital, the University of Hong Kong, Hong Kong, 999077, P. R. China
35
Sheng K. Artificial intelligence in radiotherapy: a technological review. Front Med 2020; 14:431-449. [PMID: 32728877] [DOI: 10.1007/s11684-020-0761-1] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0]
Abstract
Radiation therapy (RT) is widely used to treat cancer. Technological advances in RT have occurred in the past 30 years. These advances, such as three-dimensional image guidance, intensity modulation, and robotics, created challenges and opportunities for the next breakthrough, in which artificial intelligence (AI) will possibly play important roles. AI will replace certain repetitive and labor-intensive tasks and improve the accuracy and consistency of others, particularly those with increased complexity because of technological advances. The improvement in efficiency and consistency is important for managing the increasing cancer patient burden on society. Furthermore, AI may provide new functionalities that facilitate satisfactory RT. The functionalities include superior images for real-time intervention and adaptive and personalized RT. AI may effectively synthesize and analyze big data for such purposes. This review describes the RT workflow and identifies areas, including imaging, treatment planning, quality assurance, and outcome prediction, that benefit from AI. This review primarily focuses on deep-learning techniques, although conventional machine-learning techniques are also mentioned.
Affiliation(s)
- Ke Sheng
- Department of Radiation Oncology, University of California, Los Angeles, CA, 90095, USA.
36
Vrtovec T, Močnik D, Strojan P, Pernuš F, Ibragimov B. Auto-segmentation of organs at risk for head and neck radiotherapy planning: From atlas-based to deep learning methods. Med Phys 2020; 47:e929-e950. [PMID: 32510603] [DOI: 10.1002/mp.14320] [Citation(s) in RCA: 80] [Impact Index Per Article: 20.0]
Abstract
Radiotherapy (RT) is one of the basic treatment modalities for cancer of the head and neck (H&N), which requires a precise spatial description of the target volumes and organs at risk (OARs) to deliver a highly conformal radiation dose to the tumor cells while sparing the healthy tissues. For this purpose, target volumes and OARs have to be delineated and segmented from medical images. As manual delineation is a tedious and time-consuming task subject to intra/interobserver variability, computerized auto-segmentation has been developed as an alternative. The field of medical imaging and RT planning has experienced increased interest in the past decade, with new emerging trends that shifted the field of H&N OAR auto-segmentation from atlas-based to deep learning-based approaches. In this review, we systematically analyzed 78 relevant publications on auto-segmentation of OARs in the H&N region from 2008 to date, and provided critical discussions and recommendations from various perspectives: image modality - both computed tomography and magnetic resonance image modalities are being exploited, but the potential of the latter should be explored more in the future; OAR - the spinal cord, brainstem, and major salivary glands are the most studied OARs, but additional experiments should be conducted for several less studied soft tissue structures; image database - several image databases with the corresponding ground truth are currently available for methodology evaluation, but should be augmented with data from multiple observers and multiple institutions; methodology - current methods have shifted from atlas-based to deep learning auto-segmentation, which is expected to become even more sophisticated; ground truth - delineation guidelines should be followed and participation of multiple experts from multiple institutions is recommended; performance metrics - the Dice coefficient, as the standard volumetric overlap metric, should be accompanied by at least one distance metric and combined with clinical acceptability scores and risk assessments; segmentation performance - the best performing methods achieve clinically acceptable auto-segmentation for several OARs; however, the dosimetric impact should also be studied to provide clinically relevant endpoints for RT planning.
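The recommendation above, to pair the Dice overlap with at least one distance metric, can be made concrete. The sketch below computes volumetric Dice and a 95th-percentile symmetric surface distance (an HD95-style metric) for two binary masks with NumPy/SciPy; the toy cube masks are illustrative, not data from any cited study:

```python
import numpy as np
from scipy import ndimage

def dice(a, b):
    """Volumetric Dice similarity coefficient of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hd95(a, b, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile symmetric surface distance (HD95-style)."""
    # distance of every voxel to the other mask, sampled on each surface
    dist_to_b = ndimage.distance_transform_edt(~b, sampling=spacing)
    dist_to_a = ndimage.distance_transform_edt(~a, sampling=spacing)
    surf_a = a & ~ndimage.binary_erosion(a)
    surf_b = b & ~ndimage.binary_erosion(b)
    d = np.concatenate([dist_to_b[surf_a], dist_to_a[surf_b]])
    return np.percentile(d, 95)

# Toy example: two 4-voxel cubes offset by one voxel along the z axis.
a = np.zeros((10, 10, 10), dtype=bool)
b = np.zeros((10, 10, 10), dtype=bool)
a[2:6, 2:6, 2:6] = True
b[3:7, 2:6, 2:6] = True
print(round(dice(a, b), 3))   # 0.75
print(hd95(a, b))             # 1.0
```

Because Dice is insensitive to where boundary errors occur, the two cubes score 0.75 even though every disagreement lies on a single face; the surface-distance percentile reports how far those boundary errors reach.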
Affiliation(s)
- Tomaž Vrtovec, Domen Močnik, Franjo Pernuš
- Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia
- Primož Strojan
- Institute of Oncology Ljubljana, Zaloška cesta 2, Ljubljana, SI-1000, Slovenia
- Bulat Ibragimov
- Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia; Department of Computer Science, University of Copenhagen, Universitetsparken 1, Copenhagen, D-2100, Denmark
37
Tan B, Wong DWK, Yow AP, Yao X, Schmetterer L. Three-dimensional choroidal vessel network quantification using swept source optical coherence tomography. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:1883-1886. [PMID: 33018368] [DOI: 10.1109/embc44109.2020.9175242] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
Precise three-dimensional segmentation of choroidal vessels helps us understand the development and progression of multiple ocular diseases, such as age-related macular degeneration and pathological myopia. Here we propose a novel automatic choroidal vessel segmentation framework for swept source optical coherence tomography (SS-OCT) to visualize and quantify three-dimensional choroidal vessel networks. The retinal pigment epithelium (RPE) was delineated from volumetric data, and en-face frames along the depth were extracted under the RPE. Choroidal vessels on the first en-face frame were labeled by adaptive thresholding, and each subsequent frame was segmented via segment propagation from the frame above and was in turn used as the reference for the next frame. The choroid boundary was determined by the structural similarity index between adjacent frames. The framework was tested on 3 × 3 mm SS-OCT volumes acquired by a prototype SS-OCT system (PlexElite 9000, Zeiss Meditec, Dublin, CA, US), and vessel metrics including perfusion density, vessel density, and mean vessel diameter were computed. Results from human subjects (N = 8) and non-human primates (N = 6) were summarized. Clinical Relevance: Accurate 3D choroid vessel segmentation can help clinicians better quantify blood perfusion, which can lead to improved diagnosis and management of retinal eye diseases.
38
Albertini F, Matter M, Nenoff L, Zhang Y, Lomax A. Online daily adaptive proton therapy. Br J Radiol 2020; 93:20190594. [PMID: 31647313] [PMCID: PMC7066958] [DOI: 10.1259/bjr.20190594] [Citation(s) in RCA: 81] [Impact Index Per Article: 20.3]
Abstract
It is recognized that the use of a single plan calculated on an image acquired some time before the treatment is generally insufficient to accurately represent the daily dose to the target and to the organs at risk. This is particularly true for protons, due to their finite physical range. Although this characteristic enables the generation of steep dose gradients, which is essential for highly conformal radiotherapy, it also tightens the dependency of the delivered dose on range accuracy. In particular, the use of an outdated patient anatomy is one of the most significant sources of range inaccuracy, thus affecting the quality of the planned dose distribution. A plan should therefore be adapted as soon as anatomical variations occur, ideally online. In this review, we describe in detail the different steps of the adaptive workflow and discuss the challenges and corresponding state-of-the-art developments, in particular for an online adaptive strategy.
Affiliation(s)
- Ye Zhang
- Paul Scherrer Institute, Center for Proton Therapy, Switzerland
39
Automatic delineation of the clinical target volume and organs at risk by deep learning for rectal cancer postoperative radiotherapy. Radiother Oncol 2020; 145:186-192. [PMID: 32044531] [DOI: 10.1016/j.radonc.2020.01.020] [Citation(s) in RCA: 37] [Impact Index Per Article: 9.3]
Abstract
BACKGROUND AND PURPOSE Manual delineation of clinical target volumes (CTVs) and organs at risk (OARs) is time-consuming, and automatic contouring tools lack clinical validation. We aimed to construct and validate the use of convolutional neural networks (CNNs) to set better contouring standards for rectal cancer radiotherapy. MATERIALS AND METHODS We retrospectively collected and evaluated computed tomography (CT) scans of 199 rectal cancer patients treated at our hospital from February 2018 to April 2019. Two CNNs were used: DeepLabv3+, which extracts high-level semantic information, for CTV and small intestine contouring, and ResUNet, which extracts low-level visual features, for bladder and femoral head contouring. Contouring quality was compared using the paired t test. Five-point objective grading was performed independently by two experienced radiation oncologists and verified by a third. The CNN manual correction time was recorded. RESULTS CTVs calculated using DeepLabv3+ (CTV-DeepLabv3+) had significant quantitative advantages over CTV-ResUNet (volumetric Dice coefficient, 0.88 vs 0.87, P = 0.0005; surface Dice coefficient, 0.79 vs 0.78, P = 0.008). Among 315 graded cases, DeepLabv3+ obtained the highest scores in 284 cases, consistent with the objective criteria, whereas CTV-ResUNet had the minimum mean manual correction time (7.29 min). DeepLabv3+ performed better than ResUNet for small intestine contouring, and ResUNet performed better for bladder and femoral head contouring. The manual correction time for OARs was <4 min for both models. CONCLUSION CNNs at various feature resolution levels delineate rectal cancer CTVs and OARs well, displaying high quality and requiring shorter computation and manual correction times.
40
Haq R, Berry SL, Deasy JO, Hunt M, Veeraraghavan H. Dynamic multiatlas selection-based consensus segmentation of head and neck structures from CT images. Med Phys 2019; 46:5612-5622. [PMID: 31587300] [DOI: 10.1002/mp.13854] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6]
Abstract
PURPOSE Manual delineation of head and neck (H&N) organ-at-risk (OAR) structures for radiation therapy planning is time consuming and highly variable. Therefore, we developed a dynamic multiatlas selection-based approach for fast and reproducible segmentation. METHODS Our approach dynamically selects and weights the appropriate number of atlases for weighted label fusion and generates segmentations and consensus maps indicating voxel-wise agreement between different atlases. Atlases were selected for a target as those exceeding an alignment weight called dynamic atlas attention index. Alignment weights were computed at the image level and called global weighted voting (GWV) or at the structure level and called structure weighted voting (SWV) by using a normalized metric computed as the sum of squared distances of computed tomography (CT)-radiodensity and modality-independent neighborhood descriptors (extracting edge information). Performance comparisons were performed using 77 H&N CT images from an internal Memorial Sloan-Kettering Cancer Center dataset (N = 45) and an external dataset (N = 32) using Dice similarity coefficient (DSC), Hausdorff distance (HD), 95th percentile of HD, median of maximum surface distance, and volume ratio error against expert delineation. Pairwise DSC accuracy comparisons of proposed (GWV, SWV) vs single best atlas (BA) or majority voting (MV) methods were performed using Wilcoxon rank-sum tests. RESULTS Both SWV and GWV methods produced significantly better segmentation accuracy than BA (P < 0.001) and MV (P < 0.001) for all OARs within both datasets. SWV generated the most accurate segmentations with DSC of: 0.88 for oral cavity, 0.85 for mandible, 0.84 for cord, 0.76 for brainstem and parotids, 0.71 for larynx, and 0.60 for submandibular glands. SWV's accuracy exceeded GWV's for submandibular glands (DSC = 0.60 vs 0.52, P = 0.019). 
CONCLUSIONS The contributed SWV and GWV methods generated more accurate automated segmentations than the other two multiatlas-based segmentation techniques. The consensus maps could be combined with segmentations to visualize voxel-wise consensus between atlases within OARs during manual review.
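The weighted voting that both GWV and SWV perform, in which each selected atlas votes for its label at every voxel with a vote scaled by its alignment weight, can be sketched briefly. The toy label maps and weights below are illustrative assumptions, not the paper's data:

```python
import numpy as np

def weighted_label_fusion(atlas_labels, weights):
    """Fuse candidate segmentations from several atlases by weighted voting.

    atlas_labels : (A, ...) integer array, one label map per atlas
    weights      : (A,) per-atlas alignment weights (image- or structure-level)
    Returns the label with the largest summed weight at each voxel.
    """
    labels = np.unique(atlas_labels)
    # summed weight each candidate label receives at each voxel
    votes = np.stack([
        np.tensordot(weights, (atlas_labels == lab).astype(float), axes=1)
        for lab in labels
    ])
    return labels[np.argmax(votes, axis=0)]

# Three toy atlases labeling two voxels; atlas 0 is best aligned.
atlas_labels = np.array([[0, 1],
                         [1, 1],
                         [1, 0]])
weights = np.array([0.6, 0.3, 0.2])
print(weighted_label_fusion(atlas_labels, weights))  # [0 1]
```

Dynamic atlas selection as described above would simply drop (or zero-weight) atlases whose weight falls below the attention-index threshold before fusing; structure-level (SWV) versus image-level (GWV) weighting only changes how `weights` is computed, not the fusion step itself.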
Affiliation(s)
- Rabia Haq, Sean L Berry, Joseph O Deasy, Margie Hunt, Harini Veeraraghavan
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, USA
41
Tang H, Chen X, Liu Y, Lu Z, You J, Yang M, Yao S, Zhao G, Xu Y, Chen T, Liu Y, Xie X. Clinically applicable deep learning framework for organs at risk delineation in CT images. Nat Mach Intell 2019. [DOI: 10.1038/s42256-019-0099-z] [Citation(s) in RCA: 63] [Impact Index Per Article: 12.6]
42
Cerrolaza JJ, Picazo ML, Humbert L, Sato Y, Rueckert D, Ballester MÁG, Linguraru MG. Computational anatomy for multi-organ analysis in medical imaging: A review. Med Image Anal 2019; 56:44-67. [DOI: 10.1016/j.media.2019.04.002] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6]
43
A review of feature selection methods in medical applications. Comput Biol Med 2019; 112:103375. [PMID: 31382212] [DOI: 10.1016/j.compbiomed.2019.103375] [Citation(s) in RCA: 194] [Impact Index Per Article: 38.8]
Abstract
Feature selection is a preprocessing technique that identifies the key features of a given problem. It has traditionally been applied in a wide range of problems that include biological data processing, finance, and intrusion detection systems. In particular, feature selection has been successfully used in medical applications, where it can not only reduce dimensionality but also help us understand the causes of a disease. We describe some basic concepts related to medical applications and provide some necessary background information on feature selection. We review the most recent feature selection methods developed for and applied in medical problems, covering prolific research fields such as medical imaging, biomedical signal processing, and DNA microarray data analysis. A case study of two medical applications that includes actual patient data is used to demonstrate the suitability of applying feature selection methods in medical problems and to illustrate how these methods work in real-world scenarios.
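As a concrete instance of the filter-style methods such reviews cover, the sketch below ranks features by absolute Pearson correlation with the label and keeps the top k; the synthetic data and the choice of correlation as the ranking criterion are assumptions for illustration only:

```python
import numpy as np

def select_top_k(X, y, k):
    """Filter-method feature selection: rank features by |Pearson r| with y."""
    Xc = X - X.mean(axis=0)                      # center each feature column
    yc = y - y.mean()
    r = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum()))
    return np.argsort(-np.abs(r))[:k]            # indices of the k best features

rng = np.random.default_rng(1)
y = rng.standard_normal(200)
X = np.column_stack([
    y + 0.1 * rng.standard_normal(200),   # strongly correlated with y
    rng.standard_normal(200),             # pure noise
    -y + 0.1 * rng.standard_normal(200),  # strongly anti-correlated with y
])
print(sorted(select_top_k(X, y, 2).tolist()))    # [0, 2]
```

Filter methods like this score each feature independently of any downstream model, which is what makes them cheap and interpretable; wrapper and embedded methods, also surveyed in this literature, instead use the predictive model itself to evaluate feature subsets.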
44
Sun XS, Li XY, Chen QY, Tang LQ, Mai HQ. Future of Radiotherapy in Nasopharyngeal Carcinoma. Br J Radiol 2019; 92:20190209. [PMID: 31265322] [DOI: 10.1259/bjr.20190209] [Citation(s) in RCA: 62] [Impact Index Per Article: 12.4]
Abstract
Nasopharyngeal carcinoma (NPC) is a malignancy with unique clinical and biological profiles, such as associated Epstein-Barr virus infection and high radiosensitivity. Radiotherapy has long been recognized as the mainstay of treatment for NPC. However, the efficacy gains from radical radiotherapy have reached a bottleneck in advanced patients, who are prone to develop recurrence and distant metastasis after treatment. The application of photon therapy makes radiation dose escalation possible in refractory cases and may provide a second chance for recurrent patients with less unrecoverable tissue damage. The concept of adaptive radiotherapy was put forward in consideration of target volume shrinkage during treatment; the replanning procedure offers better protection for the organs at risk, but the best timing and candidates for adaptive radiotherapy are still under debate. The current application of artificial intelligence in NPC mainly focuses on image recognition, auto-segmentation, and dose prediction. Although artificial intelligence is still in a developmental stage, its future is promising. To further improve the efficacy of NPC treatment, multimodality treatment is encouraged. In-depth studies on genetic and epigenetic variations help to explain the great heterogeneity among patients and could further be applied to precise screening and prediction, personalized radiotherapy, and the evolution of targeted drugs. Given the clinical benefit of immunotherapy in other cancers, its application in NPC, especially immune checkpoint inhibitors, is also of great potential. Results from ongoing clinical trials combining immunotherapy with radiotherapy in NPC are expected.
Affiliation(s)
- Xue-Song Sun, Xiao-Yun Li, Qiu-Yan Chen, Lin-Quan Tang, Hai-Qiang Mai
- Department of Nasopharyngeal Carcinoma, Sun Yat-sen University Cancer Center; State Key Laboratory of Oncology in South China; Collaborative Innovation Center for Cancer Medicine, 651 Dongfeng Road East, Guangzhou, P. R. China
45
|
Affiliation(s)
- Zheng Chang
- From the Department of Radiation Oncology, Duke University Medical Center, Box 3295, Durham, NC 27710
46
Tong N, Gou S, Yang S, Cao M, Sheng K. Shape constrained fully convolutional DenseNet with adversarial training for multiorgan segmentation on head and neck CT and low-field MR images. Med Phys 2019; 46:2669-2682. [PMID: 31002188 DOI: 10.1002/mp.13553] [Citation(s) in RCA: 42] [Impact Index Per Article: 8.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2018] [Revised: 04/14/2019] [Accepted: 04/15/2019] [Indexed: 11/09/2022] Open
Abstract
PURPOSE Image-guided radiotherapy provides images not only for patient positioning but also for online adaptive radiotherapy. Accurate delineation of organs-at-risk (OARs) on Head and Neck (H&N) CT and MR images is valuable to both initial treatment planning and adaptive planning, but manual contouring is laborious and inconsistent. A novel method based on the generative adversarial network (GAN) with shape constraint (SC-GAN) is developed for fully automated H&N OARs segmentation on CT and low-field MRI. METHODS AND MATERIALS A deeply supervised fully convolutional DenseNet is employed as the segmentation network for voxel-wise prediction. A convolutional neural network (CNN)-based discriminator network is then utilized to correct predicted errors and image-level inconsistency between the prediction and ground truth. An additional shape representation loss between the prediction and ground truth in the latent shape space is integrated into the segmentation and adversarial loss functions to reduce false positives and constrain the predicted shapes. The proposed segmentation method was first benchmarked on a public H&N CT database including 32 patients, and then on 25 0.35T MR images obtained from an MR-guided radiotherapy system. The OARs include the brainstem, optical chiasm, larynx (MR only), mandible, pharynx (MR only), parotid glands (both left and right), optical nerves (both left and right), and submandibular glands (both left and right, CT only). The performance of the proposed SC-GAN was compared with GAN alone and GAN with the shape constraint (SC) but without the DenseNet (SC-GAN-ResNet) to quantify the contributions of the shape constraint and DenseNet in the deep neural network segmentation. RESULTS The proposed SC-GAN slightly but consistently improved the segmentation accuracy on the benchmark H&N CT images compared with our previous deep segmentation network, which outperformed other published methods on the same or similar CT H&N datasets.
On the low-field MR dataset, the following average Dice indices were obtained using the improved SC-GAN: 0.916 (brainstem), 0.589 (optical chiasm), 0.816 (mandible), 0.703 (optical nerves), 0.799 (larynx), 0.706 (pharynx), and 0.845 (parotid glands). The average surface distances ranged from 0.68 mm (brainstem) to 1.70 mm (larynx). The 95% surface distance ranged from 1.48 mm (left optical nerve) to 3.92 mm (larynx). Compared with CT, using the 95% surface distance evaluation, the automated segmentation accuracy is higher on MR for the brainstem, optical chiasm, optical nerves, and parotids, and lower for the mandible. The SC-GAN performance is superior to SC-GAN-ResNet, which is more accurate than GAN alone on both the CT and MR datasets. The segmentation time for one patient is 14 seconds using a single GPU. CONCLUSION The performance of our previous shape-constrained fully convolutional networks for H&N segmentation is further improved by incorporating GAN and DenseNet. With the novel segmentation method, we showed that low-field MR images acquired on an MR-guided radiotherapy system can support accurate and fully automated segmentation of both bony and soft tissue OARs for adaptive radiotherapy.
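The Dice index and the average/95% surface distances reported above are standard segmentation metrics. The following is a minimal NumPy sketch of both on binary 3D masks; the function names and the brute-force surface extraction are illustrative, not taken from the paper:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * (a & b).sum() / denom if denom else 1.0

def _surface_voxels(mask):
    """Coordinates of foreground voxels with at least one 6-neighbour in the background."""
    m = np.pad(mask.astype(bool), 1)          # background border avoids wrap-around issues
    interior = m.copy()
    for axis in range(3):
        for shift in (1, -1):
            interior &= np.roll(m, shift, axis=axis)
    surf = m & ~interior                      # foreground minus fully-surrounded voxels
    return np.argwhere(surf[1:-1, 1:-1, 1:-1])

def surface_distances(a, b, spacing=(1.0, 1.0, 1.0)):
    """Distance (in mm, given voxel spacing) from each surface voxel of `a`
    to the nearest surface voxel of `b`; brute force, fine for small masks."""
    pa = _surface_voxels(a) * np.asarray(spacing)
    pb = _surface_voxels(b) * np.asarray(spacing)
    diffs = pa[:, None, :] - pb[None, :, :]
    return np.sqrt((diffs ** 2).sum(-1)).min(axis=1)
```

The mean of `surface_distances(a, b)` (usually symmetrized with `surface_distances(b, a)`) gives the average surface distance; `np.percentile(..., 95)` gives the 95% surface distance quoted in the abstract.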
Affiliation(s)
- Nuo Tong: Key Lab of Intelligent Perception and Image Understanding of Ministry of Education, Xidian University, Xi'an, Shaanxi, 710071, China; Department of Radiation Oncology, University of California-Los Angeles, Los Angeles, CA, 90095, USA
- Shuiping Gou, Shuyuan Yang: Key Lab of Intelligent Perception and Image Understanding of Ministry of Education, Xidian University, Xi'an, Shaanxi, 710071, China
- Minsong Cao, Ke Sheng: Department of Radiation Oncology, University of California-Los Angeles, Los Angeles, CA, 90095, USA
47
Head and Neck Cancer Adaptive Radiation Therapy (ART): Conceptual Considerations for the Informed Clinician. Semin Radiat Oncol 2019; 29:258-273. [PMID: 31027643 DOI: 10.1016/j.semradonc.2019.02.008] [Citation(s) in RCA: 46] [Impact Index Per Article: 9.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
Abstract
For nearly 2 decades, adaptive radiation therapy (ART) has been proposed as a method to account for changes in head and neck tumor and normal tissue to enhance therapeutic ratios. While technical advances in imaging, planning and delivery have allowed greater capacity for ART delivery, and a series of dosimetric explorations have consistently shown capacity for improvement, there remains a paucity of clinical trials demonstrating the utility of ART. Furthermore, while ad hoc implementation of head and neck ART is reported, systematic full-scale head and neck ART remains an as yet unreached reality. To some degree, this lack of scalability may be related to not only the complexity of ART, but also variability in the nomenclature and descriptions of what is encompassed by ART. Consequently, we present an overview of the history, current status, and recommendations for the future of ART, with an eye toward improving the clarity and description of head and neck ART for interested clinicians, noting practical considerations for implementation of an ART program or clinical trial. Process level considerations for ART are noted, reminding the reader that, paraphrasing the writer Elbert Hubbard, "Art is not a thing, it is a way."
48
Wu X, Udupa JK, Tong Y, Odhner D, Pednekar GV, Simone CB, McLaughlin D, Apinorasethkul C, Apinorasethkul O, Lukens J, Mihailidis D, Shammo G, James P, Tiwari A, Wojtowicz L, Camaratta J, Torigian DA. AAR-RT - A system for auto-contouring organs at risk on CT images for radiation therapy planning: Principles, design, and large-scale evaluation on head-and-neck and thoracic cancer cases. Med Image Anal 2019; 54:45-62. [PMID: 30831357 PMCID: PMC6499546 DOI: 10.1016/j.media.2019.01.008] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2018] [Revised: 12/04/2018] [Accepted: 01/26/2019] [Indexed: 12/25/2022]
Abstract
Contouring (segmentation) of Organs at Risk (OARs) in medical images is required for accurate radiation therapy (RT) planning. In current clinical practice, OAR contouring is performed with low levels of automation. Although several approaches have been proposed in the literature for improving automation, it is difficult to gain an understanding of how well these methods would perform in a realistic clinical setting. This is chiefly due to three key factors - small number of patient studies used for evaluation, lack of performance evaluation as a function of input image quality, and lack of precise anatomic definitions of OARs. In this paper, extending our previous body-wide Automatic Anatomy Recognition (AAR) framework to RT planning of OARs in the head and neck (H&N) and thoracic body regions, we present a methodology called AAR-RT to overcome some of these hurdles. AAR-RT follows AAR's 3-stage paradigm of model-building, object-recognition, and object-delineation. Model-building: Three key advances were made over AAR. (i) AAR-RT (like AAR) starts off with a computationally precise definition of the two body regions and all of their OARs. Ground truth delineations of OARs are then generated following these definitions strictly. We retrospectively gathered patient data sets and the associated contour data sets that have been created previously in routine clinical RT planning from our Radiation Oncology department and mended the contours to conform to these definitions. We then derived an Object Quality Score (OQS) for each OAR sample and an Image Quality Score (IQS) for each study, both on a 1-to-10 scale, based on quality grades assigned to each OAR sample following 9 key quality criteria. Only studies with high IQS and high OQS for all of their OARs were selected for model building. IQS and OQS were employed for evaluating AAR-RT's performance as a function of image/object quality. 
(ii) In place of the previous hand-crafted hierarchy for organizing OARs in AAR, we devised a method to find an optimal hierarchy for each body region. Optimality was based on minimizing object recognition error. (iii) In addition to the parent-to-child relationship encoded in the hierarchy in previous AAR, we developed a directed probability graph technique to further improve recognition accuracy by learning and encoding in the model "steady" relationships that may exist among OAR boundaries in the three orthogonal planes. Object-recognition: The two key improvements over the previous approach are (i) use of the optimal hierarchy for actual recognition of OARs in a given image, and (ii) refined recognition by making use of the trained probability graph. Object-delineation: We use a kNN classifier confined to the fuzzy object mask localized by the recognition step and then optimally fit the fuzzy mask to the kNN-derived voxel cluster to bring back shape constraints on the object. We evaluated AAR-RT on 205 thoracic and 298 H&N (total 503) studies, involving both planning and re-planning scans and a total of 21 organs (9 - thorax, 12 - H&N). The studies were gathered from two patient age groups for each gender - 40-59 years and 60-79 years. The number of 3D OAR samples analyzed from the two body regions was 4301. IQS and OQS tended to cluster at the two ends of the score scale. Accordingly, we considered two quality groups for each gender - good and poor. Good-quality data sets typically had OQS ≥ 6 and had distortions, artifacts, pathology, etc. in no more than 3 slices through the object. The number of model-worthy data sets used for training was 38 for thorax and 36 for H&N, and the remaining 479 studies were used for testing AAR-RT. Accordingly, we created 4 anatomy models, one each for: Thorax male (20 model-worthy data sets), Thorax female (18 model-worthy data sets), H&N male (20 model-worthy data sets), and H&N female (16 model-worthy data sets).
On "good" cases, AAR-RT's recognition accuracy was within 2 voxels and delineation boundary distance was within ∼1 voxel. This was similar to the variability observed between two dosimetrists in manually contouring 5-6 OARs in each of 169 studies. On "poor" cases, AAR-RT's errors hovered around 5 voxels for recognition and 2 voxels for boundary distance. The performance was similar on planning and replanning cases, and there was no gender difference in performance. AAR-RT's recognition operation is much more robust than delineation. Understanding object and image quality and how they influence performance is crucial for devising effective object recognition and delineation algorithms. OQS seems to be more important than IQS in determining accuracy. Streak artifacts arising from dental implants and fillings and beam hardening from bone pose the greatest challenge to auto-contouring methods.
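The delineation step described above confines a kNN classifier to the fuzzy object mask produced by recognition, so only voxels the recognition step proposes are ever classified. This is a toy sketch of that idea, using plain voxel intensity as the feature and hypothetical names throughout; it illustrates the confinement pattern, not the AAR-RT implementation (which additionally fits the fuzzy model mask to the kNN voxel cluster):

```python
import numpy as np

def knn_delineate(image, fuzzy_mask, train_feats, train_labels, k=3, tau=0.2):
    """Classify only voxels inside the localized fuzzy mask (prob > tau)
    with a brute-force kNN on voxel intensity; everything else stays background."""
    out = np.zeros(image.shape, dtype=bool)
    cand = np.argwhere(fuzzy_mask > tau)           # voxels proposed by the recognition step
    feats = image[tuple(cand.T)][:, None]          # one intensity feature per candidate voxel
    d = np.abs(feats - train_feats[None, :])       # distances to labelled training voxels
    nn = np.argsort(d, axis=1)[:, :k]              # indices of the k nearest training samples
    votes = train_labels[nn].mean(axis=1) >= 0.5   # majority vote: object vs background
    out[tuple(cand.T)] = votes
    return out
```

Because classification never leaves the recognition mask, gross false positives far from the organ are impossible by construction, which is the point of the two-stage recognition-then-delineation design.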
Affiliation(s)
- Xingyu Wu, Jayaram K Udupa, Yubing Tong, Dewey Odhner, Drew A Torigian: Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard Building, 3710 Hamilton Walk, 6th Floor, Rm 602W, Philadelphia, PA 19104, United States
- Gargi V Pednekar, David McLaughlin, Joseph Camaratta: Quantitative Radiology Solutions, 3624 Market Street, Suite 5E, Philadelphia, PA 19104, United States
- Charles B Simone: Department of Radiation Oncology, Maryland Proton Treatment Center, School of Medicine, University of Maryland 850W, Baltimore, MD 21201, United States
- Chavanon Apinorasethkul, Ontida Apinorasethkul, John Lukens, Dimitris Mihailidis, Geraldine Shammo, Paul James, Akhil Tiwari, Lisa Wojtowicz: Department of Radiation Oncology, University of Pennsylvania, Philadelphia, PA 19104, United States
49
Chan JW, Kearney V, Haaf S, Wu S, Bogdanov M, Reddick M, Dixit N, Sudhyadhom A, Chen J, Yom SS, Solberg TD. A convolutional neural network algorithm for automatic segmentation of head and neck organs at risk using deep lifelong learning. Med Phys 2019; 46:2204-2213. [PMID: 30887523 DOI: 10.1002/mp.13495] [Citation(s) in RCA: 38] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2018] [Revised: 02/19/2019] [Accepted: 02/19/2019] [Indexed: 01/17/2023] Open
Abstract
PURPOSE This study proposes a lifelong learning-based convolutional neural network (LL-CNN) algorithm as a superior alternative to single-task learning approaches for automatic segmentation of head and neck organs at risk (OARs). METHODS AND MATERIALS The LL-CNN was trained on twelve head and neck OARs simultaneously using a multitask learning framework. Once the weights of the shared network were established, the final multitask convolutional layer was replaced by a single-task convolutional layer. The single-task transfer learning network was trained on each OAR separately with early stopping. The accuracy of LL-CNN was assessed based on the Dice score and root-mean-square error (RMSE) compared to manually delineated contours set as the gold standard. LL-CNN was compared with 2D-UNet, 3D-UNet, a single-task CNN (ST-CNN), and a pure multitask CNN (MT-CNN). Training, validation, and testing followed Kaggle competition rules, where 160 patients were used for training, 20 were used for internal validation, and 20 in a separate test set were used to report final prediction accuracies. RESULTS On average, contours generated with LL-CNN had higher Dice coefficients and lower RMSE than 2D-UNet, 3D-UNet, ST-CNN, and MT-CNN. LL-CNN required ~72 hrs to train using a distributed learning framework on 2 Nvidia 1080Ti graphics processing units. LL-CNN required 20 s to predict all 12 OARs, which was approximately as fast as the fastest alternative methods, with the exception of MT-CNN. CONCLUSIONS This study demonstrated that for head and neck organs at risk, LL-CNN achieves a prediction accuracy superior to all alternative algorithms.
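The two-stage scheme above (a shared trunk trained jointly on all OARs, whose final multitask layer is then replaced by a fresh per-OAR single-task layer that is fine-tuned alone) can be sketched structurally. This toy NumPy version shows only the head-swap pattern; the class names are hypothetical, real convolutions are replaced by a linear layer, and the training loops are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

class SharedTrunk:
    """Stands in for the shared layers trained jointly across all OARs."""
    def __init__(self, d_in, d_hidden):
        self.W = rng.normal(size=(d_in, d_hidden))
    def __call__(self, x):
        return np.maximum(x @ self.W, 0.0)          # ReLU feature maps

class Head:
    """Final task layer: multitask (one output per OAR) or single-task (one output)."""
    def __init__(self, d_hidden, n_tasks):
        self.W = rng.normal(size=(d_hidden, n_tasks))
    def __call__(self, h):
        return 1.0 / (1.0 + np.exp(-(h @ self.W)))  # per-task voxel probabilities

# Stage 1: trunk + 12-task head are trained jointly (training loop omitted here).
trunk = SharedTrunk(d_in=32, d_hidden=64)
multi_head = Head(64, n_tasks=12)

# Stage 2: keep the trained trunk, swap in a fresh single-task head per OAR,
# and fine-tune only that head with early stopping.
single_head = Head(64, n_tasks=1)
x = rng.normal(size=(5, 32))                        # a batch of 5 feature vectors
probs = single_head(trunk(x))                       # shape (5, 1)
```

The design point is that the expensive shared representation is learned once across tasks, while each cheap task-specific head can be retrained independently, which is what distinguishes this lifelong/transfer setup from training twelve networks from scratch.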
Affiliation(s)
- Jason W Chan, Vasant Kearney, Samuel Haaf, Susan Wu, Madeleine Bogdanov, Mariah Reddick, Nayha Dixit, Atchar Sudhyadhom, Josephine Chen, Sue S Yom, Timothy D Solberg: Department of Radiation Oncology, University of California, San Francisco, CA, 94115, USA
50
Chen H, Lu W, Chen M, Zhou L, Timmerman R, Tu D, Nedzi L, Wardak Z, Jiang S, Zhen X, Gu X. A recursive ensemble organ segmentation (REOS) framework: application in brain radiotherapy. Phys Med Biol 2019; 64:025015. [DOI: 10.1088/1361-6560/aaf83c] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
|