1
Clark B, Hardcastle N, Johnston LA, Korte J. Transfer learning for auto-segmentation of 17 organs-at-risk in the head and neck: Bridging the gap between institutional and public datasets. Med Phys 2024; 51:4767-4777. [PMID: 38376454 DOI: 10.1002/mp.16997]
Abstract
BACKGROUND Auto-segmentation of organs-at-risk (OARs) in the head and neck (HN) on computed tomography (CT) images is a time-consuming component of the radiation therapy pipeline that suffers from inter-observer variability. Deep learning (DL) has shown state-of-the-art results in CT auto-segmentation, with larger and more diverse datasets showing better segmentation performance. Institutional CT auto-segmentation datasets have historically been small (n < 50) due to the time required for manual curation of images and anatomical labels. Recently, large public CT auto-segmentation datasets (n > 1000 aggregated) have become available through online repositories such as The Cancer Imaging Archive. Transfer learning is a technique applied when training samples are scarce, but a large dataset from a closely related domain is available. PURPOSE The purpose of this study was to investigate whether a large public dataset could be used in place of an institutional dataset (n > 500), or to augment performance via transfer learning, when building HN OAR auto-segmentation models for institutional use. METHODS Auto-segmentation models were trained on a large public dataset (public models) and a smaller institutional dataset (institutional models). The public models were fine-tuned on the institutional dataset using transfer learning (transfer models). We assessed both public model generalizability and transfer model performance by comparison with institutional models. Additionally, the effect of institutional dataset size on both transfer and institutional models was investigated. All DL models used a high-resolution, two-stage architecture based on the popular 3D U-Net. Model performance was evaluated using five geometric measures: the Dice similarity coefficient (DSC), surface DSC, 95th percentile Hausdorff distance, mean surface distance (MSD), and added path length.
RESULTS For a small subset of OARs (left/right optic nerve, spinal cord, left submandibular), the public models performed significantly better (p < 0.05) than, or showed no significant difference to, the institutional models under most of the metrics examined. For the remaining OARs, the public models were inferior to the institutional models, although performance differences were small (DSC ≤ 0.03, MSD < 0.5 mm) for seven OARs (brainstem, left/right lens, left/right parotid, mandible, right submandibular). The transfer models performed significantly better than the institutional models for seven OARs (brainstem, right lens, left/right optic nerve, left/right parotid, spinal cord) with a small margin of improvement (DSC ≤ 0.02, MSD < 0.4 mm). When numbers of institutional training samples were limited, public and transfer models outperformed the institutional models for most OARs (brainstem, left/right lens, left/right optic nerve, left/right parotid, spinal cord, and left/right submandibular). CONCLUSION Training auto-segmentation models with public data alone was suitable for a small number of OARs. Using only public data incurred a small performance deficit for most other OARs, when compared with institutional data alone, but may be preferable over time-consuming curation of a large institutional dataset. When a large institutional dataset was available, transfer learning with models pretrained on a large public dataset provided a modest performance improvement for several OARs. When numbers of institutional samples were limited, using the public dataset alone, or as a pretrained model, was beneficial for most OARs.
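The geometric measures reported above are standard overlap and distance metrics. As an illustration only (not code from the study), a minimal numpy sketch of the Dice similarity coefficient for binary masks:

```python
import numpy as np

def dice_similarity(pred, truth):
    """Dice similarity coefficient of two binary masks (1.0 = perfect overlap)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy 2D masks: ground truth is a 4x4 square, prediction is shifted one voxel.
truth = np.zeros((8, 8), dtype=int)
truth[2:6, 2:6] = 1
pred = np.zeros((8, 8), dtype=int)
pred[3:7, 2:6] = 1
print(dice_similarity(pred, truth))  # 0.75: 12 shared voxels, 16 in each mask
```

The surface-based metrics (surface DSC, HD95, MSD) compare boundary point sets rather than voxel overlap, which makes them more sensitive to contour edits than the volumetric DSC shown here.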
Affiliation(s)
- Brett Clark
- Department of Biomedical Engineering, University of Melbourne, Melbourne, Australia
- Department of Physical Sciences, Peter MacCallum Cancer Centre, Melbourne, Australia
- Nicholas Hardcastle
- Department of Physical Sciences, Peter MacCallum Cancer Centre, Melbourne, Australia
- Centre for Medical Radiation Physics, University of Wollongong, Wollongong, Australia
- Sir Peter MacCallum Department of Oncology, University of Melbourne, Melbourne, Australia
- Leigh A Johnston
- Department of Biomedical Engineering, University of Melbourne, Melbourne, Australia
- Melbourne Brain Centre Imaging Unit, University of Melbourne, Melbourne, Australia
- Graeme Clark Institute, University of Melbourne, Melbourne, Australia
- James Korte
- Department of Biomedical Engineering, University of Melbourne, Melbourne, Australia
- Department of Physical Sciences, Peter MacCallum Cancer Centre, Melbourne, Australia
2
Weisman AJ, Huff DT, Govindan RM, Chen S, Perk TG. Multi-organ segmentation of CT via convolutional neural network: impact of training setting and scanner manufacturer. Biomed Phys Eng Express 2023; 9:065021. [PMID: 37725928 DOI: 10.1088/2057-1976/acfb06]
Abstract
Objective. Automated organ segmentation on CT images can enable the clinical use of advanced quantitative software devices, but model performance sensitivities must be understood before widespread adoption can occur. The goal of this study was to investigate performance differences between Convolutional Neural Networks (CNNs) trained to segment one (single-class) versus multiple (multi-class) organs, and between CNNs trained on scans from a single manufacturer versus multiple manufacturers. Methods. The multi-class CNN was trained on CT images obtained from 455 whole-body PET/CT scans (413 for training, 42 for testing) taken with Siemens, GE, and Philips PET/CT scanners, in which 16 organs were segmented. The multi-class CNN was compared to 16 smaller single-class CNNs trained using the same data, but with segmentations of only one organ per model. In addition, CNNs trained on Siemens-only (N = 186) and GE-only (N = 219) scans (manufacturer-specific) were compared with CNNs trained on data from both Siemens and GE scanners (manufacturer-mixed). Segmentation performance was quantified using five performance metrics, including the Dice Similarity Coefficient (DSC). Results. The multi-class CNN performed well compared to previous studies, even in organs usually considered difficult auto-segmentation targets (e.g., pancreas, bowel). Segmentations from the multi-class CNN were significantly superior to those from smaller single-class CNNs in most organs, and the 16 single-class models took, on average, six times longer to segment all 16 organs compared to the single multi-class model. The manufacturer-mixed approach achieved minimally higher performance than the manufacturer-specific approach. Significance. A CNN trained on contours of multiple organs and CT data from multiple manufacturers yielded high-quality segmentations.
Such a model is an essential enabler of image processing in a software device that quantifies and analyzes such data to determine a patient's treatment response. To date, this activity of whole organ segmentation has not been adopted due to the intense manual workload and time required.
Affiliation(s)
- Amy J Weisman
- AIQ Solutions, Madison, WI, United States of America
- Daniel T Huff
- AIQ Solutions, Madison, WI, United States of America
- Song Chen
- Department of Nuclear Medicine, The First Hospital of China Medical University, Shenyang, Liaoning, People's Republic of China
3
Güneş AM, van Rooij W, Gulshad S, Slotman B, Dahele M, Verbakel W. Impact of imperfection in medical imaging data on deep learning-based segmentation performance: An experimental study using synthesized data. Med Phys 2023; 50:6421-6432. [PMID: 37118976 DOI: 10.1002/mp.16437]
Abstract
BACKGROUND Clinical data used to train deep learning models are often not clean data. They can contain imperfections in both the imaging data and the corresponding segmentations. PURPOSE This study investigates the influence of data imperfections on the performance of deep learning models for parotid gland segmentation. This was done in a controlled manner by using synthesized data. The insights this study provides may be used to make deep learning models better and more reliable. METHODS The data were synthesized by using the clinical segmentations, creating a pseudo ground-truth in the process. Three kinds of imperfections were simulated: incorrect segmentations, low image contrast, and artifacts in the imaging data. The severity of each imperfection was varied in five levels. Models resulting from training sets from each of the five levels were cross-evaluated with test sets from each of the five levels. RESULTS Using synthesized data led to almost perfect parotid gland segmentation when no error was added. Lowering the quality of the parotid gland segmentations used for training substantially lowered the model performance. Additionally, lowering the image quality of the training data by decreasing the contrast or introducing artifacts made the resulting models more robust to data containing those respective kinds of data imperfection. CONCLUSION This study demonstrated the importance of good-quality segmentations for deep learning training and it shows that using low-quality imaging data for training can enhance the robustness of the resulting models.
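The controlled-degradation setup described here can be mimicked with simple transforms. A hypothetical numpy sketch (the severity value and flip fraction below are illustrative, not the study's actual levels):

```python
import numpy as np

rng = np.random.default_rng(0)

def reduce_contrast(image, severity):
    """Blend voxel values toward the image mean; severity 0 = unchanged, 1 = flat."""
    return (1.0 - severity) * image + severity * image.mean()

def corrupt_mask(mask, flip_fraction):
    """Randomly flip a fraction of label voxels to mimic imperfect segmentations."""
    noisy = mask.copy()
    flips = rng.random(mask.shape) < flip_fraction
    noisy[flips] = 1 - noisy[flips]
    return noisy

image = rng.normal(100.0, 20.0, size=(16, 16))   # stand-in for a CT patch
low_contrast = reduce_contrast(image, 0.5)       # one illustrative severity level
print(low_contrast.std() < image.std())          # True: intensity spread halves
```

Sweeping `severity` and `flip_fraction` over a grid of levels, then cross-evaluating models trained at one level against test data at another, reproduces the shape of the experiment the abstract describes.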
Affiliation(s)
- Ward van Rooij
- Department of Radiation Oncology, Amsterdam UMC, Amsterdam, The Netherlands
- Sadaf Gulshad
- Faculty of Science, Universiteit van Amsterdam, Amsterdam, The Netherlands
- Ben Slotman
- Department of Radiation Oncology, Amsterdam UMC, Amsterdam, The Netherlands
- Max Dahele
- Department of Radiation Oncology, Amsterdam UMC, Amsterdam, The Netherlands
- Wilko Verbakel
- Department of Radiation Oncology, Amsterdam UMC, Amsterdam, The Netherlands
4
Pera Ó, Martínez Á, Möhler C, Hamans B, Vega F, Barral F, Becerra N, Jimenez R, Fernandez-Velilla E, Quera J, Algara M. Clinical Validation of Siemens' Syngo.via Automatic Contouring System. Adv Radiat Oncol 2023; 8:101177. [PMID: 36865668 PMCID: PMC9972393 DOI: 10.1016/j.adro.2023.101177]
Abstract
Purpose The manual delineation of organs at risk is a process that requires a great deal of time both for the technician and for the physician. Availability of validated software tools assisted by artificial intelligence would be of great benefit, as it would significantly improve the radiation therapy workflow, reducing the time required for segmentation. The purpose of this article is to validate the deep learning-based autocontouring solution integrated in syngo.via RT Image Suite VB40 (Siemens Healthineers, Forchheim, Germany). Methods and Materials For this purpose, we have used our own specific qualitative classification system, RANK, to evaluate more than 600 contours corresponding to 18 different automatically delineated organs at risk. Computed tomography data sets of 95 different patients were included: 30 patients with lung, 30 patients with breast, and 35 male patients with pelvic cancer. The automatically generated structures were reviewed in the Eclipse Contouring module independently by 3 observers: an expert physician, an expert technician, and a junior physician. Results There is a statistically significant difference between the Dice coefficient associated with RANK 4 compared with the coefficient associated with RANKs 2 and 3 (P < .001). In total, 64% of the evaluated structures received the maximum score, 4. Only 1% of the structures were classified with the lowest score, 1. The time savings for breast, thorax, and pelvis were 87.6%, 93.5%, and 82.2%, respectively. Conclusions Siemens' syngo.via RT Image Suite offers good autocontouring results and significant time savings.
Affiliation(s)
- Óscar Pera
- Radiation Oncology Department, Hospital del Mar, Barcelona, Spain
- Institut Hospital del Mar d'Investigacions Mèdiques, Barcelona, Spain
- Álvaro Martínez
- Radiation Oncology Department, Hospital del Mar, Barcelona, Spain
- Nuria Becerra
- Radiation Oncology Department, Hospital del Mar, Barcelona, Spain
- Rafael Jimenez
- Radiation Oncology Department, Hospital del Mar, Barcelona, Spain
- Enric Fernandez-Velilla
- Radiation Oncology Department, Hospital del Mar, Barcelona, Spain
- Institut Hospital del Mar d'Investigacions Mèdiques, Barcelona, Spain
- Jaume Quera
- Radiation Oncology Department, Hospital del Mar, Barcelona, Spain
- Institut Hospital del Mar d'Investigacions Mèdiques, Barcelona, Spain
- Manuel Algara
- Radiation Oncology Department, Hospital del Mar, Barcelona, Spain
- Institut Hospital del Mar d'Investigacions Mèdiques, Barcelona, Spain
- Autonomous University of Barcelona, Barcelona, Spain
5
Huang X, Liu Y, Li Y, Qi K, Gao A, Zheng B, Liang D, Long X. Deep Learning-Based Multiclass Brain Tissue Segmentation in Fetal MRIs. Sensors (Basel) 2023; 23:655. [PMID: 36679449 PMCID: PMC9862805 DOI: 10.3390/s23020655]
Abstract
Fetal brain tissue segmentation is essential for quantifying the presence of congenital disorders in the developing fetus. Manual segmentation of fetal brain tissue is cumbersome and time-consuming, so using an automatic segmentation method can greatly simplify the process. In addition, the fetal brain undergoes a variety of changes throughout pregnancy, such as increased brain volume, neuronal migration, and synaptogenesis. In this case, the contrast between tissues, especially between gray matter and white matter, constantly changes throughout pregnancy, increasing the complexity and difficulty of our segmentation. To reduce the burden of manual refinement of segmentation, we proposed a new deep learning-based segmentation method. Our approach utilized a novel attentional structural block, the contextual transformer block (CoT-Block), which was applied in the backbone network model of the encoder-decoder to guide the learning of dynamic attentional matrices and enhance image feature extraction. Additionally, in the last layer of the decoder, we introduced a hybrid dilated convolution module, which can expand the receptive field and retain detailed spatial information, effectively extracting the global contextual information in fetal brain MRI. We quantitatively evaluated our method according to several performance measures: dice, precision, sensitivity, and specificity. In 80 fetal brain MRI scans with gestational ages ranging from 20 to 35 weeks, we obtained an average Dice similarity coefficient (DSC) of 83.79%, an average Volume Similarity (VS) of 84.84%, and an average Hausdorff95 Distance (HD95) of 35.66 mm. We also used several advanced deep learning segmentation models for comparison under equivalent conditions, and the results showed that our method was superior to other methods and exhibited an excellent segmentation performance.
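Dilated convolutions, the building block of the hybrid module described above, enlarge the receptive field without adding parameters. A toy 1D numpy sketch (the paper's module operates on 2D/3D feature maps inside a CNN, which this simplification omits):

```python
import numpy as np

def dilated_conv1d(signal, kernel, dilation):
    """'Valid' 1D convolution with (dilation - 1) positions skipped between taps."""
    k = len(kernel)
    span = (k - 1) * dilation + 1            # receptive field of this layer
    taps = np.arange(k) * dilation           # input offsets read by each output
    out = np.array([signal[i + taps] @ kernel
                    for i in range(len(signal) - span + 1)])
    return out, span

signal = np.arange(10.0)
kernel = np.ones(3) / 3.0                    # simple 3-tap averaging kernel
for d in (1, 2, 4):                          # a hybrid module stacks several rates
    out, span = dilated_conv1d(signal, kernel, d)
    print(f"dilation={d}: receptive field={span}, first output={out[0]:.1f}")
```

With a fixed 3-tap kernel, the receptive field grows from 3 to 9 as the dilation rate goes from 1 to 4, which is why stacking mixed rates captures global context while retaining spatial detail.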
Affiliation(s)
- Xiaona Huang
- Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Department of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Yang Liu
- Shenzhen Maternity and Child Healthcare Hospital, Shenzhen 518027, China
- Yuhan Li
- Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Department of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Keying Qi
- Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Department of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Ang Gao
- Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Bowen Zheng
- Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Dong Liang
- Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Xiaojing Long
- Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
6
Sample C, Jung N, Rahmim A, Uribe C, Clark H. Development of a CT-Based Auto-Segmentation Model for Prostate-Specific Membrane Antigen (PSMA) Positron Emission Tomography-Delineated Tubarial Glands. Cureus 2022; 14:e31060. [DOI: 10.7759/cureus.31060]
7
Hazarika RA, Maji AK, Syiem R, Sur SN, Kandar D. Hippocampus Segmentation Using U-Net Convolutional Network from Brain Magnetic Resonance Imaging (MRI). J Digit Imaging 2022; 35:893-909. [PMID: 35304675 PMCID: PMC9485390 DOI: 10.1007/s10278-022-00613-y]
Abstract
The hippocampus is a part of the limbic system in the human brain that plays an important role in forming memories and dealing with intellectual abilities. In most of the neurological disorders related to dementia, such as Alzheimer's disease, the hippocampus is one of the earliest affected regions. Because there are no effective dementia drugs, an ambient assisted living approach may help to prevent or slow the progression of dementia. By segmenting and analyzing the size and shape of the hippocampus, it may be possible to classify the early dementia stages. Because of its complex structure, traditional image segmentation techniques cannot segment the hippocampus accurately. Machine learning (ML) is a well-known tool in medical image processing that can predict and deliver outcomes accurately by learning from its previous results. Convolutional Neural Networks (CNNs) are among the most popular ML algorithms. In this work, a U-Net convolutional network based approach is used for hippocampus segmentation from 2D brain images. It is observed that the original U-Net architecture can segment the hippocampus with an average performance rate of 93.6%, which outperforms all other discussed state-of-the-art methods. Using a filter size of [Formula: see text], the original U-Net architecture performs a sequence of convolutional processes. We tweaked the architecture further to extract more relevant features by replacing all [Formula: see text] kernels with three alternative kernels of sizes [Formula: see text], [Formula: see text], and [Formula: see text]. The modified architecture achieved an average performance rate of 96.5%, which convincingly outperforms the original U-Net model.
Affiliation(s)
- Ruhul Amin Hazarika
- Department of Information Technology, North Eastern Hill University, Shillong, Meghalaya, 793022, India.
- Arnab Kumar Maji
- Department of Information Technology, North Eastern Hill University, Shillong, Meghalaya, 793022, India
- Raplang Syiem
- Department of Information Technology, North Eastern Hill University, Shillong, Meghalaya, 793022, India
- Samarendra Nath Sur
- Department of Electronics and Communication Engineering, Sikkim Manipal Institute of Technology, Sikkim Manipal University, Majitar, Sikkim, 737136, India
- Debdatta Kandar
- Department of Information Technology, North Eastern Hill University, Shillong, Meghalaya, 793022, India.
8
Volpe S, Pepa M, Zaffaroni M, Bellerba F, Santamaria R, Marvaso G, Isaksson LJ, Gandini S, Starzyńska A, Leonardi MC, Orecchia R, Alterio D, Jereczek-Fossa BA. Machine Learning for Head and Neck Cancer: A Safe Bet?-A Clinically Oriented Systematic Review for the Radiation Oncologist. Front Oncol 2021; 11:772663. [PMID: 34869010 PMCID: PMC8637856 DOI: 10.3389/fonc.2021.772663]
Abstract
BACKGROUND AND PURPOSE Machine learning (ML) is emerging as a feasible approach to optimize patients' care path in Radiation Oncology. Applications include autosegmentation, treatment planning optimization, and prediction of oncological and toxicity outcomes. The purpose of this clinically oriented systematic review is to illustrate the potential and limitations of the most commonly used ML models in solving everyday clinical issues in head and neck cancer (HNC) radiotherapy (RT). MATERIALS AND METHODS Electronic databases were screened up to May 2021. Studies dealing with ML and radiomics were considered eligible. The quality of the included studies was rated by an adapted version of the qualitative checklist originally developed by Luo et al. All statistical analyses were performed using R version 3.6.1. RESULTS Forty-eight studies (21 on autosegmentation, four on treatment planning, 12 on oncological outcome prediction, 10 on toxicity prediction, and one on determinants of postoperative RT) were included in the analysis. The most common imaging modality was computed tomography (CT) (40%) followed by magnetic resonance (MR) (10%). Quantitative image features were considered in nine studies (19%). No significant differences were identified in global and methodological scores when works were stratified per their task (i.e., autosegmentation). DISCUSSION AND CONCLUSION The range of possible applications of ML in the field of HN Radiation Oncology is wide, albeit this area of research is relatively young. Overall, if not safe yet, ML is most probably a bet worth making.
Affiliation(s)
- Stefania Volpe
- Division of Radiation Oncology, European Institute of Oncology (IEO) Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS), Milan, Italy
- Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
- Matteo Pepa
- Division of Radiation Oncology, European Institute of Oncology (IEO) Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS), Milan, Italy
- Mattia Zaffaroni
- Division of Radiation Oncology, European Institute of Oncology (IEO) Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS), Milan, Italy
- Federica Bellerba
- Molecular and Pharmaco-Epidemiology Unit, Department of Experimental Oncology, European Institute of Oncology (IEO) Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS), Milan, Italy
- Riccardo Santamaria
- Division of Radiation Oncology, European Institute of Oncology (IEO) Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS), Milan, Italy
- Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
- Giulia Marvaso
- Division of Radiation Oncology, European Institute of Oncology (IEO) Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS), Milan, Italy
- Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
- Lars Johannes Isaksson
- Division of Radiation Oncology, European Institute of Oncology (IEO) Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS), Milan, Italy
- Sara Gandini
- Molecular and Pharmaco-Epidemiology Unit, Department of Experimental Oncology, European Institute of Oncology (IEO) Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS), Milan, Italy
- Anna Starzyńska
- Department of Oral Surgery, Medical University of Gdańsk, Gdańsk, Poland
- Maria Cristina Leonardi
- Division of Radiation Oncology, European Institute of Oncology (IEO) Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS), Milan, Italy
- Roberto Orecchia
- Scientific Directorate, European Institute of Oncology (IEO) Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS), Milan, Italy
- Daniela Alterio
- Division of Radiation Oncology, European Institute of Oncology (IEO) Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS), Milan, Italy
- Barbara Alicja Jereczek-Fossa
- Division of Radiation Oncology, European Institute of Oncology (IEO) Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS), Milan, Italy
- Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
9
Rohm M, Markmann M, Forsting J, Rehmann R, Froeling M, Schlaffke L. 3D Automated Segmentation of Lower Leg Muscles Using Machine Learning on a Heterogeneous Dataset. Diagnostics (Basel) 2021; 11:1747. [PMID: 34679445 PMCID: PMC8534967 DOI: 10.3390/diagnostics11101747]
Abstract
Quantitative MRI combines non-invasive imaging techniques to reveal alterations in muscle pathophysiology. Creating muscle-specific labels manually is time-consuming and requires an experienced examiner. Semi-automatic and fully automatic methods reduce segmentation time significantly. Current machine learning solutions are commonly trained on data from healthy subjects, using homogeneous databases with the same image contrast. While yielding high Dice scores (DS), those solutions are not applicable to different image contrasts and acquisitions. Therefore, the aim of our study was to evaluate the feasibility of automatic segmentation of a heterogeneous database. To create a heterogeneous dataset, we pooled lower leg muscle images from different studies with different contrasts and fields-of-view, containing healthy controls and diagnosed patients with various neuromuscular diseases. A second homogeneous database with uniform contrasts was created as a subset of the first database. We trained three 3D convolutional neural networks (CNNs) on those databases to test performance as compared to manual segmentation. All networks trained on heterogeneous data were able to predict seven muscles with a minimum average DS of 0.75. U-Net performed best when trained on the heterogeneous dataset (DS: 0.80 ± 0.10, AHD: 0.39 ± 0.35). ResNet and DenseNet yielded higher DS when trained on a heterogeneous dataset (both DS: 0.86), as compared to a homogeneous dataset (ResNet DS: 0.83, DenseNet DS: 0.76). In conclusion, a CNN trained on a heterogeneous dataset achieves more accurate labels for predicting a heterogeneous database of lower leg muscles than a CNN trained on a homogeneous dataset. We propose that a large heterogeneous database is needed to make automated segmentation feasible for different kinds of image acquisitions.
Affiliation(s)
- Marlena Rohm
- Department of Neurology, BG-University Hospital Bergmannsheil gGmbH, Ruhr-University Bochum, 44789 Bochum, Germany; (M.M.); (J.F.); (R.R.); (L.S.)
- Heimer Institute for Muscle Research, BG-University Hospital Bergmannsheil gGmbH, 44789 Bochum, Germany
- Marius Markmann
- Department of Neurology, BG-University Hospital Bergmannsheil gGmbH, Ruhr-University Bochum, 44789 Bochum, Germany; (M.M.); (J.F.); (R.R.); (L.S.)
- Johannes Forsting
- Department of Neurology, BG-University Hospital Bergmannsheil gGmbH, Ruhr-University Bochum, 44789 Bochum, Germany; (M.M.); (J.F.); (R.R.); (L.S.)
- Robert Rehmann
- Department of Neurology, BG-University Hospital Bergmannsheil gGmbH, Ruhr-University Bochum, 44789 Bochum, Germany; (M.M.); (J.F.); (R.R.); (L.S.)
- Department of Neurology, Klinikum Dortmund, University Witten-Herdecke, 44137 Dortmund, Germany
- Martijn Froeling
- Department of Radiology, University Medical Centre Utrecht, 3584 Utrecht, The Netherlands
- Lara Schlaffke
- Department of Neurology, BG-University Hospital Bergmannsheil gGmbH, Ruhr-University Bochum, 44789 Bochum, Germany; (M.M.); (J.F.); (R.R.); (L.S.)
- Heimer Institute for Muscle Research, BG-University Hospital Bergmannsheil gGmbH, 44789 Bochum, Germany
10
Fang Y, Wang J, Ou X, Ying H, Hu C, Zhang Z, Hu W. The impact of training sample size on deep learning-based organ auto-segmentation for head-and-neck patients. Phys Med Biol 2021; 66. [PMID: 34450599 DOI: 10.1088/1361-6560/ac2206]
Abstract
To investigate the impact of training sample size on the performance of deep learning-based organ auto-segmentation for head-and-neck cancer patients, a total of 1160 patients with head-and-neck cancer who received radiotherapy were enrolled in this study. Patient planning CT images and region-of-interest (ROI) delineations, including the brainstem, spinal cord, eyes, lenses, optic nerves, temporal lobes, parotids, larynx, and body, were collected. An evaluation dataset of 200 patients was randomly selected, and the Dice similarity index was used to evaluate model performance. Eleven training datasets with different sample sizes were randomly selected from the remaining 960 patients to train auto-segmentation models. All models used the same data augmentation methods, network structures, and training hyperparameters. A performance estimation model relating performance to training sample size, based on the inverse power law function, was established. Different performance change patterns were found for different organs. Six organs had the best performance with 800 training samples, and the others achieved their best performance with 600 or 400 training samples. The benefit of increasing the size of the training dataset gradually decreased. Compared to the best performance, optic nerves and lenses reached 95% of their best effect at 200 samples, and the other organs reached 95% of their best effect at 40 samples. For the fitting effect of the inverse power law function, the fitted root mean square errors of all ROIs were less than 0.03 (left eye: 0.024, others: <0.01), and the R-squared of all ROIs except for the body was greater than 0.5. The sample size has a significant impact on the performance of deep learning-based auto-segmentation. The relationship between sample size and performance depends on the inherent characteristics of the organ. In some cases, relatively small samples can achieve satisfactory performance.
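An inverse power law learning curve of the kind described here can be fit in a few lines. A sketch with synthetic data (not the study's measurements), grid-searching the exponent and solving the linear part by least squares:

```python
import numpy as np

def fit_inverse_power_law(n, y, c_grid=np.linspace(0.1, 2.0, 191)):
    """Fit y ~ a - b * n**(-c): grid-search the exponent c, solve a, b linearly."""
    best = None
    for c in c_grid:
        X = np.column_stack([np.ones_like(n), -(n ** -c)])  # y = a*1 + b*(-n^-c)
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        err = np.sum((X @ coef - y) ** 2)
        if best is None or err < best[0]:
            best = (err, coef[0], coef[1], c)
    return best[1], best[2], best[3]  # a (plateau), b (scale), c (exponent)

# Synthetic, noise-free learning curve with known parameters.
n = np.array([25.0, 50.0, 100.0, 200.0, 400.0, 800.0])
y = 0.85 - 0.5 * n ** -0.6
a, b, c = fit_inverse_power_law(n, y)
print(round(a, 3), round(b, 3), round(c, 2))  # recovers ~0.85, ~0.5, ~0.6
```

The fitted plateau `a` estimates the asymptotic Dice score, and the point where `a - b * n**(-c)` crosses 95% of `a` gives the kind of "95% of best effect" sample size the abstract reports.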
Affiliation(s)
- Yingtao Fang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, People's Republic of China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, People's Republic of China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, People's Republic of China
- Jiazhou Wang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, People's Republic of China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, People's Republic of China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, People's Republic of China
- Xiaomin Ou
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, People's Republic of China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, People's Republic of China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, People's Republic of China
- Hongmei Ying
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, People's Republic of China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, People's Republic of China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, People's Republic of China
- Chaosu Hu
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, People's Republic of China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, People's Republic of China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, People's Republic of China
- Zhen Zhang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, People's Republic of China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, People's Republic of China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, People's Republic of China
- Weigang Hu
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, People's Republic of China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, People's Republic of China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, People's Republic of China
|
11
|
Douglass MJJ, Keal JA. DeepWL: Robust EPID based Winston-Lutz analysis using deep learning, synthetic image generation and optical path-tracing. Phys Med 2021; 89:306-316. [PMID: 34492498 DOI: 10.1016/j.ejmp.2021.08.012] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/02/2021] [Revised: 08/03/2021] [Accepted: 08/27/2021] [Indexed: 12/23/2022] Open
Abstract
Radiation therapy requires clinical linear accelerators to be mechanically and dosimetrically calibrated to a high standard. One important quality assurance test is the Winston-Lutz test, which localises the radiation isocentre of the linac. In the current work we demonstrate a novel method of analysing EPID-based Winston-Lutz QA images using a deep learning model trained only on synthetic image data. In addition, we propose a novel method of generating the synthetic WL images and associated 'ground-truth' masks using an optical path-tracing engine to 'fake' mega-voltage EPID images. The model, called DeepWL, was trained on 1500 synthetic WL images using data augmentation techniques for 180 epochs. The model was built using Keras with a TensorFlow backend on an Intel Core i5-6500T CPU and trained in approximately 15 h. DeepWL was shown to produce ball-bearing and multi-leaf collimator field segmentations with mean Dice coefficients of 0.964 and 0.994, respectively, on previously unseen synthetic testing data. When DeepWL was applied to WL data measured on an EPID, the predicted mean displacements were shown to be statistically similar to those of the Canny edge detection method. However, the DeepWL predictions for the ball-bearing locations were shown to correlate better with manual annotations than the Canny edge detection algorithm. DeepWL was demonstrated to analyse Winston-Lutz images with an accuracy suitable for routine linac quality assurance, with some statistical evidence that it may outperform Canny edge detection in terms of segmentation robustness and the resultant displacement predictions.
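The Dice coefficient used to score the DeepWL segmentations has a standard definition that can be sketched for binary masks as flat 0/1 sequences; this is a minimal illustration, not the paper's implementation:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient of two binary masks (flat 0/1 sequences).

    DSC = 2 * |A intersect B| / (|A| + |B|); two empty masks are scored 1.0 here.
    """
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0
```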
Affiliation(s)
- Michael John James Douglass
- School of Physical Sciences, University of Adelaide, Adelaide 5005, South Australia, Australia; Department of Medical Physics, Royal Adelaide Hospital, Adelaide 5000, South Australia, Australia.
- James Alan Keal
- School of Physical Sciences, University of Adelaide, Adelaide 5005, South Australia, Australia
|
12
|
Watanabe S, Sakaguchi K, Murata D, Ishii K. Deep learning-based Hounsfield unit value measurement method for bolus tracking images in cerebral computed tomography angiography. Comput Biol Med 2021; 137:104824. [PMID: 34488029 DOI: 10.1016/j.compbiomed.2021.104824] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2021] [Revised: 08/28/2021] [Accepted: 08/28/2021] [Indexed: 10/20/2022]
Abstract
BACKGROUND Patient movement during bolus tracking (BT) impairs the accuracy of Hounsfield unit (HU) measurements. This study assesses the accuracy of measuring HU values in the internal carotid artery (ICA) using an original deep learning (DL)-based method as compared with the conventional region-of-interest (ROI) setting method. METHOD A total of 722 BT images of 127 patients who underwent cerebral computed tomography angiography were selected retrospectively and divided into training, validation, and test datasets. To segment the ICA using our proposed method, DL was performed using a convolutional neural network. The HU values in the ICA were obtained using our DL-based method and the ROI setting method. The ROI setting was performed with and without correcting for patient body movement (corrected ROI and settled ROI). We compared the proposed DL-based method with the settled ROI to evaluate HU value differences from the corrected ROI, based on whether or not patients experienced involuntary movement during BT image acquisition. RESULTS Differences in HU values from the corrected ROI for the settled ROI and the proposed method were 23.8 ± 12.7 HU and 9.0 ± 6.4 HU in patients with body movement, and 1.1 ± 1.6 HU and 3.9 ± 4.7 HU in patients without body movement, respectively. There were significant differences in both comparisons (P < 0.01). CONCLUSION The DL-based method can improve the accuracy of HU value measurements for the ICA in BT images with patient involuntary movement.
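Both the ROI-setting and segmentation-based methods above ultimately reduce to averaging HU values over a region. A minimal sketch of that measurement, assuming the image and ROI mask are flattened, equal-length voxel sequences (illustrative only, not the study's code):

```python
def mean_hu(image, mask):
    """Mean Hounsfield unit over the voxels where `mask` is truthy.

    `image` is a flattened CT slice of HU values; `mask` is a flat binary ROI.
    Returns NaN for an empty ROI.
    """
    vals = [v for v, m in zip(image, mask) if m]
    return sum(vals) / len(vals) if vals else float("nan")
```

A segmentation-derived mask that follows the vessel under patient movement would feed this same computation, whereas a fixed ("settled") ROI keeps the same voxel indices regardless of motion.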
Affiliation(s)
- Shota Watanabe
- Division of Positron Emission Tomography, Institute of Advanced Clinical Medicine, Kindai University Hospital, 377-2 Ohno-Higashi, Osakasayama, Osaka, 589-8511, Japan; Radiology Center, Kindai University Hospital, 377-2 Ohno-Higashi, Osakasayama, Osaka, 589-8511, Japan.
- Kenta Sakaguchi
- Radiology Center, Kindai University Hospital, 377-2 Ohno-Higashi, Osakasayama, Osaka, 589-8511, Japan.
- Daisuke Murata
- Radiology Center, Kindai University Hospital, 377-2 Ohno-Higashi, Osakasayama, Osaka, 589-8511, Japan.
- Kazunari Ishii
- Department of Radiology, Kindai University Faculty of Medicine, 377-2 Ohno-Higashi, Osakasayama, Osaka, 589-8511, Japan.
|
13
|
Park J, Lee JS, Oh D, Ryoo HG, Han JH, Lee WW. Quantitative salivary gland SPECT/CT using deep convolutional neural networks. Sci Rep 2021; 11:7842. [PMID: 33837284 PMCID: PMC8035179 DOI: 10.1038/s41598-021-87497-0] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2020] [Accepted: 03/30/2021] [Indexed: 11/08/2022] Open
Abstract
Quantitative single-photon emission computed tomography/computed tomography (SPECT/CT) using Tc-99m pertechnetate aids in evaluating salivary gland function. However, gland segmentation and quantitation of gland uptake are challenging. We developed a salivary gland SPECT/CT protocol with automated segmentation using a deep convolutional neural network (CNN). The protocol comprises SPECT/CT at 20 min, sialagogue stimulation, and SPECT at 40 min post-injection of Tc-99m pertechnetate (555 MBq). The 40-min SPECT was reconstructed using the 20-min CT after misregistration correction. Manual salivary gland segmentation for %injected dose (%ID) by human experts proved highly reproducible but took 15 min per scan. An automatic salivary segmentation method was developed using a modified 3D U-Net for end-to-end learning from the human experts (n = 333). The automatic segmentation performed comparably with human experts in voxel-wise comparison (mean Dice similarity coefficients of 0.81 for the parotid and 0.79 for the submandibular glands) and gland %ID correlation (R² = 0.93, parotid; R² = 0.95, submandibular), with an operating time of less than 1 min. The algorithm generated results comparable to the reference data. In conclusion, with the aid of a CNN, we developed a quantitative salivary gland SPECT/CT protocol feasible for clinical applications. The method saves analysis time and manual effort while reducing patients' radiation exposure.
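The gland %ID agreement above is reported as R² values. A minimal, illustrative computation of the coefficient of determination between reference and predicted values (not the study's code) is:

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination between reference values and predictions.

    R^2 = 1 - SS_res / SS_tot, where SS_res is the residual sum of squares and
    SS_tot is the total sum of squares around the reference mean.
    """
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```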
Affiliation(s)
- Junyoung Park
- Department of Biomedical Sciences, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Korea
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Korea
- Jae Sung Lee
- Department of Biomedical Sciences, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Korea
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Korea
- Institute of Radiation Medicine, Medical Research Center, Seoul National University, Seoul, Korea
- Dongkyu Oh
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Korea
- Department of Nuclear Medicine, Seoul National University Bundang Hospital, 82, Gumi-ro 173 Beon-gil, Bundang-gu, Seongnam-si, 13620, Gyeonggi-do, Korea
- Hyun Gee Ryoo
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Korea
- Department of Nuclear Medicine, Seoul National University Bundang Hospital, 82, Gumi-ro 173 Beon-gil, Bundang-gu, Seongnam-si, 13620, Gyeonggi-do, Korea
- Jeong Hee Han
- Department of Nuclear Medicine, Seoul National University Bundang Hospital, 82, Gumi-ro 173 Beon-gil, Bundang-gu, Seongnam-si, 13620, Gyeonggi-do, Korea
- Won Woo Lee
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Korea.
- Department of Nuclear Medicine, Seoul National University Bundang Hospital, 82, Gumi-ro 173 Beon-gil, Bundang-gu, Seongnam-si, 13620, Gyeonggi-do, Korea.
- Institute of Radiation Medicine, Medical Research Center, Seoul National University, Seoul, Korea.
|
14
|
van Rooij W, Dahele M, Nijhuis H, Slotman BJ, Verbakel WF. Strategies to improve deep learning-based salivary gland segmentation. Radiat Oncol 2020; 15:272. [PMID: 33261620 PMCID: PMC7709305 DOI: 10.1186/s13014-020-01721-1] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2020] [Accepted: 11/20/2020] [Indexed: 12/01/2022] Open
Abstract
BACKGROUND Deep learning-based delineation of organs-at-risk for radiotherapy purposes has been investigated to reduce the time-intensiveness and inter-/intra-observer variability associated with manual delineation. We systematically evaluated ways to improve the performance and reliability of deep learning for organ-at-risk segmentation, with the salivary glands as the paradigm. Improving deep learning performance is clinically relevant, with applications ranging from the initial contouring process to on-line adaptive radiotherapy. METHODS Various experiments were designed: increasing the amount of training data (1) with original images, (2) with traditional data augmentation and (3) with domain-specific data augmentation; (4) the influence of data quality was tested by comparing training/testing on clinical versus curated contours; (5) the effect of several custom cost functions was explored; (6) patient-specific Hounsfield unit windowing was applied during inference; and lastly, (7) the effect of model ensembles was analyzed. Model performance was measured with geometric parameters and model reliability with those parameters' variance. RESULTS A positive effect was observed from (1) increasing the training set size, (2/3) data augmentation, (6) patient-specific Hounsfield unit windowing and (7) model ensembles. The effects of the strategies on performance diminished when the base model performance was already 'high'. Combining all beneficial strategies increased the average Sørensen-Dice coefficient by about 4% and 3% and decreased the standard deviation by about 1% and 1% for the submandibular and parotid glands, respectively. CONCLUSIONS A subset of the strategies investigated had a positive effect on model performance and reliability. The clinical impact of such strategies would be an expected reduction in post-segmentation editing, facilitating the adoption of deep learning for automated salivary gland segmentation.
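Strategy (6) above, patient-specific Hounsfield unit windowing, can be sketched as clipping CT values to a window and rescaling them to [0, 1]. The level/width values in the test below are hypothetical illustrations, not those used in the study:

```python
def window_hu(image, level, width):
    """Clip HU values to [level - width/2, level + width/2], rescale to [0, 1].

    `image` is a flat sequence of HU values; `level` and `width` define the
    window (patient-specific values would be chosen per scan).
    """
    lo, hi = level - width / 2.0, level + width / 2.0
    out = []
    for v in image:
        v = min(max(v, lo), hi)           # clip to the window
        out.append((v - lo) / (hi - lo))  # rescale to [0, 1]
    return out
```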
Affiliation(s)
- Ward van Rooij
- Department of Radiation Oncology, Cancer Center Amsterdam, Vrije Universiteit Amsterdam, Amsterdam UMC, de Boelelaan 1117, Amsterdam, The Netherlands.
- Max Dahele
- Department of Radiation Oncology, Cancer Center Amsterdam, Vrije Universiteit Amsterdam, Amsterdam UMC, de Boelelaan 1117, Amsterdam, The Netherlands
- Hanne Nijhuis
- Department of Radiation Oncology, Cancer Center Amsterdam, Vrije Universiteit Amsterdam, Amsterdam UMC, de Boelelaan 1117, Amsterdam, The Netherlands
- Berend J Slotman
- Department of Radiation Oncology, Cancer Center Amsterdam, Vrije Universiteit Amsterdam, Amsterdam UMC, de Boelelaan 1117, Amsterdam, The Netherlands
- Wilko F Verbakel
- Department of Radiation Oncology, Cancer Center Amsterdam, Vrije Universiteit Amsterdam, Amsterdam UMC, de Boelelaan 1117, Amsterdam, The Netherlands
|
15
|
Jiang J, Hu YC, Tyagi N, Wang C, Lee N, Deasy JO, Berry S, Veeraraghavan H. Self-derived organ attention for unpaired CT-MRI deep domain adaptation based MRI segmentation. Phys Med Biol 2020; 65:205001. [PMID: 33027063 DOI: 10.1088/1361-6560/ab9fca] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
To develop and evaluate a deep learning method to segment parotid glands from MRI using unannotated MRI and unpaired expert-segmented CT datasets. We introduced a new self-derived organ attention deep learning network for combined CT-to-MRI image-to-image translation (I2I) and MRI segmentation, all trained as an end-to-end network. The expert segmentations available on CT scans were combined with the I2I-translated pseudo MR images to train the MRI segmentation network. Once trained, the MRI segmentation network alone is required. We introduced an organ attention discriminator that constrains the CT-to-MR generator to synthesize pseudo MR images that preserve organ geometry and appearance statistics as in real MRI. The I2I translation network training was regularized using the organ attention discriminator, a global image-matching discriminator, and cycle consistency losses. MRI segmentation training was regularized using cross-entropy loss. Segmentation performance was compared against multiple domain adaptation-based segmentation methods using the Dice similarity coefficient (DSC) and the Hausdorff distance at the 95th percentile (HD95). All networks were trained using 85 unlabeled T2-weighted fat-suppressed (T2wFS) MRIs and 96 expert-segmented CT scans. The performance upper limit was based on fully supervised MRI training using the 85 T2wFS MRIs with expert segmentations. Independent evaluation was performed on 77 MRIs never used in training. The proposed approach achieved the highest accuracy (left parotid: DSC 0.82 ± 0.03, HD95 2.98 ± 1.01 mm; right parotid: DSC 0.81 ± 0.05, HD95 3.14 ± 1.17 mm) compared to the other methods. This accuracy was close to the reference fully supervised MRI segmentation (left parotid: DSC 0.84 ± 0.04, HD95 2.24 ± 0.77 mm; right parotid: DSC 0.84 ± 0.06, HD95 2.32 ± 1.37 mm).
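The HD95 metric used above, the symmetric 95th-percentile Hausdorff distance, can be sketched for small 2D point sets as follows. This is an illustrative brute-force implementation (nearest-rank percentile), not the authors' code:

```python
import math

def _directed_distances(src, dst):
    """For each point in src, the distance to its nearest point in dst, sorted."""
    return sorted(min(math.dist(p, q) for q in dst) for p in src)

def _percentile95(values):
    """95th percentile of a sorted list using the nearest-rank method."""
    return values[min(len(values) - 1, math.ceil(0.95 * len(values)) - 1)]

def hd95(points_a, points_b):
    """Symmetric 95th-percentile Hausdorff distance between two point sets."""
    return max(_percentile95(_directed_distances(points_a, points_b)),
               _percentile95(_directed_distances(points_b, points_a)))
```

In practice the metric is computed on contour surface voxels; the brute-force nearest-point search here is only suitable for small point sets.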
Affiliation(s)
- Jue Jiang
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY 10065, United States of America
|
16
|
Xin S, Shi H, Jide A, Zhu M, Ma C, Liao H. Automatic lesion segmentation and classification of hepatic echinococcosis using a multiscale-feature convolutional neural network. Med Biol Eng Comput 2020; 58:659-668. [PMID: 31950330 DOI: 10.1007/s11517-020-02126-8] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2019] [Accepted: 01/07/2020] [Indexed: 11/24/2022]
Abstract
Hepatic echinococcosis (HE) is a life-threatening liver disease caused by parasites that requires a precise diagnosis and proper treatment. To assess HE lesions accurately, we propose a novel automatic HE lesion segmentation and classification network that contains lesion region positioning (LRP) and lesion region segmenting (LRS) modules. First, we used the LRP module to obtain the probability map of the lesion distribution and the position of the lesion. Then, based on the result of the LRP module, we used the LRS module to precisely segment the HE lesions within the high-probability region. Finally, we classified the HE lesions and identified the lesion types with a convolutional neural network (CNN). The entire dataset was delineated by the hospital's senior radiologist. We collected CT slices of 160 patients from Qinghai Provincial People's Hospital. The Dice score of the final segmentation result reached 89.89%. For the classification of cystic vs. alveolar echinococcosis and of calcified vs. noncalcified lesions, the accuracies were 80.32% and 82.45%, the sensitivities were 72.41% and 75.17%, the specificities were 83.72% and 86.04%, the NPVs were 80.01% and 86.96%, the PPVs were 80.45% and 81.74%, and the areas under the ROC curves were 0.8128 and 0.8205, respectively.
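The accuracy, sensitivity, specificity, PPV, and NPV values reported above all derive from a binary confusion matrix. A minimal illustrative helper (the counts in the test are hypothetical, not from the study):

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),  # true positive rate (recall)
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # positive predictive value (precision)
        "npv": tn / (tn + fn),          # negative predictive value
    }
```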
Affiliation(s)
- Shenghai Xin
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, China; Qinghai Provincial People's Hospital, Xining, 810007, China
- Huabei Shi
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, China
- A Jide
- Qinghai Provincial People's Hospital, Xining, 810007, China
- Mingyu Zhu
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, China
- Cong Ma
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, China
- Hongen Liao
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, China.
|
17
|
Head and Neck Cancer Adaptive Radiation Therapy (ART): Conceptual Considerations for the Informed Clinician. Semin Radiat Oncol 2019; 29:258-273. [PMID: 31027643 DOI: 10.1016/j.semradonc.2019.02.008] [Citation(s) in RCA: 46] [Impact Index Per Article: 9.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
Abstract
For nearly 2 decades, adaptive radiation therapy (ART) has been proposed as a method to account for changes in head and neck tumor and normal tissue in order to enhance therapeutic ratios. While technical advances in imaging, planning, and delivery have allowed greater capacity for ART delivery, and a series of dosimetric explorations have consistently shown capacity for improvement, there remains a paucity of clinical trials demonstrating the utility of ART. Furthermore, while ad hoc implementation of head and neck ART has been reported, systematic full-scale head and neck ART remains an as-yet-unreached reality. To some degree, this lack of scalability may be related not only to the complexity of ART, but also to variability in the nomenclature and descriptions of what is encompassed by ART. Consequently, we present an overview of the history, current status, and recommendations for the future of ART, with an eye toward improving the clarity and description of head and neck ART for interested clinicians, noting practical considerations for the implementation of an ART program or clinical trial. Process-level considerations for ART are noted, reminding the reader that, paraphrasing the writer Elbert Hubbard, "Art is not a thing, it is a way."
|