101
Elisabeth Olsson C, Suresh R, Niemelä J, Akram SU, Valdman A. Autosegmentation based on different-sized training datasets of consistently-curated volumes and impact on rectal contours in prostate cancer radiation therapy. Phys Imaging Radiat Oncol 2022; 22:67-72. PMID: 35572041; PMCID: PMC9092250; DOI: 10.1016/j.phro.2022.04.007.
Abstract
Background and purpose Autosegmentation techniques are emerging as time-saving means for radiation therapy (RT) contouring, but the understanding of their performance on different datasets is limited. The aim of this study was to determine agreement between rectal volumes by an existing autosegmentation algorithm and manually-delineated rectal volumes in prostate cancer RT. We also investigated contour quality by different-sized training datasets and consistently-curated volumes for retrained versions of this same algorithm. Materials and methods Single-institutional data from 624 prostate cancer patients treated to 50–70 Gy were used. Manually-delineated clinical rectal volumes (clinical) and consistently-curated volumes recontoured to one anatomical guideline (reference) were compared to autocontoured volumes by a commercial autosegmentation tool based on deep-learning (v1; n = 891, multiple-institutional data) and retrained versions using subsets of the curated volumes (v32/64/128/256; n = 32/64/128/256). Evaluations included dose-volume histogram metrics, Dice similarity coefficients, and Hausdorff distances; differences between groups were quantified using parametric or non-parametric hypothesis testing. Results Volumes by v1-256 (76–78 cm3) were larger than reference (75 cm3) and clinical (76 cm3). Mean doses by v1-256 (24.2–25.2 Gy) were closer to reference (24.2 Gy) than to clinical (23.8 Gy). Maximum doses were similar for all volumes (65.7–66.0 Gy). Dice for v1-256 and reference (0.87–0.89) were higher than for v1-256 and clinical (0.86–0.87) with corresponding Hausdorff comparisons including reference smaller than comparisons including clinical (5–6 mm vs. 7–8 mm). Conclusion Using small single-institutional RT datasets with consistently-defined rectal volumes when training autosegmentation algorithms created contours of similar quality as the same algorithm trained on large multi-institutional datasets.
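The Dice similarity coefficient used above measures volumetric overlap between two delineations. As a minimal illustrative sketch (not the commercial tool's implementation), it can be computed directly from two binary masks:

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Volumetric Dice similarity coefficient of two binary masks:
    2|A ∩ B| / (|A| + |B|), from 0 (no overlap) to 1 (identical)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # convention: two empty masks agree perfectly
    return 2.0 * np.logical_and(a, b).sum() / total

# Toy example: two partially overlapping 6x6 squares on a 10x10 grid
a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True
b = np.zeros((10, 10), dtype=bool); b[4:10, 4:10] = True
# intersection is the 4x4 region [4:8, 4:8] -> Dice = 2*16 / (36+36) ≈ 0.44
```

The Hausdorff distances reported alongside Dice instead capture worst-case boundary disagreement, which is why the two metrics can rank contour sets differently.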
102
|
Recent Applications of Artificial Intelligence in Radiotherapy: Where We Are and Beyond. Appl Sci (Basel) 2022. DOI: 10.3390/app12073223.
Abstract
In recent decades, artificial intelligence (AI) tools have been applied in many medical fields, opening the possibility of finding novel solutions for managing very complex and multifactorial problems, such as those commonly encountered in radiotherapy (RT). We conducted a PubMed and Scopus search to identify the AI application field in RT limited to the last four years. In total, 1824 original papers were identified, and 921 were analyzed by considering the phase of the RT workflow according to the applied AI approaches. AI permits the processing of large quantities of information, data, and images stored in RT oncology information systems, a process that is not manageable for individuals or groups. AI allows the iterative application of complex tasks in large datasets (e.g., delineating normal tissues or finding optimal planning solutions) and might support the entire community working in the various sectors of RT, as summarized in this overview. AI-based tools are now on the roadmap for RT and have been applied to the entire workflow, mainly for segmentation, the generation of synthetic images, and outcome prediction. Several concerns were raised, including the need for harmonization while overcoming ethical, legal, and skill barriers.
103
|
Fully Automated Thrombus Segmentation on CT Images of Patients with Acute Ischemic Stroke. Diagnostics (Basel) 2022; 12:698. PMID: 35328251; PMCID: PMC8947334; DOI: 10.3390/diagnostics12030698.
Abstract
Thrombus imaging characteristics are associated with treatment success and functional outcomes in stroke patients. However, assessing these characteristics based on manual annotations is labor intensive and subject to observer bias. Therefore, we aimed to create an automated pipeline for consistent and fast full thrombus segmentation. We used multi-center, multi-scanner datasets of anterior circulation stroke patients with baseline NCCT and CTA for training (n = 228) and testing (n = 100). We first found the occlusion location using StrokeViewer LVO and created a bounding box around it. Subsequently, we trained dual modality U-Net based convolutional neural networks (CNNs) to segment the thrombus inside this bounding box. We experimented with: (1) U-Net with two input channels for NCCT and CTA, and U-Nets with two encoders where (2) concatenate, (3) add, and (4) weighted-sum operators were used for feature fusion. Furthermore, we proposed a dynamic bounding box algorithm to adjust the bounding box. The dynamic bounding box algorithm reduces the missed cases but does not improve Dice. The two-encoder U-Net with a weighted-sum feature fusion shows the best performance (surface Dice 0.78, Dice 0.62, and 4% missed cases). Final segmentation results have high spatial accuracies and can therefore be used to determine thrombus characteristics and potentially benefit radiologists in clinical practice.
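The three fusion operators compared for the dual-encoder U-Nets combine feature maps from the NCCT and CTA encoders. A rough numpy sketch of the idea (the paper's exact formulation, including how the fusion weight is learned, is not given in the abstract and is assumed here):

```python
import numpy as np

def fuse(feat_ncct, feat_cta, mode="weighted_sum", w=0.5):
    """Combine two same-shaped encoder feature maps (channels, H, W).
    'w' stands in for a learnable fusion weight."""
    if mode == "concatenate":      # stack along the channel axis
        return np.concatenate([feat_ncct, feat_cta], axis=0)
    if mode == "add":              # element-wise sum
        return feat_ncct + feat_cta
    if mode == "weighted_sum":     # convex combination of the two maps
        return w * feat_ncct + (1.0 - w) * feat_cta
    raise ValueError(mode)

x = np.ones((4, 8, 8))   # dummy NCCT encoder features
y = np.zeros((4, 8, 8))  # dummy CTA encoder features
```

Concatenation doubles the channel count fed to the decoder, whereas add and weighted-sum keep it fixed; the weighted sum lets the network trade off the two modalities explicitly.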
104
|
Zhovannik I, Bontempi D, Romita A, Pfaehler E, Primakov S, Dekker A, Bussink J, Traverso A, Monshouwer R. Segmentation Uncertainty Estimation as a Sanity Check for Image Biomarker Studies. Cancers (Basel) 2022; 14:1288. PMID: 35267597; PMCID: PMC8909427; DOI: 10.3390/cancers14051288.
Abstract
Problem. Image biomarker analysis, also known as radiomics, is a tool for tissue characterization and treatment prognosis that relies on routinely acquired clinical images and delineations. Due to the uncertainty in image acquisition, processing, and segmentation (delineation) protocols, radiomics often lacks reproducibility. Radiomics harmonization techniques have been proposed as a solution to reduce these sources of uncertainty and/or their influence on the prognostic model performance. A relevant question is how to estimate the protocol-induced uncertainty of a specific image biomarker, what the effect is on the model performance, and how to optimize the model given the uncertainty. Methods. Two non-small cell lung cancer (NSCLC) cohorts, composed of 421 and 240 patients, respectively, were used for training and testing. Per patient, a Monte Carlo algorithm was used to generate three hundred synthetic contours with a surface dice tolerance measure of less than 1.18 mm with respect to the original GTV. These contours were subsequently used to derive 104 radiomic features, which were ranked on their relative sensitivity to contour perturbation, expressed in the parameter η. The top four (low η) and the bottom four (high η) features were selected for two models based on the Cox proportional hazards model. To investigate the influence of segmentation uncertainty on the prognostic model, we trained and tested the setup in 5000 augmented realizations (using a Monte Carlo sampling method); the log-rank test was used to assess the stratification performance and stability under segmentation uncertainty. Results. Although both the low and high η setups showed significant testing set log-rank p-values (p = 0.01) for the original GTV delineations (without segmentation uncertainty introduced), in the model with a high uncertainty-to-effect ratio only around 30% of the augmented realizations resulted in model performance with p < 0.05 in the test set.
In contrast, the low η setup performed with a log-rank p < 0.05 in 90% of the augmented realizations. Moreover, the high η setup classification was uncertain in its predictions for 50% of the subjects in the testing set (for an 80% agreement rate), whereas the low η setup was uncertain in only 10% of the cases. Discussion. Estimating image biomarker model performance based only on the original GTV segmentation, without considering segmentation uncertainty, may be deceiving. The model might show significant stratification performance but be unstable under delineation variations, which are inherent to manual segmentation. Simulating segmentation uncertainty using the method described allows for more stable image biomarker estimation, selection, and model development. The segmentation uncertainty estimation method described here is universal and can be extended to estimate other protocol uncertainties (such as image acquisition and pre-processing).
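The feature ranking by perturbation sensitivity can be sketched as follows. The abstract does not reproduce the exact definition of η, so the coefficient of variation of each feature across Monte Carlo perturbed contours is used here as an assumed stand-in:

```python
import numpy as np

def sensitivity_eta(features):
    """features: (n_perturbed_contours, n_features) array of radiomic
    feature values recomputed on Monte Carlo perturbed contours.
    Returns one score per feature; low = robust, high = sensitive.
    (Coefficient of variation, an assumed proxy for the paper's η.)"""
    mean = np.abs(features.mean(axis=0))
    std = features.std(axis=0)
    return std / np.where(mean == 0, 1.0, mean)

rng = np.random.default_rng(0)
robust = 100 + 0.1 * rng.standard_normal(300)    # barely changes under perturbation
fragile = 100 + 25.0 * rng.standard_normal(300)  # strongly perturbation-dependent
eta = sensitivity_eta(np.stack([robust, fragile], axis=1))
ranking = np.argsort(eta)  # low-η (stable) features first
```

Selecting the low-η end of such a ranking is what makes the resulting prognostic model stable across delineation variations.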
Affiliation(s)
- Ivan Zhovannik
- Department of Radiation Oncology, Radboud Institute for Health Sciences, Radboud University Medical Center, 6525 GA Nijmegen, The Netherlands; (J.B.); (R.M.)
- Department of Radiation Oncology (Maastro), School for Oncology (GROW), Maastricht University Medical Center, 6229 ET Maastricht, The Netherlands; (D.B.); (A.R.); (E.P.); (A.D.); (A.T.)
- Department of Radiation Oncology, The Netherlands Cancer Institute, 1066 CX Amsterdam, The Netherlands
- Correspondence:
- Dennis Bontempi
- Department of Radiation Oncology (Maastro), School for Oncology (GROW), Maastricht University Medical Center, 6229 ET Maastricht, The Netherlands
- Alessio Romita
- Department of Radiation Oncology (Maastro), School for Oncology (GROW), Maastricht University Medical Center, 6229 ET Maastricht, The Netherlands
- Elisabeth Pfaehler
- Department of Radiation Oncology (Maastro), School for Oncology (GROW), Maastricht University Medical Center, 6229 ET Maastricht, The Netherlands
- University Clinic Augsburg, 86156 Augsburg, Germany
- Sergey Primakov
- The D-Lab, Department of Precision Medicine, GROW—School for Oncology, Maastricht University, 6229 ER Maastricht, The Netherlands
- Andre Dekker
- Department of Radiation Oncology (Maastro), School for Oncology (GROW), Maastricht University Medical Center, 6229 ET Maastricht, The Netherlands
- Johan Bussink
- Department of Radiation Oncology, Radboud Institute for Health Sciences, Radboud University Medical Center, 6525 GA Nijmegen, The Netherlands
- Alberto Traverso
- Department of Radiation Oncology (Maastro), School for Oncology (GROW), Maastricht University Medical Center, 6229 ET Maastricht, The Netherlands
- René Monshouwer
- Department of Radiation Oncology, Radboud Institute for Health Sciences, Radboud University Medical Center, 6525 GA Nijmegen, The Netherlands
105
|
Asbach JC, Singh AK, Matott LS, Le AH. Deep learning tools for the cancer clinic: an open-source framework with head and neck contour validation. Radiat Oncol 2022; 17:28. PMID: 35135569; PMCID: PMC8822676; DOI: 10.1186/s13014-022-01982-y.
Abstract
Background With the rapid growth of deep learning research for medical applications comes the need for clinical personnel to be comfortable and familiar with these techniques. Taking a proven approach, we developed a straightforward open-source framework for producing automatic contours for head and neck planning computed tomography studies using a convolutional neural network (CNN). Methods Anonymized studies of 229 patients treated at our clinic for head and neck cancer from 2014 to 2018 were used to train and validate the network. We trained a separate CNN iteration for each of 11 common organs at risk, and then used data from 19 patients previously set aside as test cases for evaluation. We used a commercial atlas-based automatic contouring tool as a comparative benchmark on these test cases to ensure acceptable CNN performance. For the CNN contours and the atlas-based contours, performance was measured using three quantitative metrics and physician reviews using a survey and quantifiable correction time for each contour. Results The CNN achieved statistically better scores than the atlas-based workflow on the quantitative metrics for 7 of the 11 organs at risk. In the physician review, the CNN contours were more likely to need minor corrections but less likely to need substantial corrections, and the cumulative correction time required was less than for the atlas-based contours for all but two test cases. Conclusions With this validation, we packaged the code framework, the trained CNN parameters, and a no-code, browser-based interface to facilitate reproducibility and expansion of the work. All scripts and files are available in a public GitHub repository and are ready for immediate use under the MIT license. Our work introduces a deep learning tool for automatic contouring that is easy for novice personnel to use. Supplementary Information The online version contains supplementary material available at 10.1186/s13014-022-01982-y.
106
|
Arrarte Terreros N, van Willigen BG, Niekolaas WS, Tolhuisen ML, Brouwer J, Coutinho JM, Beenen LFM, Majoie CBLM, van Bavel E, Marquering HA. Occult blood flow patterns distal to an occluded artery in acute ischemic stroke. J Cereb Blood Flow Metab 2022; 42:292-302. PMID: 34550818; PMCID: PMC8795216; DOI: 10.1177/0271678x211044941.
Abstract
Residual blood flow distal to an arterial occlusion in patients with acute ischemic stroke (AIS) is associated with favorable patient outcome. Both collateral flow and thrombus permeability may contribute to such residual flow. We propose a method for discriminating between these two mechanisms, based on determining the direction of flow in multiple branches distal to the occluding thrombus using dynamic Computed Tomography Angiography (dynamic CTA). We analyzed dynamic CTA data of 30 AIS patients and present patient-specific cases that identify typical blood flow patterns and velocities. We distinguished patterns with anterograde (N = 10), retrograde (N = 9), and both flow directions (N = 11), with a large variability in velocities for each flow pattern. The observed flow patterns reflect the interplay between permeability and collaterals. The presented method characterizes distal flow and provides a tool to study patient-specific distal tissue perfusion.
Affiliation(s)
- Nerea Arrarte Terreros
- Department of Biomedical Engineering and Physics, Amsterdam UMC, location AMC, Amsterdam, the Netherlands
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, location AMC, Amsterdam, the Netherlands
- Correspondence: Nerea Arrarte Terreros, Department of Biomedical Engineering and Physics, Amsterdam UMC, location AMC, Meibergdreef 9, 1011 AZ Amsterdam, the Netherlands.
- Bettine G van Willigen
- Department of Biomedical Engineering and Physics, Amsterdam UMC, location AMC, Amsterdam, the Netherlands
- Cardiovascular Biomechanics, Eindhoven University of Technology, Eindhoven, the Netherlands
- Wera S Niekolaas
- Department of Biomedical Engineering and Physics, Amsterdam UMC, location AMC, Amsterdam, the Netherlands
- Manon L Tolhuisen
- Department of Biomedical Engineering and Physics, Amsterdam UMC, location AMC, Amsterdam, the Netherlands
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, location AMC, Amsterdam, the Netherlands
- Josje Brouwer
- Department of Neurology, Amsterdam UMC, location AMC, Amsterdam, the Netherlands
- Jonathan M Coutinho
- Department of Neurology, Amsterdam UMC, location AMC, Amsterdam, the Netherlands
- Ludo FM Beenen
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, location AMC, Amsterdam, the Netherlands
- Charles BLM Majoie
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, location AMC, Amsterdam, the Netherlands
- Ed van Bavel
- Department of Biomedical Engineering and Physics, Amsterdam UMC, location AMC, Amsterdam, the Netherlands
- Henk A Marquering
- Department of Biomedical Engineering and Physics, Amsterdam UMC, location AMC, Amsterdam, the Netherlands
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, location AMC, Amsterdam, the Netherlands
107
|
Dot G, Schouman T, Dubois G, Rouch P, Gajny L. Fully automatic segmentation of craniomaxillofacial CT scans for computer-assisted orthognathic surgery planning using the nnU-Net framework. Eur Radiol 2022; 32:3639-3648. PMID: 35037088; DOI: 10.1007/s00330-021-08455-y.
Abstract
OBJECTIVES To evaluate the performance of the nnU-Net open-source deep learning framework for automatic multi-task segmentation of craniomaxillofacial (CMF) structures in CT scans obtained for computer-assisted orthognathic surgery. METHODS Four hundred and fifty-three consecutive patients having undergone high-resolution CT scans before orthognathic surgery were randomly distributed among a training/validation cohort (n = 300) and a testing cohort (n = 153). The ground truth segmentations were generated by 2 operators following an industry-certified procedure for use in computer-assisted surgical planning and personalized implant manufacturing. Model performance was assessed by comparing model predictions with ground truth segmentations. Examination of 45 CT scans by an industry expert provided additional evaluation. The model's generalizability was tested on a publicly available dataset of 10 CT scans with ground truth segmentation of the mandible. RESULTS In the test cohort, mean volumetric Dice similarity coefficient (vDSC) and surface Dice similarity coefficient at 1 mm (sDSC) were 0.96 and 0.97 for the upper skull, 0.94 and 0.98 for the mandible, 0.95 and 0.99 for the upper teeth, 0.94 and 0.99 for the lower teeth, and 0.82 and 0.98 for the mandibular canal. Industry expert segmentation approval rates were 93% for the mandible, 89% for the mandibular canal, 82% for the upper skull, 69% for the upper teeth, and 58% for the lower teeth. CONCLUSION While additional efforts are required for the segmentation of dental apices, our results demonstrated the model's reliability in terms of fully automatic segmentation of preoperative orthognathic CT scans. KEY POINTS • The nnU-Net deep learning framework can be trained out-of-the-box to provide robust fully automatic multi-task segmentation of CT scans performed for computer-assisted orthognathic surgery planning. 
• The clinical viability of the trained nnU-Net model is shown on a challenging test dataset of 153 CT scans randomly selected from clinical practice, showing metallic artifacts and diverse anatomical deformities. • Commonly used biomedical segmentation evaluation metrics (volumetric and surface Dice similarity coefficient) do not always match industry expert evaluation in the case of more demanding clinical applications.
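The surface Dice similarity coefficient at 1 mm used in this study scores agreement of segmentation boundaries rather than volumes: the fraction of boundary points of each mask lying within the tolerance of the other mask's boundary. A brute-force sketch on binary masks (isotropic unit voxels assumed; production code would use distance transforms rather than pairwise distances):

```python
import numpy as np

def surface_voxels(mask):
    """Coordinates of foreground voxels with at least one background
    face-neighbour (array borders count as background)."""
    m = np.asarray(mask, dtype=bool)
    interior = m.copy()
    for axis in range(m.ndim):
        for shift in (1, -1):
            shifted = np.roll(m, shift, axis=axis)
            edge = [slice(None)] * m.ndim
            edge[axis] = 0 if shift == 1 else -1
            shifted[tuple(edge)] = False  # suppress wrap-around
            interior &= shifted
    return np.argwhere(m & ~interior).astype(float)

def surface_dice(mask_a, mask_b, tol):
    """Fraction of boundary voxels lying within 'tol' (voxel units) of the
    other mask's boundary, pooled over both masks."""
    sa, sb = surface_voxels(mask_a), surface_voxels(mask_b)
    if len(sa) == 0 or len(sb) == 0:
        return float(len(sa) == len(sb))
    d = np.sqrt(((sa[:, None, :] - sb[None, :, :]) ** 2).sum(-1))
    close = (d.min(axis=1) <= tol).sum() + (d.min(axis=0) <= tol).sum()
    return close / (len(sa) + len(sb))

a = np.zeros((12, 12), dtype=bool); a[2:8, 2:8] = True
b = np.zeros((12, 12), dtype=bool); b[3:9, 3:9] = True  # a shifted by one voxel
```

Because only boundary proximity is scored, a mask can have high volumetric Dice yet mediocre surface Dice when its boundary wanders, which is why the two metrics diverge for thin structures such as the mandibular canal.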
Affiliation(s)
- Gauthier Dot
- Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, 151 Boulevard de l'Hôpital, 75013 Paris, France
- Universite de Paris, AP-HP, Hopital Pitie-Salpetriere, Service d'Odontologie, Paris, France
- Thomas Schouman
- Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, 151 Boulevard de l'Hôpital, 75013 Paris, France
- Medecine Sorbonne Universite, AP-HP, Hopital Pitie-Salpetriere, Service de Chirurgie Maxillo-Faciale, Paris, France
- Guillaume Dubois
- Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, 151 Boulevard de l'Hôpital, 75013 Paris, France
- Materialise, Malakoff, France
- Philippe Rouch
- Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, 151 Boulevard de l'Hôpital, 75013 Paris, France
- EPF-Graduate School of Engineering, Sceaux, France
- Laurent Gajny
- Institut de Biomecanique Humaine Georges Charpak, Arts et Metiers Institute of Technology, 151 Boulevard de l'Hôpital, 75013 Paris, France
108
|
Wahid KA, Ahmed S, He R, van Dijk LV, Teuwen J, McDonald BA, Salama V, Mohamed AS, Salzillo T, Dede C, Taku N, Lai SY, Fuller CD, Naser MA. Evaluation of deep learning-based multiparametric MRI oropharyngeal primary tumor auto-segmentation and investigation of input channel effects: Results from a prospective imaging registry. Clin Transl Radiat Oncol 2022; 32:6-14. PMID: 34765748; PMCID: PMC8570930; DOI: 10.1016/j.ctro.2021.10.003.
Abstract
BACKGROUND/PURPOSE Oropharyngeal cancer (OPC) primary gross tumor volume (GTVp) segmentation is crucial for radiotherapy. Multiparametric MRI (mpMRI) is increasingly used for OPC adaptive radiotherapy but relies on manual segmentation. Therefore, we constructed mpMRI deep learning (DL) OPC GTVp auto-segmentation models and determined the impact of input channels on segmentation performance. MATERIALS/METHODS GTVp ground truth segmentations were manually generated for 30 OPC patients from a clinical trial. We evaluated five mpMRI input channels (T2, T1, ADC, Ktrans, Ve). 3D Residual U-net models were developed and assessed using leave-one-out cross-validation. A baseline T2 model was compared to mpMRI models (T2 + T1, T2 + ADC, T2 + Ktrans, T2 + Ve, all five channels [ALL]) primarily using the Dice similarity coefficient (DSC). False-negative DSC (FND), false-positive DSC, sensitivity, positive predictive value, surface DSC, Hausdorff distance (HD), 95% HD, and mean surface distance were also assessed. For the best model, ground truth and DL-generated segmentations were compared through a blinded Turing test using three physician observers. RESULTS Models yielded mean DSCs from 0.71 ± 0.12 (ALL) to 0.73 ± 0.12 (T2 + T1). Compared to the T2 model, performance was significantly improved for FND, sensitivity, surface DSC, HD, and 95% HD for the T2 + T1 model (p < 0.05) and for FND for the T2 + Ve and ALL models (p < 0.05). No model demonstrated significant correlations between tumor size and DSC (p > 0.05). Most models demonstrated significant correlations between tumor size and HD or Surface DSC (p < 0.05), except those that included ADC or Ve as input channels (p > 0.05). On average, there were no significant differences between ground truth and DL-generated segmentations for all observers (p > 0.05). CONCLUSION DL using mpMRI provides reasonably accurate segmentations of OPC GTVp that may be comparable to ground truth segmentations generated by clinical experts. 
Incorporating additional mpMRI channels may increase the performance of FND, sensitivity, surface DSC, HD, and 95% HD, and improve model robustness to tumor size.
Affiliation(s)
- Kareem A. Wahid
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX USA
- Sara Ahmed
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX USA
- Renjie He
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX USA
- Lisanne V. van Dijk
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX USA
- Jonas Teuwen
- Department of Medical Imaging, Radboud University Medical Centre, Nijmegen, The Netherlands
- Brigid A. McDonald
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX USA
- Vivian Salama
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX USA
- Abdallah S.R. Mohamed
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX USA
- Travis Salzillo
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX USA
- Cem Dede
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX USA
- Nicolette Taku
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX USA
- Stephen Y. Lai
- Department of Head and Neck Surgery, University of Texas MD Anderson Cancer Center, Houston, TX USA
- Clifton D. Fuller
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX USA
- Mohamed A. Naser
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX USA
109
|
Sharkey MJ, Taylor JC, Alabed S, Dwivedi K, Karunasaagarar K, Johns CS, Rajaram S, Garg P, Alkhanfar D, Metherall P, O'Regan DP, van der Geest RJ, Condliffe R, Kiely DG, Mamalakis M, Swift AJ. Fully automatic cardiac four chamber and great vessel segmentation on CT pulmonary angiography using deep learning. Front Cardiovasc Med 2022; 9:983859. PMID: 36225963; PMCID: PMC9549370; DOI: 10.3389/fcvm.2022.983859.
Abstract
Introduction Computed tomography pulmonary angiography (CTPA) is an essential test in the work-up of suspected pulmonary vascular disease including pulmonary hypertension and pulmonary embolism. Cardiac and great vessel assessments on CTPA are based on visual assessment and manual measurements which are known to have poor reproducibility. The primary aim of this study was to develop an automated whole heart segmentation (four chamber and great vessels) model for CTPA. Methods A nine structure semantic segmentation model of the heart and great vessels was developed using 200 patients (80/20/100 training/validation/internal testing) with testing in 20 external patients. Ground truth segmentations were performed by consultant cardiothoracic radiologists. Failure analysis was conducted in 1,333 patients with mixed pulmonary vascular disease. Segmentation was achieved using deep learning via a convolutional neural network. Volumetric imaging biomarkers were correlated with invasive haemodynamics in the test cohort. Results Dice similarity coefficients (DSC) for segmented structures were in the range 0.58-0.93 for both the internal and external test cohorts. The left and right ventricle myocardium segmentations had lower DSC of 0.83 and 0.58 respectively while all other structures had DSC >0.89 in the internal test cohort and >0.87 in the external test cohort. Interobserver comparison found that the left and right ventricle myocardium segmentations showed the most variation between observers: mean DSC (range) of 0.795 (0.785-0.801) and 0.520 (0.482-0.542) respectively. Right ventricle myocardial volume had strong correlation with mean pulmonary artery pressure (Spearman's correlation coefficient = 0.7). The volume of segmented cardiac structures by deep learning had higher or equivalent correlation with invasive haemodynamics than by manual segmentations. 
The model demonstrated good generalisability to different vendors and hospitals with similar performance in the external test cohort. The failure rates in mixed pulmonary vascular disease were low (<3.9%) indicating good generalisability of the model to different diseases. Conclusion Fully automated segmentation of the four cardiac chambers and great vessels has been achieved in CTPA with high accuracy and low rates of failure. DL volumetric biomarkers can potentially improve CTPA cardiac assessment and invasive haemodynamic prediction.
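The Spearman correlation reported between right ventricular myocardial volume and mean pulmonary artery pressure is a rank-based statistic. A minimal sketch with made-up illustrative numbers (in practice `scipy.stats.spearmanr` would be used; tie handling is omitted here):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    Assumes no tied values (ties would need average ranks)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx ** 2).sum() * (ry ** 2).sum()))

# Hypothetical values: segmented RV myocardial volume (mL) vs mPAP (mmHg)
volume = np.array([18.0, 25.0, 31.0, 44.0, 52.0])
pressure = np.array([21.0, 24.0, 38.0, 35.0, 55.0])
rho = spearman_rho(volume, pressure)  # monotone apart from one swapped pair
```

Because only ranks enter the statistic, it is robust to the skewed distributions typical of invasive haemodynamic measurements.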
Affiliation(s)
- Michael J Sharkey
- Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield, United Kingdom
- 3D Imaging Lab, Sheffield Teaching Hospitals NHSFT, Sheffield, United Kingdom
- Jonathan C Taylor
- 3D Imaging Lab, Sheffield Teaching Hospitals NHSFT, Sheffield, United Kingdom
- Samer Alabed
- Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield, United Kingdom
- Krit Dwivedi
- Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield, United Kingdom
- Insigneo Institute for in Silico Medicine, University of Sheffield, Sheffield, United Kingdom
- Kavitasagary Karunasaagarar
- Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield, United Kingdom
- Radiology Department, Sheffield Teaching Hospitals NHSFT, Sheffield, United Kingdom
- Christopher S Johns
- Radiology Department, Sheffield Teaching Hospitals NHSFT, Sheffield, United Kingdom
- Smitha Rajaram
- Radiology Department, Sheffield Teaching Hospitals NHSFT, Sheffield, United Kingdom
- Pankaj Garg
- Norwich Medical School, University of East Anglia, Norwich, United Kingdom
- Dheyaa Alkhanfar
- Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield, United Kingdom
- Peter Metherall
- 3D Imaging Lab, Sheffield Teaching Hospitals NHSFT, Sheffield, United Kingdom
- Declan P O'Regan
- MRC London Institute of Medical Sciences, Imperial College London, London, United Kingdom
- Rob J van der Geest
- Department of Radiology, Leiden University Medical Center, Leiden, Netherlands
- Robin Condliffe
- Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield, United Kingdom
- Sheffield Pulmonary Vascular Disease Unit, Sheffield Teaching Hospitals NHS Trust, Sheffield, United Kingdom
- David G Kiely
- Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield, United Kingdom
- Insigneo Institute for in Silico Medicine, University of Sheffield, Sheffield, United Kingdom
- Sheffield Pulmonary Vascular Disease Unit, Sheffield Teaching Hospitals NHS Trust, Sheffield, United Kingdom
- Michail Mamalakis
- Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield, United Kingdom
- Insigneo Institute for in Silico Medicine, University of Sheffield, Sheffield, United Kingdom
- Department of Computer Science, University of Sheffield, Sheffield, United Kingdom
- Andrew J Swift
- Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield, United Kingdom
- Insigneo Institute for in Silico Medicine, University of Sheffield, Sheffield, United Kingdom
Collapse
|
110
Yoganathan SA, Zhang R. Segmentation of Organs and Tumor within Brain Magnetic Resonance Images Using K-Nearest Neighbor Classification. J Med Phys 2022; 47:40-49. [PMID: 35548028] [PMCID: PMC9084578] [DOI: 10.4103/jmp.jmp_87_21]
Abstract
PURPOSE To fully exploit the benefits of magnetic resonance imaging (MRI) for radiotherapy, it is desirable to develop segmentation methods that delineate patients' MR images quickly and accurately. The purpose of this work is to develop a semi-automatic method to segment organs and tumor within the brain on standard T1- and T2-weighted MR images. METHODS AND MATERIALS Twelve brain cancer patients were retrospectively included in this study, and a simple rigid registration was used to align all images to the same spatial coordinates. Regions of interest were created for organ and tumor segmentations. The K-nearest neighbor (KNN) classification algorithm was used to encode the knowledge of previous segmentations using 15 image features (T1 and T2 image intensities, 4 Gabor-filtered images, 6 image gradients, and 3 Cartesian coordinates), and the trained models were used to predict organ and tumor contours. Dice similarity coefficient (DSC), normalized surface dice, sensitivity, specificity, and Hausdorff distance were used to evaluate the performance of the segmentations. RESULTS Our semi-automatic segmentations matched the ground truths closely. The mean DSC value was between 0.49 (optic chiasm) and 0.89 (right eye) for organ segmentations and was 0.87 for tumor segmentation. The overall performance of our method is comparable or superior to previous work, and the accuracy of our semi-automatic segmentation is generally better for large-volume objects. CONCLUSION The proposed KNN method can accurately segment organs and tumor using standard brain MR images, provides fast and accurate image processing and planning tools, and paves the way for clinical implementation of MRI-guided and adaptive radiotherapy.
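The per-voxel KNN idea described above can be sketched in a few lines. This is an illustrative reimplementation, not the authors' code: synthetic two-class feature vectors stand in for the paper's 15 image features, and the class separation is chosen arbitrarily for the demo.

```python
import numpy as np

def knn_predict(X_train, y_train, X_new, k=5):
    """Label each new voxel by majority vote of its k nearest
    training voxels in feature space (Euclidean distance)."""
    d = np.linalg.norm(X_new[:, None, :] - X_train[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    return (y_train[nearest].mean(axis=1) >= 0.5).astype(int)

rng = np.random.default_rng(0)
n = 500  # training voxels per class, 15 features each (as in the paper)
organ = rng.normal(1.0, 1.0, size=(n, 15))        # hypothetical "organ" cluster
background = rng.normal(-1.0, 1.0, size=(n, 15))  # hypothetical "background" cluster
X = np.vstack([organ, background])
y = np.array([1] * n + [0] * n)

# Voxels drawn from the organ distribution should be labeled 1.
pred = knn_predict(X, y, rng.normal(1.0, 1.0, size=(20, 15)))
```

In practice a brute-force distance matrix like this is only feasible for small patches; a KD-tree or approximate nearest-neighbor index is the usual choice for whole-volume classification.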
Affiliation(s)
- S. A. Yoganathan
- Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana, USA
- Rui Zhang
- Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana, USA; Department of Radiation Oncology, Mary Bird Perkins Cancer Center, Baton Rouge, Louisiana, USA
111
Beekman C, van Beek S, Stam J, Sonke JJ, Remeijer P. Improving predictive CTV segmentation on CT and CBCT for cervical cancer by diffeomorphic registration of a prior. Med Phys 2021; 49:1701-1711. [PMID: 34964986] [DOI: 10.1002/mp.15421]
Abstract
PURPOSE Automatic cervix-uterus segmentation of the clinical target volume (CTV) on CT and cone beam CT (CBCT) scans is challenged by the limited visibility and the non-anatomical definition of certain border regions. We study the potential performance gain of convolutional neural networks when the segmentation predictions are regularized to be diffeomorphic deformations of a segmentation prior. METHODS We introduce a 3D convolutional neural network (CNN) that segments the target scan by joint voxel-wise classification and registration of a given prior. We compare this network to two other 3D baseline models: one treating segmentation as a classification problem (segmentation-only), the other as a registration problem (deformation-only). For reference, and to highlight the benefits of a 3D model, these models are also benchmarked against a 2D segmentation model. Network performances are reported for CT and CBCT segmentation of the cervix-uterus CTV. We train the networks on data of 84 patients. The prior is provided by the CTV segmentation of a planning CT; repeat CT or CBCT scans constitute the target scans to be segmented. RESULTS All 3D models outperformed the 2D segmentation model. For CT segmentation, combining classification and registration in the proposed joint model proved beneficial, achieving a Dice score of 0.87 and a mean squared error (MSE) of the surface distance below 1.7 mm. No such synergy was observed for CBCT segmentation, for which the joint and the deformation-only models performed similarly, achieving a Dice score of about 0.80 and an MSE surface distance of 2.5 mm. However, the segmentation-only model performed notably worse in this low-contrast regime. Visual inspection revealed that this performance drop translated into geometric inconsistencies between the prior and the target segmentation. Such inconsistencies were not observed for the deformation-based models. CONCLUSION Constraining the solution space of admissible segmentation predictions to those reachable by a diffeomorphic deformation of the prior proved beneficial, as it improved geometric consistency. Especially for CBCT, with its poor soft-tissue contrast, this type of regularization becomes important, as shown by quantitative and qualitative evaluation.
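The core operation behind deformation-based segmentation, resampling a prior mask through a displacement field, can be sketched as follows. This is a 2D toy with a hand-built rigid shift, not the paper's learned 3D diffeomorphic field:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_mask(prior, displacement):
    """Resample a binary prior through a dense displacement field.
    displacement has shape (2, H, W): per-pixel (dy, dx) offsets read
    from the output grid back into the prior."""
    h, w = prior.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([yy + displacement[0], xx + displacement[1]])
    # Linear interpolation of the prior, thresholded back to binary.
    return map_coordinates(prior.astype(float), coords, order=1) > 0.5

prior = np.zeros((32, 32)); prior[8:24, 8:24] = 1.0  # square "CTV" prior

# A rigid 3-pixel shift expressed as a dense displacement field.
shift = np.stack([np.full((32, 32), 3.0), np.zeros((32, 32))])
warped = warp_mask(prior, shift)
```

Because every output pixel samples the prior somewhere, the warped result is always a deformed copy of the prior, which is exactly the geometric-consistency property the paper exploits.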
Affiliation(s)
- Chris Beekman
- Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Suzanne van Beek
- Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Jikke Stam
- Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Jan-Jakob Sonke
- Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Peter Remeijer
- Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
112
Oreiller V, Andrearczyk V, Jreige M, Boughdad S, Elhalawani H, Castelli J, Vallières M, Zhu S, Xie J, Peng Y, Iantsen A, Hatt M, Yuan Y, Ma J, Yang X, Rao C, Pai S, Ghimire K, Feng X, Naser MA, Fuller CD, Yousefirizi F, Rahmim A, Chen H, Wang L, Prior JO, Depeursinge A. Head and neck tumor segmentation in PET/CT: The HECKTOR challenge. Med Image Anal 2021; 77:102336. [PMID: 35016077] [DOI: 10.1016/j.media.2021.102336]
Abstract
This paper presents the post-analysis of the first edition of the HEad and neCK TumOR (HECKTOR) challenge. The challenge was held as a satellite event of the 23rd International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2020 and was the first of its kind focusing on lesion segmentation in combined FDG-PET and CT image modalities. The challenge's task was the automatic segmentation of the Gross Tumor Volume (GTV) of Head and Neck (H&N) oropharyngeal primary tumors in FDG-PET/CT images. To this end, the participants were given a training set of 201 cases from four different centers, and their methods were tested on a held-out set of 53 cases from a fifth center. The methods were ranked according to the Dice Similarity Coefficient (DSC) averaged across all test cases. An additional inter-observer agreement study was organized to assess the difficulty of the task from a human perspective. Sixty-four teams registered for the challenge, among which 10 provided a paper detailing their approach. The best method obtained an average DSC of 0.7591, a large improvement over the proposed baseline method and the inter-observer agreement, which were associated with DSCs of 0.6610 and 0.61, respectively. The automatic methods successfully leveraged the wealth of metabolic and structural properties of the combined PET and CT modalities, significantly outperforming the human inter-observer agreement level, semi-automatic thresholding based on PET images, and other single-modality methods. This promising performance is a step toward large-scale radiomics studies in H&N cancer, obviating the need for error-prone and time-consuming manual delineation of GTVs.
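Since methods were ranked by the DSC averaged over test cases, a quick reference implementation of the metric may help (a sketch, not the challenge's actual evaluation code):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|), defined as 1 when both are empty."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * (a & b).sum() / denom if denom else 1.0

gt = np.zeros((10, 10)); gt[2:8, 2:8] = 1  # 36-voxel ground-truth GTV
pr = np.zeros((10, 10)); pr[4:8, 2:8] = 1  # 24-voxel prediction, all inside gt
score = dice(gt, pr)                       # 2*24 / (36+24) = 0.8
```

Averaging this per-case score over a test set gives the challenge's ranking criterion.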
Affiliation(s)
- Valentin Oreiller
- Institute of Information Systems, University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland; Department of Nuclear Medicine and Molecular Imaging, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland
- Vincent Andrearczyk
- Institute of Information Systems, University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland
- Mario Jreige
- Department of Nuclear Medicine and Molecular Imaging, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland
- Sarah Boughdad
- Department of Nuclear Medicine and Molecular Imaging, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland
- Hesham Elhalawani
- Department of Radiation Oncology, Brigham and Women's Hospital and Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
- Joel Castelli
- Radiotherapy Department, Cancer Institute Eugène Marquis, Rennes, France
- Martin Vallières
- Department of Computer Science, Université de Sherbrooke, Sherbrooke, Québec, Canada
- Simeng Zhu
- Department of Radiation Oncology, Henry Ford Cancer Institute, Detroit, MI, USA
- Juanying Xie
- School of Computer Science, Shaanxi Normal University, Xi'an 710119, PR China
- Ying Peng
- School of Computer Science, Shaanxi Normal University, Xi'an 710119, PR China
- Andrei Iantsen
- LaTIM, INSERM, UMR 1101, University Brest, Brest, France
- Mathieu Hatt
- LaTIM, INSERM, UMR 1101, University Brest, Brest, France
- Yading Yuan
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Jun Ma
- Department of Mathematics, Nanjing University of Science and Technology, Jiangsu, China
- Xiaoping Yang
- Department of Mathematics, Nanjing University, Jiangsu, China
- Chinmay Rao
- Department of Radiation Oncology (Maastro), GROW School for Oncology, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Suraj Pai
- Department of Radiation Oncology (Maastro), GROW School for Oncology, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Xue Feng
- Carina Medical, Lexington, KY 40513, USA; Department of Biomedical Engineering, University of Virginia, Charlottesville, VA 22903, USA
- Mohamed A Naser
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030, USA
- Clifton D Fuller
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030, USA
- Fereshteh Yousefirizi
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada
- Arman Rahmim
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada
- Huai Chen
- Department of Automation, Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai 200240, People's Republic of China
- Lisheng Wang
- Department of Automation, Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai 200240, People's Republic of China
- John O Prior
- Department of Nuclear Medicine and Molecular Imaging, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland
- Adrien Depeursinge
- Institute of Information Systems, University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland; Department of Nuclear Medicine and Molecular Imaging, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland
113
Kalantar R, Lin G, Winfield JM, Messiou C, Lalondrelle S, Blackledge MD, Koh DM. Automatic Segmentation of Pelvic Cancers Using Deep Learning: State-of-the-Art Approaches and Challenges. Diagnostics (Basel) 2021; 11:1964. [PMID: 34829310] [PMCID: PMC8625809] [DOI: 10.3390/diagnostics11111964]
Abstract
The recent rise of deep learning (DL) and its promising capabilities in capturing non-explicit detail from large datasets have attracted substantial research attention in the field of medical image processing. DL provides grounds for technological development of computer-aided diagnosis and segmentation in radiology and radiation oncology. Amongst the anatomical locations where recent auto-segmentation algorithms have been employed, the pelvis remains one of the most challenging due to large intra- and inter-patient soft-tissue variabilities. This review provides a comprehensive, non-systematic and clinically-oriented overview of 74 DL-based segmentation studies, published between January 2016 and December 2020, for bladder, prostate, cervical and rectal cancers on computed tomography (CT) and magnetic resonance imaging (MRI), highlighting the key findings, challenges and limitations.
Affiliation(s)
- Reza Kalantar
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Gigin Lin
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou and Chang Gung University, 5 Fuhsing St., Guishan, Taoyuan 333, Taiwan
- Jessica M. Winfield
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
- Christina Messiou
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
- Susan Lalondrelle
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
- Matthew D. Blackledge
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Dow-Mu Koh
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
114
Garrett Fernandes M, Bussink J, Stam B, Wijsman R, Schinagl DAX, Monshouwer R, Teuwen J. Deep learning model for automatic contouring of cardiovascular substructures on radiotherapy planning CT images: Dosimetric validation and reader study based clinical acceptability testing. Radiother Oncol 2021; 165:52-59. [PMID: 34688808] [DOI: 10.1016/j.radonc.2021.10.008]
Abstract
BACKGROUND AND PURPOSE Large radiotherapy (RT) planning imaging datasets with consistently contoured cardiovascular structures are essential for robust cardiac radiotoxicity research in thoracic cancers. This study aims to develop and validate a highly accurate automatic contouring model for the heart, cardiac chambers, and great vessels on RT planning computed tomography (CT) images that can be used for dose-volume parameter estimation. MATERIALS AND METHODS A neural network model was trained using a dataset of 127 expertly contoured planning CT images from RT treatment of locally advanced non-small-cell lung cancer (NSCLC) patients. Evaluation of geometric accuracy and quality of dosimetric parameter estimation was performed on 50 independent scans, with and without contrast enhancement. The model was further evaluated regarding the clinical acceptability of the contours in 99 scans randomly sampled from the RTOG-0617 dataset by three experienced radiation oncologists. RESULTS Median surface Dice at 3 mm tolerance for all dedicated thoracic structures was 90% in the test set. The median absolute difference between mean doses computed with model contours and expert contours was 0.45 Gy, averaged over all structures. The mean clinical acceptability rate by majority vote in the RTOG-0617 scans was 91%. CONCLUSION This model can be used to contour the heart, cardiac chambers, and great vessels in large datasets of RT planning thoracic CT images accurately, quickly, and consistently. Additionally, the model can serve as a time-saving contouring tool in clinical practice.
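The dosimetric check described above, comparing a dose-volume parameter computed from model contours against expert contours, reduces to masking the planning dose grid. A toy sketch with hypothetical arrays (not the study's data or code):

```python
import numpy as np

def mean_dose(dose, mask):
    """Mean dose (Gy) over the voxels of a binary structure mask."""
    return float(dose[mask].mean())

rng = np.random.default_rng(1)
dose = rng.uniform(0.0, 60.0, size=(16, 16, 16))  # toy planning dose grid

expert = np.zeros((16, 16, 16), dtype=bool)
expert[4:12, 4:12, 4:12] = True                   # "expert" heart contour
auto = np.roll(expert, 1, axis=0)                 # slightly shifted model contour

# Absolute mean-dose difference between model and expert contours (Gy),
# the quantity the study reports with a median of 0.45 Gy.
delta = abs(mean_dose(dose, auto) - mean_dose(dose, expert))
```

Repeating this per structure and per scan, then taking the median absolute difference, reproduces the shape of the study's dosimetric evaluation.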
Affiliation(s)
- Miguel Garrett Fernandes
- Department of Radiation Oncology, Radboud Institute for Health Sciences, Radboud University Medical Center, Nijmegen, The Netherlands; Department of Medical Imaging, Radboud Institute for Health Sciences, Radboud University Medical Center, Nijmegen, The Netherlands
- Johan Bussink
- Department of Radiation Oncology, Radboud Institute for Health Sciences, Radboud University Medical Center, Nijmegen, The Netherlands
- Barbara Stam
- Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Robin Wijsman
- Department of Radiation Oncology, University Medical Center Groningen, Groningen, The Netherlands
- Dominic A X Schinagl
- Department of Radiation Oncology, Radboud Institute for Health Sciences, Radboud University Medical Center, Nijmegen, The Netherlands
- René Monshouwer
- Department of Radiation Oncology, Radboud Institute for Health Sciences, Radboud University Medical Center, Nijmegen, The Netherlands
- Jonas Teuwen
- Department of Medical Imaging, Radboud Institute for Health Sciences, Radboud University Medical Center, Nijmegen, The Netherlands; Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
115
Pérez-García F, Sparks R, Ourselin S. TorchIO: A Python library for efficient loading, preprocessing, augmentation and patch-based sampling of medical images in deep learning. Comput Methods Programs Biomed 2021; 208:106236. [PMID: 34311413] [PMCID: PMC8542803] [DOI: 10.1016/j.cmpb.2021.106236]
Abstract
BACKGROUND AND OBJECTIVE Processing of medical images such as MRI or CT presents different challenges compared to the RGB images typically used in computer vision: a lack of labels for large datasets, high computational costs, and the need for metadata to describe the physical properties of voxels. Data augmentation is used to artificially increase the size of the training datasets, and training with image subvolumes or patches decreases the need for computational power. Spatial metadata needs to be carefully taken into account in order to ensure correct alignment and orientation of volumes. METHODS We present TorchIO, an open-source Python library for efficient loading, preprocessing, augmentation and patch-based sampling of medical images for deep learning. TorchIO follows the style of PyTorch and integrates standard medical image processing libraries to efficiently process images during the training of neural networks. TorchIO transforms can be easily composed, reproduced, traced and extended. Most transforms can be inverted, making the library suitable for test-time augmentation and estimation of aleatoric uncertainty in the context of segmentation. We provide multiple generic preprocessing and augmentation operations as well as simulation of MRI-specific artifacts. RESULTS Source code, comprehensive tutorials and extensive documentation for TorchIO can be found at http://torchio.rtfd.io/. The package can be installed from the Python Package Index (PyPI) by running pip install torchio. It includes a command-line interface which allows users to apply transforms to image files without using Python. Additionally, we provide a graphical user interface within a TorchIO extension in 3D Slicer to visualize the effects of transforms. CONCLUSION TorchIO was developed to help researchers standardize medical image processing pipelines and allow them to focus on their deep learning experiments. It encourages good open-science practices, as it supports experiment reproducibility and is version-controlled so that the software can be cited precisely. Due to its modularity, the library is compatible with other frameworks for deep learning with medical images.
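The composable, invertible transform design the abstract describes can be illustrated with a minimal plain-Python pattern. This mimics the idea only; it is not TorchIO's actual API (see torchio.rtfd.io for that):

```python
import numpy as np

class ZNormalize:
    """Zero-mean / unit-variance scaling; invertible given saved stats."""
    def __call__(self, img):
        self.mu, self.sigma = img.mean(), img.std()
        return (img - self.mu) / self.sigma
    def inverse(self, img):
        return img * self.sigma + self.mu

class Flip:
    """Flip along the first axis; it is its own inverse."""
    def __call__(self, img):
        return img[::-1]
    def inverse(self, img):
        return img[::-1]

class Compose:
    """Apply transforms in order; invert by undoing them in reverse."""
    def __init__(self, transforms):
        self.transforms = transforms
    def __call__(self, img):
        for t in self.transforms:
            img = t(img)
        return img
    def inverse(self, img):
        for t in reversed(self.transforms):
            img = t.inverse(img)
        return img

pipeline = Compose([ZNormalize(), Flip()])
img = np.arange(24.0).reshape(2, 3, 4)
restored = pipeline.inverse(pipeline(img))  # round-trips to the input
```

Invertibility is what enables test-time augmentation for segmentation: predictions on augmented inputs can be mapped back to the original image space and aggregated.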
Affiliation(s)
- Fernando Pérez-García
- Department of Medical Physics and Biomedical Engineering, University College London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, UK; School of Biomedical Engineering & Imaging Sciences (BMEIS), King's College London, UK
- Rachel Sparks
- School of Biomedical Engineering & Imaging Sciences (BMEIS), King's College London, UK
- Sébastien Ourselin
- School of Biomedical Engineering & Imaging Sciences (BMEIS), King's College London, UK
117
Sugino T, Kawase T, Onogi S, Kin T, Saito N, Nakajima Y. Loss Weightings for Improving Imbalanced Brain Structure Segmentation Using Fully Convolutional Networks. Healthcare (Basel) 2021; 9:938. [PMID: 34442075] [PMCID: PMC8393549] [DOI: 10.3390/healthcare9080938]
Abstract
Brain structure segmentation on magnetic resonance (MR) images is important for various clinical applications and has been automated using fully convolutional networks, but it suffers from the class imbalance problem. To address this problem, we investigated how loss weighting strategies work for brain structure segmentation tasks with different class imbalance situations on MR images. In this study, we adopted segmentation of the cerebrum, cerebellum, brainstem, and blood vessels from MR cisternography and angiography images as the target tasks. We used a U-net architecture with cross-entropy and Dice loss functions as a baseline and evaluated the effect of the following loss weighting strategies: inverse frequency weighting, median inverse frequency weighting, focal weighting, distance map-based weighting, and distance penalty term-based weighting. In the experiments, the Dice loss function with focal weighting showed the best performance, with a high average Dice score of 92.8% in the binary-class segmentation tasks, while the cross-entropy loss function with distance map-based weighting achieved Dice scores of up to 93.1% in the multi-class segmentation tasks. The results suggest that the distance map-based and focal weightings can boost the performance of the cross-entropy and Dice loss functions, respectively, in class-imbalanced segmentation tasks.
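Two of the weighting strategies evaluated, inverse-frequency class weights and focal modulation, can be sketched in numpy. This illustrates the formulas only, not the authors' training code:

```python
import numpy as np

def inverse_frequency_weights(labels, n_classes):
    """Class weights proportional to 1 / voxel-label frequency,
    normalized to sum to one: rare classes get larger weights."""
    freq = np.bincount(labels.ravel(), minlength=n_classes) / labels.size
    w = 1.0 / np.maximum(freq, 1e-12)
    return w / w.sum()

def focal_ce(p_true, gamma=2.0):
    """Focal-weighted cross-entropy on the true-class probabilities:
    the (1 - p)**gamma factor down-weights easy, well-classified voxels."""
    p = np.clip(p_true, 1e-12, 1.0)
    return float(np.mean((1.0 - p) ** gamma * -np.log(p)))

labels = np.array([0] * 90 + [1] * 10)    # 9:1 imbalanced voxel labels
w = inverse_frequency_weights(labels, 2)  # rare class gets 9x the weight
```

In a segmentation loss, these per-class weights multiply the per-voxel cross-entropy terms, while the focal factor is applied per voxel from the network's predicted probabilities.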
Affiliation(s)
- Takaaki Sugino
- Department of Biomedical Information, Institute of Biomaterials and Bioengineering, Tokyo Medical and Dental University, Tokyo 101-0062, Japan
- Toshihiro Kawase
- Department of Biomedical Information, Institute of Biomaterials and Bioengineering, Tokyo Medical and Dental University, Tokyo 101-0062, Japan
- Shinya Onogi
- Department of Biomedical Information, Institute of Biomaterials and Bioengineering, Tokyo Medical and Dental University, Tokyo 101-0062, Japan
- Taichi Kin
- Department of Neurosurgery, Graduate School of Medicine, The University of Tokyo, Tokyo 113-0033, Japan
- Nobuhito Saito
- Department of Neurosurgery, Graduate School of Medicine, The University of Tokyo, Tokyo 113-0033, Japan
- Yoshikazu Nakajima
- Department of Biomedical Information, Institute of Biomaterials and Bioengineering, Tokyo Medical and Dental University, Tokyo 101-0062, Japan
118
Qiu B, van der Wel H, Kraeima J, Glas HH, Guo J, Borra RJH, Witjes MJH, van Ooijen PMA. Automatic Segmentation of Mandible from Conventional Methods to Deep Learning-A Review. J Pers Med 2021; 11:629. [PMID: 34357096] [PMCID: PMC8307673] [DOI: 10.3390/jpm11070629]
Abstract
Medical imaging techniques, such as (cone beam) computed tomography and magnetic resonance imaging, have proven to be a valuable component of oral and maxillofacial surgery (OMFS). Accurate segmentation of the mandible from head and neck (H&N) scans is an important step in building a personalized 3D digital mandible model for 3D printing and treatment planning in OMFS. Segmented mandible structures are used to effectively visualize mandible volumes and to quantitatively evaluate particular mandible properties. However, mandible segmentation is challenging for both clinicians and researchers, due to complex structures and high-attenuation materials, such as teeth, fillings, or metal implants, which easily lead to high noise and strong artifacts during scanning. Moreover, the size and shape of the mandible vary considerably between individuals. Mandible segmentation is therefore a tedious and time-consuming task that requires adequate training to be performed properly. With the advancement of computer vision approaches, researchers have developed several algorithms to automatically segment the mandible over the last two decades. The objective of this review is to present the available fully automatic and semi-automatic segmentation methods for the mandible published in scientific articles. This review provides clinicians and researchers in this field with a clear description of these scientific advancements to help develop novel automatic methods for clinical applications.
Affiliation(s)
- Bingjiang Qiu
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Hylke van der Wel
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Joep Kraeima
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Haye Hendrik Glas
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Jiapan Guo
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Ronald J. H. Borra
- Medical Imaging Center (MIC), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Max Johannes Hendrikus Witjes
- 3D Lab, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Department of Oral and Maxillofacial Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Peter M. A. van Ooijen
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands