1. Guimond S, Alftieh A, Devenyi GA, Mike L, Chakravarty MM, Shah JL, Parker DA, Sweeney JA, Pearlson G, Clementz BA, Tamminga CA, Keshavan M. Enlarged pituitary gland volume: a possible state rather than trait marker of psychotic disorders. Psychol Med 2024; 54:1835-1843. PMID: 38357733; PMCID: PMC11132920; DOI: 10.1017/s003329172300380x.
Abstract
BACKGROUND Enlarged pituitary gland volume could be a marker of psychotic disorders. However, previous studies report conflicting results. To better understand the role of the pituitary gland in psychosis, we examined a large transdiagnostic sample of individuals with psychotic disorders. METHODS The study included 751 participants (174 with schizophrenia, 114 with schizoaffective disorder, 167 with psychotic bipolar disorder, and 296 healthy controls) across six sites in the Bipolar-Schizophrenia Network on Intermediate Phenotypes consortium. Structural magnetic resonance images were obtained, and pituitary gland volumes were measured using the MAGeT Brain algorithm. Linear mixed models examined differences between patients and controls and among patient subgroups based on diagnosis, as well as associations of pituitary volume with symptom severity, cognitive function, antipsychotic dose, and illness duration. RESULTS Mean pituitary gland volume did not significantly differ between patients and controls. No significant effect of diagnosis was observed. Larger pituitary gland volume was associated with greater symptom severity (F = 13.61, p = 0.0002), lower cognitive function (F = 4.76, p = 0.03), and higher antipsychotic dose (F = 5.20, p = 0.02). Illness duration was not significantly associated with pituitary gland volume. When all variables were considered together, only symptom severity significantly predicted pituitary gland volume (F = 7.54, p = 0.006). CONCLUSIONS Although pituitary volumes were not increased in psychotic disorders, larger size may be a marker associated with more severe symptoms in the progression of psychosis. This finding helps clarify previous inconsistent reports and highlights the need for further research into pituitary gland-related factors in individuals with psychosis.
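A minimal sketch of the kind of site-adjusted linear mixed model described above, written in Python with statsmodels; the file name and the column names (volume, group, age, site) are hypothetical, and this is not the authors' analysis code.

```python
# Sketch: site-adjusted group comparison of pituitary gland volume.
# Hypothetical file and column names; not the authors' code.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pituitary_volumes.csv")  # one row per participant

# Fixed effects for diagnostic group and age; a random intercept per
# acquisition site mirrors the multi-site consortium design.
model = smf.mixedlm("volume ~ C(group) + age", data=df, groups=df["site"])
result = model.fit()
print(result.summary())
```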
Affiliation(s)
- Synthia Guimond
- Department of Psychiatry, The Royal’s Institute of Mental Health Research, University of Ottawa, Ottawa, ON, Canada
- Department of Psychoeducation and Psychology, Université du Québec en Outaouais, Gatineau, QC, Canada
- Department of Psychiatry, Massachusetts Mental Health Center and Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA
- Ahmad Alftieh
- Department of Psychiatry, The Royal’s Institute of Mental Health Research, University of Ottawa, Ottawa, ON, Canada
- Gabriel A. Devenyi
- Department of Psychiatry, McGill University, Montréal, QC, Canada
- Douglas Mental Health University Institute, Verdun, QC, Canada
- Luke Mike
- Department of Psychiatry, Massachusetts Mental Health Center and Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA
- M. Mallar Chakravarty
- Department of Psychiatry, McGill University, Montréal, QC, Canada
- Douglas Mental Health University Institute, Verdun, QC, Canada
- Department of Biomedical Engineering, McGill University, Montréal, QC, Canada
- Cerebral Imaging Centre, Douglas Mental Health University Institute, Verdun, QC, Canada
- Jai L. Shah
- Department of Psychiatry, McGill University, Montréal, QC, Canada
- Douglas Mental Health University Institute, Verdun, QC, Canada
- David A. Parker
- Department of Psychology, BioImaging Research Center, University of Georgia, Athens, GA, USA
- Department of Neuroscience, BioImaging Research Center, University of Georgia, Athens, GA, USA
- Department of Human Genetics, Emory University School of Medicine, Atlanta, GA, USA
- John A. Sweeney
- Department of Psychiatry, University of Cincinnati, Cincinnati, OH, USA
- Godfrey Pearlson
- Department of Psychiatry, Yale University, New Haven, CT, USA
- Department of Neuroscience, Yale University, New Haven, CT, USA
- Brett A. Clementz
- Department of Psychology, BioImaging Research Center, University of Georgia, Athens, GA, USA
- Department of Neuroscience, BioImaging Research Center, University of Georgia, Athens, GA, USA
- Carol A. Tamminga
- Department of Psychiatry, UT Southwestern Medical Center, Dallas, TX, USA
- Matcheri Keshavan
- Department of Psychiatry, Massachusetts Mental Health Center and Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA
2. McDonald BA, Dal Bello R, Fuller CD, Balermpas P. The Use of MR-Guided Radiation Therapy for Head and Neck Cancer and Recommended Reporting Guidance. Semin Radiat Oncol 2024; 34:69-83. PMID: 38105096; PMCID: PMC11372437; DOI: 10.1016/j.semradonc.2023.10.003.
Abstract
Although magnetic resonance imaging (MRI) has become standard diagnostic workup for head and neck malignancies and is currently recommended by most radiological societies for pharyngeal and oral carcinomas, its utilization in radiotherapy has been heterogeneous over the last decades. However, implementing MRI for the annotation of target volumes and organs at risk provides several advantages, and its use for this purpose is widely accepted. Today, the term MR-guidance has taken on a much broader meaning, encompassing MRI for adaptive treatments, MR-gating and tracking during radiotherapy delivery, MR features as biomarkers, and MR-only workflows. The first studies on treating head and neck cancer with commercially available dedicated hybrid platforms (MR-linacs), which share common features but also differ among themselves, have recently been reported, as has "biological adaptation" based on early treatment response assessed with functional MRI sequences such as diffusion-weighted imaging. Yet all of these approaches to head and neck treatment remain in their infancy, especially compared with other radiotherapy indications. Moreover, the lack of standardization in reporting MR-guided radiotherapy is a major obstacle both to further progress in the field and to conducting and comparing clinical trials. The goals of this article are to present and explain the different aspects of MR-guidance for radiotherapy of head and neck cancer, to summarize the evidence as well as the possible advantages and challenges of the method, and to provide comprehensive reporting guidance for use in clinical routine and trials.
Affiliation(s)
- Brigid A McDonald
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX
- Riccardo Dal Bello
- Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Zurich, Switzerland
- Clifton D Fuller
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX
- Panagiotis Balermpas
- Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Zurich, Switzerland
3. Vaassen F, Zegers CML, Hofstede D, Wubbels M, Beurskens H, Verheesen L, Canters R, Looney P, Battye M, Gooding MJ, Compter I, Eekers DBP, van Elmpt W. Geometric and dosimetric analysis of CT- and MR-based automatic contouring for the EPTN contouring atlas in neuro-oncology. Phys Med 2023; 114:103156. PMID: 37813050; DOI: 10.1016/j.ejmp.2023.103156.
Abstract
PURPOSE Atlas-based and deep-learning contouring (DLC) are methods for automatic segmentation of organs-at-risk (OARs). The European Particle Therapy Network (EPTN) published a consensus-based atlas for the delineation of OARs in neuro-oncology. In this study, automatically segmented neuro-oncological OARs were evaluated geometrically and dosimetrically using CT- and MR-based models following the EPTN contouring atlas. METHODS Image and contouring data from 76 neuro-oncological patients were included. Two atlas-based models (CT-atlas and MR-atlas) and one DLC model (MR-DLC) were created. Manual contours on registered CT-MR images were used as ground truth. Results were analyzed in terms of geometric accuracy (volumetric Dice similarity coefficient (vDSC), surface DSC (sDSC), added path length (APL), and mean slice-wise Hausdorff distance (MSHD)) and dosimetric accuracy. A distance-to-tumor analysis was performed to determine the extent to which the location of the OAR relative to the planning target volume (PTV) has dosimetric impact, using Wilcoxon rank-sum tests. RESULTS CT-atlas outperformed MR-atlas for 22/26 OARs. MR-DLC outperformed MR-atlas for all OARs. The highest median (95% CI) vDSC and sDSC were found for the brainstem in MR-DLC, 0.92 (0.88-0.95) and 0.84 (0.77-0.89) respectively, as was the lowest MSHD, 0.27 (0.22-0.39) cm. Median dose differences (ΔD) were within ±1 Gy for 24/26 (92%) OARs for all three models. The distance-to-tumor analysis showed a significant correlation for the ΔDmax,0.03cc parameters when splitting the data into ≤4 cm and >4 cm OAR-tumor distances (p < 0.001). CONCLUSION MR-based DLC and CT-based atlas contouring enable high-quality segmentation, and a combination of CT- and MR-based autocontouring models yields the best quality.
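As an illustration of two of the geometric metrics used above, here is a minimal numpy/scipy sketch of the volumetric DSC and the mean slice-wise Hausdorff distance, assuming two co-registered binary masks with the slice dimension on the first axis; it is a simplified stand-in, not the study's implementation.

```python
# Sketch: volumetric DSC and mean slice-wise Hausdorff distance (MSHD)
# for two co-registered binary masks (slices along the first axis).
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def volumetric_dsc(a, b):
    """Volumetric Dice similarity coefficient of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def mean_slicewise_hausdorff(a, b):
    """Mean over slices of the symmetric Hausdorff distance (voxel units)."""
    dists = []
    for z in range(a.shape[0]):
        pa, pb = np.argwhere(a[z]), np.argwhere(b[z])
        if len(pa) and len(pb):
            dists.append(max(directed_hausdorff(pa, pb)[0],
                             directed_hausdorff(pb, pa)[0]))
    return float(np.mean(dists)) if dists else float("nan")
```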
Affiliation(s)
- Femke Vaassen
- Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre(+), Maastricht, the Netherlands
- Catharina M L Zegers
- Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre(+), Maastricht, the Netherlands
- David Hofstede
- Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre(+), Maastricht, the Netherlands
- Mart Wubbels
- Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre(+), Maastricht, the Netherlands
- Hilde Beurskens
- Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre(+), Maastricht, the Netherlands
- Lindsey Verheesen
- Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre(+), Maastricht, the Netherlands
- Richard Canters
- Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre(+), Maastricht, the Netherlands
- Inge Compter
- Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre(+), Maastricht, the Netherlands
- Daniëlle B P Eekers
- Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre(+), Maastricht, the Netherlands
- Wouter van Elmpt
- Department of Radiation Oncology (Maastro), GROW School for Oncology and Reproduction, Maastricht University Medical Centre(+), Maastricht, the Netherlands
4. Turcas A, Leucuta D, Balan C, Clementel E, Gheara C, Kacso A, Kelly SM, Tanasa D, Cernea D, Achimas-Cadariu P. Deep-learning magnetic resonance imaging-based automatic segmentation for organs-at-risk in the brain: Accuracy and impact on dose distribution. Phys Imaging Radiat Oncol 2023; 27:100454. PMID: 37333894; PMCID: PMC10276287; DOI: 10.1016/j.phro.2023.100454.
Abstract
Background and purpose Normal tissue sparing in radiotherapy relies on proper delineation. While manual contouring is time-consuming and subject to inter-observer variability, auto-contouring could optimize workflows and harmonize practice. We assessed the accuracy of a commercial, deep-learning, MRI-based tool for delineating organs-at-risk in the brain. Materials and methods Thirty adult brain tumor patients were retrospectively manually recontoured. Two additional structure sets were obtained: AI (artificial intelligence) and AIedit (manually corrected auto-contours). For 15 selected cases, identical plans were optimized for each structure set. We used the Dice similarity coefficient (DSC) and mean surface distance (MSD) for geometric comparison, and gamma analysis and dose-volume histogram comparison for dose metric evaluation. The Wilcoxon signed-rank test was used for paired data, the Spearman coefficient (ρ) for correlations, and Bland-Altman plots to assess the level of agreement. Results Auto-contouring was significantly faster than manual contouring (1.1 vs 20 min, p < 0.01). Median DSC/MSD were 0.7/0.9 mm for AI and 0.8/0.5 mm for AIedit. DSC was significantly correlated with structure size (ρ = 0.76, p < 0.01), with higher DSC for large structures. The median gamma pass rate was 74% (71-81%) for Plan_AI and 82% (75-86%) for Plan_AIedit, with no correlation with DSC or MSD. Differences between Dmean_AI and Dmean_Ref were ≤0.2 Gy (p < 0.05). The dose difference was moderately correlated with DSC. Bland-Altman plots showed minimal discrepancy (0.1/0) between AI and reference Dmean/Dmax. Conclusions The AI model showed good accuracy for large structures, but development is still required for smaller ones. Auto-segmentation was significantly faster, with minor differences in dose distribution caused by geometric variations.
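The Bland-Altman analysis used above can be sketched in a few lines; the paired dose values below are hypothetical illustrations, not data from the study.

```python
# Sketch: Bland-Altman bias and 95% limits of agreement for paired doses.
import numpy as np

def bland_altman(ref, test):
    diff = np.asarray(test) - np.asarray(ref)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)

# Hypothetical paired mean doses (Gy): reference vs auto-contour plans.
dmean_ref = np.array([12.1, 30.4, 4.2, 18.9, 25.0])
dmean_ai = np.array([12.3, 30.1, 4.1, 19.2, 24.8])
bias, loa = bland_altman(dmean_ref, dmean_ai)
print(f"bias = {bias:.2f} Gy, limits of agreement = {loa}")
```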
Affiliation(s)
- Andrada Turcas
- The European Organisation for Research and Treatment of Cancer (EORTC) Headquarters, RTQA, Brussels, Belgium
- SIOP Europe, The European Society for Paediatric Oncology (SIOPE), QUARTET Project, Brussels, Belgium
- University of Medicine and Pharmacy “Iuliu Hatieganu”, Oncology Department, Cluj-Napoca, Romania
- Oncology Institute “Prof. Dr. Ion Chiricuta”, Radiotherapy Department, Cluj-Napoca, Romania
- Daniel Leucuta
- University of Medicine and Pharmacy “Iuliu Hatieganu”, Department of Medical Informatics and Biostatistics, Cluj-Napoca, Romania
- Cristina Balan
- Oncology Institute “Prof. Dr. Ion Chiricuta”, Radiotherapy Department, Cluj-Napoca, Romania
- “Babes-Bolyai” University, Faculty of Physics, Cluj-Napoca, Romania
- Enrico Clementel
- The European Organisation for Research and Treatment of Cancer (EORTC) Headquarters, RTQA, Brussels, Belgium
- Cristina Gheara
- Oncology Institute “Prof. Dr. Ion Chiricuta”, Radiotherapy Department, Cluj-Napoca, Romania
- “Babes-Bolyai” University, Faculty of Physics, Cluj-Napoca, Romania
- Alex Kacso
- University of Medicine and Pharmacy “Iuliu Hatieganu”, Oncology Department, Cluj-Napoca, Romania
- Oncology Institute “Prof. Dr. Ion Chiricuta”, Radiotherapy Department, Cluj-Napoca, Romania
- Sarah M. Kelly
- The European Organisation for Research and Treatment of Cancer (EORTC) Headquarters, RTQA, Brussels, Belgium
- SIOP Europe, The European Society for Paediatric Oncology (SIOPE), QUARTET Project, Brussels, Belgium
- Delia Tanasa
- Oncology Institute “Prof. Dr. Ion Chiricuta”, Radiotherapy Department, Cluj-Napoca, Romania
- Dana Cernea
- Oncology Institute “Prof. Dr. Ion Chiricuta”, Radiotherapy Department, Cluj-Napoca, Romania
- Patriciu Achimas-Cadariu
- University of Medicine and Pharmacy “Iuliu Hatieganu”, Oncology Department, Cluj-Napoca, Romania
- Oncology Institute “Prof. Dr. Ion Chiricuta”, Surgery Department, Cluj-Napoca, Romania
5. Jin R, Cai Y, Zhang S, Yang T, Feng H, Jiang H, Zhang X, Hu Y, Liu J. Computational approaches for the reconstruction of optic nerve fibers along the visual pathway from medical images: a comprehensive review. Front Neurosci 2023; 17:1191999. PMID: 37304011; PMCID: PMC10250625; DOI: 10.3389/fnins.2023.1191999.
Abstract
Optic nerve fibers in the visual pathway play a significant role in vision formation. Damage to optic nerve fibers is a biomarker for the diagnosis of various ophthalmological and neurological diseases, and there is also a need to protect optic nerve fibers from damage during neurosurgery and radiation therapy. Reconstruction of optic nerve fibers from medical images can facilitate all of these clinical applications. Although many computational methods have been developed for the reconstruction of optic nerve fibers, a comprehensive review of these methods is still lacking. This paper describes the two strategies for optic nerve fiber reconstruction applied in existing studies: image segmentation and fiber tracking. Compared with image segmentation, fiber tracking can delineate more detailed structures of optic nerve fibers. For each strategy, both conventional and AI-based approaches are introduced, with the latter usually demonstrating better performance than the former. From this review, we conclude that AI-based methods are the trend for optic nerve fiber reconstruction and that new techniques such as generative AI can help address the current challenges.
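To make the fiber-tracking strategy concrete, a toy sketch of deterministic streamline propagation is shown below: starting from a seed, the position is repeatedly advanced along a precomputed field of principal diffusion directions. This is a didactic simplification (nearest-neighbor lookup, Euler steps), not a production tractography algorithm.

```python
# Toy sketch of deterministic fiber tracking: Euler steps along a
# precomputed field of principal diffusion directions (unit vectors).
# Didactic only -- real tractography adds interpolation, curvature and
# anatomical stopping criteria.
import numpy as np

def track_streamline(peak_dirs, seed, step=0.5, n_steps=200):
    pos = np.asarray(seed, dtype=float)
    prev = None
    points = [pos.copy()]
    for _ in range(n_steps):
        idx = tuple(np.round(pos).astype(int))
        if not all(0 <= i < s for i, s in zip(idx, peak_dirs.shape[:3])):
            break                     # stepped outside the volume
        d = peak_dirs[idx]
        if not np.any(d):
            break                     # left the tracking mask
        if prev is not None and np.dot(d, prev) < 0:
            d = -d                    # keep a consistent orientation
        pos = pos + step * d
        prev = d
        points.append(pos.copy())
    return np.array(points)
```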
Affiliation(s)
- Richu Jin
- Research Institute of Trustworthy Autonomous Systems, Southern University of Science and Technology, Shenzhen, China
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Yongning Cai
- Research Institute of Trustworthy Autonomous Systems, Southern University of Science and Technology, Shenzhen, China
- Shiyang Zhang
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Ting Yang
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Haibo Feng
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Hongyang Jiang
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Xiaoqing Zhang
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Yan Hu
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Jiang Liu
- Research Institute of Trustworthy Autonomous Systems, Southern University of Science and Technology, Shenzhen, China
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Guangdong Provincial Key Laboratory of Brain-inspired Intelligent Computation, Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
6. van Elst S, de Bloeme CM, Noteboom S, de Jong MC, Moll AC, Göricke S, de Graaf P, Caan MWA. Automatic segmentation and quantification of the optic nerve on MRI using a 3D U-Net. J Med Imaging (Bellingham) 2023; 10:034501. PMID: 37197374; PMCID: PMC10185127; DOI: 10.1117/1.jmi.10.3.034501.
Abstract
Purpose Pathological conditions associated with the optic nerve (ON) can cause structural changes in the nerve. Quantifying these changes could provide further understanding of disease mechanisms. We aim to develop a framework that automatically segments the ON separately from its surrounding cerebrospinal fluid (CSF) on magnetic resonance imaging (MRI) and quantifies the diameter and cross-sectional area along the entire length of the nerve. Approach Multicenter data were obtained from retinoblastoma referral centers, providing a heterogeneous dataset of 40 high-resolution 3D T2-weighted MRI scans with manual ground truth delineations of both ONs. A 3D U-Net was used for ON segmentation, and performance was assessed in a tenfold cross-validation (n = 32) and on a separate test set (n = 8) by measuring spatial, volumetric, and distance agreement with the manual ground truths. Segmentations were used to quantify the diameter and cross-sectional area along the length of the ON, using centerline extraction of tubular 3D surface models. Absolute agreement between automated and manual measurements was assessed by the intraclass correlation coefficient (ICC). Results The segmentation network achieved high performance, with a mean Dice similarity coefficient of 0.84, a median Hausdorff distance of 0.64 mm, and an ICC of 0.95 on the test set. The quantification method obtained acceptable correspondence to manual reference measurements, with mean ICC values of 0.76 for the diameter and 0.71 for the cross-sectional area. Compared with other methods, our method precisely identifies the ON from surrounding CSF and accurately estimates its diameter along the nerve's centerline. Conclusions Our automated framework provides an objective method for ON assessment in vivo.
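The diameter quantification described above can be approximated with a distance-transform trick: the Euclidean distance transform of the mask gives the radius of the largest inscribed sphere at every voxel, and sampling it on the skeleton yields local diameter estimates. A minimal sketch, assuming an isotropic binary mask; the authors' centerline extraction from tubular 3D surface models is more elaborate.

```python
# Sketch: local diameter along a tubular structure from a binary mask.
# The Euclidean distance transform gives the largest inscribed radius at
# each voxel; sampling it on the skeleton approximates the diameter.
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize  # skeletonize_3d in older versions

def centerline_diameters(mask, spacing_mm=1.0):
    """Approximate diameters (mm) at centerline voxels of an isotropic mask."""
    radius = distance_transform_edt(mask) * spacing_mm
    skeleton = skeletonize(mask.astype(bool))
    return 2.0 * radius[skeleton]
```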
Affiliation(s)
- Sabien van Elst
- Amsterdam UMC location Vrije Universiteit Amsterdam, Department of Radiology and Nuclear Medicine, Amsterdam, The Netherlands
- Cancer Center Amsterdam, Imaging and Biomarkers, Amsterdam, The Netherlands
- Christiaan M. de Bloeme
- Amsterdam UMC location Vrije Universiteit Amsterdam, Department of Radiology and Nuclear Medicine, Amsterdam, The Netherlands
- Cancer Center Amsterdam, Imaging and Biomarkers, Amsterdam, The Netherlands
- Samantha Noteboom
- Amsterdam UMC location Vrije Universiteit Amsterdam, Department of Anatomy and Neurosciences, Amsterdam, The Netherlands
- Marcus C. de Jong
- Amsterdam UMC location Vrije Universiteit Amsterdam, Department of Radiology and Nuclear Medicine, Amsterdam, The Netherlands
- Cancer Center Amsterdam, Imaging and Biomarkers, Amsterdam, The Netherlands
- Annette C. Moll
- Cancer Center Amsterdam, Imaging and Biomarkers, Amsterdam, The Netherlands
- Amsterdam UMC location Vrije Universiteit Amsterdam, Department of Ophthalmology, Amsterdam, The Netherlands
- Sophia Göricke
- University Hospital Essen, Institute of Diagnostic and Interventional Radiology and Neuroradiology, Essen, Germany
- Pim de Graaf
- Amsterdam UMC location Vrije Universiteit Amsterdam, Department of Radiology and Nuclear Medicine, Amsterdam, The Netherlands
- Cancer Center Amsterdam, Imaging and Biomarkers, Amsterdam, The Netherlands
- Matthan W. A. Caan
- Amsterdam UMC location University of Amsterdam, Department of Biomedical Engineering and Physics, Amsterdam, The Netherlands
7. Eidex Z, Ding Y, Wang J, Abouei E, Qiu RL, Liu T, Wang T, Yang X. Deep Learning in MRI-guided Radiation Therapy: A Systematic Review. arXiv 2023; arXiv:2303.11378v2. PMID: 36994167; PMCID: PMC10055493.
Abstract
MRI-guided radiation therapy (MRgRT) offers a precise and adaptive approach to treatment planning. Deep learning applications that augment the capabilities of MRgRT are systematically reviewed, with emphasis placed on the underlying methods. Studies are categorized into the areas of segmentation, synthesis, radiomics, and real-time MRI. Finally, clinical implications, current challenges, and future directions are discussed.
Affiliation(s)
- Zach Eidex
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA
- Yifu Ding
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Jing Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Elham Abouei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Richard L.J. Qiu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Tian Liu
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, NY
- Tonghe Wang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA
8. Lin CY, Chou LS, Wu YH, Kuo JS, Mehta MP, Shiau AC, Liang JA, Hsu SM, Wang TH. Developing an AI-assisted planning pipeline for hippocampal avoidance whole brain radiotherapy. Radiother Oncol 2023; 181:109528. PMID: 36773828; DOI: 10.1016/j.radonc.2023.109528.
Abstract
BACKGROUND AND PURPOSE Hippocampal avoidance whole brain radiotherapy (HA-WBRT) is effective for controlling disease and preserving neurocognitive function in patients with brain metastases. However, contouring and planning for HA-WBRT are complex and time-consuming. We designed and evaluated a pipeline using deep learning tools for a fully automated treatment planning workflow to generate HA-WBRT radiotherapy plans. MATERIALS AND METHODS We retrospectively collected 50 adult patients who received HA-WBRT. Following RTOG-0933 clinical trial protocol guidelines, all organs-at-risk (OARs) and the clinical target volume (CTV) were contoured by experienced radiation oncologists. A deep-learning segmentation model was designed and trained. Next, we developed a volumetric-modulated arc therapy (VMAT) auto-planning algorithm for 30 Gy in 10 fractions. Automated segmentations were evaluated using the Dice similarity coefficient (DSC) and 95th-percentile Hausdorff distance (95% HD). Auto-plans were evaluated by the percentage of the planning target volume (PTV) receiving 30 Gy (V30Gy), the conformity index (CI) and homogeneity index (HI) of the PTV, the minimum dose (D100%) and maximum dose (Dmax) for the hippocampus, and Dmax for the lens, eyes, optic nerves, brainstem, and chiasm. RESULTS We developed a deep-learning segmentation model and an auto-planning script. For the 10 cases in the independent test set, the overall average DSC was greater than 0.8 and the 95% HD was less than 7 mm. All auto-plans met the RTOG-0933 criteria. Automatic creation of an HA-WBRT plan took about 10 minutes. CONCLUSIONS An artificial intelligence (AI)-assisted pipeline using deep learning tools can rapidly and accurately generate clinically acceptable HA-WBRT plans with minimal manual intervention, increasing the efficiency of this treatment for brain metastases.
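For reference, the plan metrics named above can be computed from a dose grid and a PTV mask roughly as follows. Definitions of CI and HI vary between protocols, so the Paddick CI and the (D2% - D98%) / D50% HI used here are common textbook forms, not necessarily the exact RTOG-0933 formulas.

```python
# Sketch: coverage, conformity and homogeneity from a dose grid (Gy) and
# a binary PTV mask. Paddick CI and (D2% - D98%) / D50% HI are common
# textbook forms, not necessarily the exact RTOG-0933 definitions.
import numpy as np

def plan_metrics(dose, ptv, rx=30.0):
    ptv = ptv.astype(bool)
    d_ptv = dose[ptv]
    piv = dose >= rx                            # prescription isodose volume
    v_rx = (d_ptv >= rx).mean()                 # fraction of PTV getting rx
    tv_piv = np.logical_and(ptv, piv).sum()
    ci = tv_piv**2 / (ptv.sum() * piv.sum())    # Paddick conformity index
    d2, d50, d98 = np.percentile(d_ptv, [98, 50, 2])  # D2%, D50%, D98%
    hi = (d2 - d98) / d50                       # homogeneity index
    return v_rx, ci, hi
```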
Affiliation(s)
- Chih-Yuan Lin
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, Taiwan; Medical Physics and Radiation Measurements Laboratory, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Lin-Shan Chou
- Division of Radiation Oncology, Department of Oncology, Taipei Veterans General Hospital, Taipei, Taiwan
- Yuan-Hung Wu
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, Taiwan; Medical Physics and Radiation Measurements Laboratory, National Yang Ming Chiao Tung University, Taipei, Taiwan; Division of Radiation Oncology, Department of Oncology, Taipei Veterans General Hospital, Taipei, Taiwan; School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- John S Kuo
- Neuroscience and Brain Disease Center, China Medical University, Taichung, Taiwan; Graduate Institute of Biomedical Sciences, China Medical University, Taichung, Taiwan; Department of Neurosurgery, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
- Minesh P Mehta
- Department of Radiation Oncology, Miami Cancer Institute, Baptist Health South Florida, Miami, Florida, USA; Florida International University, Miami, Florida, USA
- An-Cheng Shiau
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, Taiwan; Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung, Taiwan; Department of Radiation Oncology, China Medical University Hospital, Taichung, Taiwan
- Ji-An Liang
- Department of Radiation Oncology, China Medical University Hospital, Taichung, Taiwan; Department of Medicine, China Medical University, Taichung, Taiwan
- Shih-Ming Hsu
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, Taiwan; Medical Physics and Radiation Measurements Laboratory, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Ti-Hao Wang
- Department of Radiation Oncology, China Medical University Hospital, Taichung, Taiwan
9. Hsu K, Yuh DY, Lin SC, Lyu PS, Pan GX, Zhuang YC, Chang CC, Peng HH, Lee TY, Juan CH, Juan CE, Liu YJ, Juan CJ. Improving performance of deep learning models using 3.5D U-Net via majority voting for tooth segmentation on cone beam computed tomography. Sci Rep 2022; 12:19809. PMID: 36396696; PMCID: PMC9672125; DOI: 10.1038/s41598-022-23901-7.
Abstract
Deep learning allows automatic segmentation of teeth on cone beam computed tomography (CBCT). However, the segmentation performance of deep learning varies among different training strategies. Our aim was to propose a 3.5D U-Net to improve the performance of the U-Net in segmenting teeth on CBCT. This study retrospectively enrolled 24 patients who received CBCT. Five U-Nets (2Da U-Net, 2Dc U-Net, 2Ds U-Net, 2.5Da U-Net, and 3D U-Net) were trained to segment the teeth. Four additional U-Nets (2.5Dv U-Net, 3.5Dv5 U-Net, 3.5Dv4 U-Net, and 3.5Dv3 U-Net) were obtained by majority voting. Mathematical morphology operations, namely erosion and dilation (E&D), were applied to remove diminutive noise speckles. Segmentation performance was evaluated by fourfold cross-validation using the Dice similarity coefficient (DSC), accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). The Kruskal-Wallis test with post hoc analysis using Bonferroni correction was used for group comparison, with P < 0.05 considered statistically significant. The performance of the U-Nets varied significantly among the different training strategies for teeth segmentation on CBCT (P < 0.05). The 3.5Dv5 U-Net and 2.5Dv U-Net showed DSC and PPV significantly higher than any of the five originally trained U-Nets (all P < 0.05). E&D significantly improved the DSC, accuracy, specificity, and PPV (all P < 0.005). Overall, the segmentation performance of the U-Net can be improved by majority voting and E&D, with the 3.5Dv5 U-Net achieving the highest DSC and accuracy among all U-Nets.
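Majority voting and the E&D cleanup are straightforward to express; the sketch below fuses binary predictions from several U-Nets and removes small speckles, assuming co-registered masks. It illustrates the general technique, not the paper's code.

```python
# Sketch: majority voting across U-Net predictions plus an E&D cleanup.
# Assumes co-registered binary masks; illustrative, not the paper's code.
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def majority_vote(masks):
    """Label a voxel foreground when more than half of the models agree."""
    stack = np.stack([np.asarray(m, dtype=bool) for m in masks])
    return stack.sum(axis=0) > len(masks) // 2

def erode_dilate(mask, iterations=1):
    """Opening-style cleanup that removes diminutive noise speckles."""
    return binary_dilation(binary_erosion(mask, iterations=iterations),
                           iterations=iterations)

# fused = erode_dilate(majority_vote([pred_a, pred_c, pred_s, pred_25, pred_3d]))
```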
Affiliation(s)
- Kang Hsu
- Department of Periodontology, School of Dentistry, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan, ROC
- School of Dentistry and Graduate Institute of Dental Science, National Defense Medical Center, Taipei, Taiwan, ROC
- Da-Yo Yuh
- Department of Periodontology, School of Dentistry, Tri-Service General Hospital, National Defense Medical Center, Taipei, Taiwan, ROC
- Shao-Chieh Lin
- Department of Medical Imaging, China Medical University Hsinchu Hospital, No. 199, Sec. 1, Xinglong Rd., Zhubei, Hsinchu 302, Taiwan, ROC
- Ph.D. Program in Electrical and Communication Engineering, Feng Chia University, Taichung, Taiwan, ROC
- Pin-Sian Lyu
- Department of Medical Imaging, China Medical University Hsinchu Hospital, No. 199, Sec. 1, Xinglong Rd., Zhubei, Hsinchu 302, Taiwan, ROC
- Department of Automatic Control Engineering, Feng Chia University, No. 100 Wenhwa Rd., Seatwen, Taichung 40724, Taiwan, ROC
- Guan-Xin Pan
- Department of Medical Imaging, China Medical University Hsinchu Hospital, No. 199, Sec. 1, Xinglong Rd., Zhubei, Hsinchu 302, Taiwan, ROC
- Master’s Program of Biomedical Informatics and Biomedical Engineering, Feng Chia University, Taichung, Taiwan, ROC
- Yi-Chun Zhuang
- Department of Medical Imaging, China Medical University Hsinchu Hospital, No. 199, Sec. 1, Xinglong Rd., Zhubei, Hsinchu 302, Taiwan, ROC
- Master’s Program of Biomedical Informatics and Biomedical Engineering, Feng Chia University, Taichung, Taiwan, ROC
- Chia-Ching Chang
- Department of Medical Imaging, China Medical University Hsinchu Hospital, No. 199, Sec. 1, Xinglong Rd., Zhubei, Hsinchu 302, Taiwan, ROC
- Department of Management Science, National Yang Ming Chiao Tung University, Taipei, Taiwan, ROC
- Hsu-Hsia Peng
- Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan, ROC
- Tung-Yang Lee
- Master’s Program of Biomedical Informatics and Biomedical Engineering, Feng Chia University, Taichung, Taiwan, ROC
- Cheng Ching Hospital, Taichung, Taiwan, ROC
- Cheng-Hsuan Juan
- Department of Medical Imaging, China Medical University Hsinchu Hospital, No. 199, Sec. 1, Xinglong Rd., Zhubei, Hsinchu 302, Taiwan, ROC
- Master’s Program of Biomedical Informatics and Biomedical Engineering, Feng Chia University, Taichung, Taiwan, ROC
- Cheng Ching Hospital, Taichung, Taiwan, ROC
- Cheng-En Juan
- Department of Automatic Control Engineering, Feng Chia University, No. 100 Wenhwa Rd., Seatwen, Taichung 40724, Taiwan, ROC
- Yi-Jui Liu
- Department of Automatic Control Engineering, Feng Chia University, No. 100 Wenhwa Rd., Seatwen, Taichung 40724, Taiwan, ROC
- Chun-Jung Juan
- Department of Medical Imaging, China Medical University Hsinchu Hospital, No. 199, Sec. 1, Xinglong Rd., Zhubei, Hsinchu 302, Taiwan, ROC
- Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan, ROC
- Department of Radiology, School of Medicine, College of Medicine, China Medical University, Taichung, Taiwan, ROC
- Department of Medical Imaging, China Medical University Hospital, Taichung, Taiwan, ROC
- Department of Biomedical Engineering, National Defense Medical Center, Taipei, Taiwan, ROC
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan, ROC
10. Crouzen JA, Petoukhova AL, Wiggenraad RGJ, Hutschemaekers S, Gadellaa-van Hooijdonk CGM, van der Voort van Zyp NCMG, Mast ME, Zindler JD. Development and evaluation of an automated EPTN-consensus based organ at risk atlas in the brain on MRI. Radiother Oncol 2022; 173:262-268. PMID: 35714807; DOI: 10.1016/j.radonc.2022.06.004.
Abstract
BACKGROUND AND PURPOSE During radiotherapy treatment planning, avoidance of organs at risk (OARs) is important. An international consensus-based delineation guideline covering 34 OARs in the brain was recently published. We developed an MR-based OAR autosegmentation atlas and evaluated its performance against manual delineation. MATERIALS AND METHODS Anonymized cerebral T1-weighted MR scans (voxel size 0.9 × 0.9 × 0.9 mm³) were available. OARs were manually delineated according to the international consensus. Fifty MR scans were used to develop the autosegmentation atlas in a commercially available treatment planning system (RayStation®). The performance of this atlas was tested on another 40 MR scans by automatically delineating the 34 OARs defined by the 2018 EPTN consensus. Spatial overlap between manual and automated delineations was determined by calculating the Dice similarity coefficient (DSC). Two radiation oncologists rated the quality of each automatically delineated OAR. The time needed to delineate all OARs manually or to adjust the automatically delineated OARs was recorded. RESULTS DSC was ≥0.75 for 31 (91%) of 34 automated OAR delineations. The radiation oncologists rated 29 (85%) of 34 OAR delineations as excellent or good, 4 as fair (12%), and 1 as poor (3%). Interobserver agreement between the radiation oncologists ranged from 77% to 100% per OAR. Manual delineation of all OARs took 88.5 minutes, whereas adjusting the automatically delineated OARs took 15.8 minutes. CONCLUSION Autosegmentation of OARs enables high-quality contouring within a limited time. Accurate OAR delineation helps to define OAR constraints to mitigate serious complications and supports the development of NTCP models.
Affiliation(s)
- Jeroen A Crouzen
- Haaglanden Medical Center, Department of Radiotherapy, Leidschendam, The Netherlands
- Anna L Petoukhova
- Haaglanden Medical Center, Department of Medical Physics, Leidschendam, The Netherlands
- Ruud G J Wiggenraad
- Haaglanden Medical Center, Department of Radiotherapy, Leidschendam, The Netherlands
- Stefan Hutschemaekers
- Haaglanden Medical Center, Department of Radiotherapy, Leidschendam, The Netherlands
- Mirjam E Mast
- Haaglanden Medical Center, Department of Radiotherapy, Leidschendam, The Netherlands
- Jaap D Zindler
- Haaglanden Medical Center, Department of Radiotherapy, Leidschendam, The Netherlands
11. Puzniak RJ, Prabhakaran GT, Hoffmann MB. Deep Learning-Based Detection of Malformed Optic Chiasms From MRI Images. Front Neurosci 2021; 15:755785. PMID: 34759795; PMCID: PMC8573410; DOI: 10.3389/fnins.2021.755785.
Abstract
Convolutional neural network (CNN) models hold great promise for aiding the segmentation and analysis of brain structures. Here, we tested whether a CNN trained to segment normal optic chiasms from T1w magnetic resonance imaging (MRI) images can also be applied to abnormal chiasms, specifically those with the optic nerve misrouting typical of human albinism. We performed supervised training of the CNN on the T1w images of control participants (n = 1049) from the Human Connectome Project (HCP) repository, using automatically generated algorithm-based optic chiasm masks. The trained CNN was subsequently tested on data from persons with albinism (PWA; n = 9) and controls (n = 8) from the CHIASM repository. The quality of the resulting segmentations was assessed by comparison with manually defined optic chiasm masks using the Dice similarity coefficient (DSC). The results revealed contrasting mask quality for control data (mean DSC ± SEM = 0.75 ± 0.03) and PWA data (0.43 ± 0.08, FWE-corrected p = 0.04). The failure of the CNN to recognize the optic chiasm in the chiasmal abnormalities of PWA underlines the fundamental differences in their spatial features. This finding provides proof of concept for a novel deep-learning-based approach to diagnosing chiasmal misrouting from T1w images, as well as a basis for further analyses of chiasmal misrouting and its impact on the structure and function of the visual system.
Affiliation(s)
- Robert J Puzniak
- Visual Processing Lab, Department of Ophthalmology, Otto-von-Guericke-University, Magdeburg, Germany
- Gokulraj T Prabhakaran
- Visual Processing Lab, Department of Ophthalmology, Otto-von-Guericke-University, Magdeburg, Germany
- Michael B Hoffmann
- Visual Processing Lab, Department of Ophthalmology, Otto-von-Guericke-University, Magdeburg, Germany
- Center for Behavioral Brain Sciences, Otto-von-Guericke-University, Magdeburg, Germany
12. Minnema J, Wolff J, Koivisto J, Lucka F, Batenburg KJ, Forouzanfar T, van Eijnatten M. Comparison of convolutional neural network training strategies for cone-beam CT image segmentation. Comput Methods Programs Biomed 2021; 207:106192. PMID: 34062493; DOI: 10.1016/j.cmpb.2021.106192.
Abstract
BACKGROUND AND OBJECTIVE Over the past decade, convolutional neural networks (CNNs) have revolutionized the field of medical image segmentation. Prompted by developments in computational resources and the availability of large datasets, a wide variety of two-dimensional (2D) and three-dimensional (3D) CNN training strategies have been proposed. However, a systematic comparison of the impact of these strategies on image segmentation performance is still lacking. Therefore, this study compared eight CNN training strategies: 2D (axial, sagittal, and coronal slices), 2.5D (3 and 5 adjacent slices), majority voting, randomly oriented 2D cross-sections, and 3D patches. METHODS These eight strategies were used to train a U-Net and an MS-D network for the segmentation of simulated cone-beam computed tomography (CBCT) images comprising randomly placed non-overlapping cylinders, and of experimental CBCT images of anthropomorphic phantom heads. The resulting segmentation performances were quantitatively compared by calculating Dice similarity coefficients. In addition, all segmented and gold-standard experimental CBCT images were converted into virtual 3D models and compared using orientation-based surface comparisons. RESULTS The CNN training strategy that generally resulted in the best performance on both simulated and experimental CBCT images was majority voting. When employing 2D training strategies, segmentation performance can be optimized by training on image slices perpendicular to the predominant orientation of the anatomical structure of interest. Such spatial features should be taken into account when choosing or developing novel CNN training strategies for medical image segmentation. CONCLUSIONS The results of this study will help clinicians and engineers choose the most suitable CNN training strategy for CBCT image segmentation.
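A 2.5D sample is simply a small stack of adjacent slices fed to the network as channels. A minimal sketch, assuming a (Z, H, W) volume with edge slices repeated at the boundaries:

```python
# Sketch: building a 2.5D training sample -- n adjacent axial slices
# stacked as channels, with edge slices repeated at the boundaries.
import numpy as np

def stack_adjacent_slices(volume, z, n=3):
    """Return an (n, H, W) channel stack centred on slice z of (Z, H, W)."""
    half = n // 2
    idx = np.clip(np.arange(z - half, z + half + 1), 0, volume.shape[0] - 1)
    return volume[idx]
```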
Affiliation(s)
- Jordi Minnema
- Department of Oral and Maxillofacial Surgery/Pathology, 3D Innovationlab, Amsterdam UMC and Academic Centre for Dentistry Amsterdam (ACTA), Vrije Universiteit Amsterdam, Amsterdam Movement Sciences, Amsterdam 1081 HV, the Netherlands
- Jan Wolff
- Fraunhofer Research Institution for Additive Manufacturing Technologies IAPT, Am Schleusengraben 13, Hamburg 21029, Germany; Department of Oral and Maxillofacial Surgery, Division for Regenerative Orofacial Medicine, University Hospital Hamburg-Eppendorf, Hamburg 20246, Germany; Department of Dentistry and Oral Health, Aarhus University, Vennelyst Boulevard 9, DK-8000 Aarhus C, Denmark
- Juha Koivisto
- Department of Physics, University of Helsinki, Helsinki 20560, Finland
- Felix Lucka
- Centrum Wiskunde & Informatica (CWI), Amsterdam 1090 GB, the Netherlands; University College London, London WC1E 6BT, United Kingdom
- Tymour Forouzanfar
- Department of Oral and Maxillofacial Surgery/Pathology, 3D Innovationlab, Amsterdam UMC and Academic Centre for Dentistry Amsterdam (ACTA), Vrije Universiteit Amsterdam, Amsterdam Movement Sciences, Amsterdam 1081 HV, the Netherlands
13. Poel R, Rüfenacht E, Hermann E, Scheib S, Manser P, Aebersold DM, Reyes M. The predictive value of segmentation metrics on dosimetry in organs at risk of the brain. Med Image Anal 2021; 73:102161. PMID: 34293536; DOI: 10.1016/j.media.2021.102161.
Abstract
BACKGROUND Fully automatic medical image segmentation has been a long-standing pursuit in radiotherapy (RT). Recent developments involving deep learning show promising results, yielding consistent and time-efficient contours. To train and validate these systems, several geometry-based metrics, such as the Dice similarity coefficient (DSC), the Hausdorff distance, and related measures, are currently the standard in automated medical image segmentation challenges. However, the relevance of these metrics in RT is questionable. The quality of automated segmentation results needs to reflect clinically relevant treatment outcomes, such as dosimetry and the related tumor control and toxicity. In this study, we present results investigating the correlation between popular geometric segmentation metrics and dose parameters for organs-at-risk (OARs) in brain tumor patients, and investigate properties that might be predictive of dose changes in brain radiotherapy. METHODS A retrospective database of glioblastoma multiforme patients was stratified by planning difficulty, from which 12 cases were selected and reference sets of OARs and radiation targets were defined. To assess the relation between segmentation quality, as measured by standard segmentation assessment metrics, and RT plan quality, clinically realistic yet alternative contours for each OAR of the selected cases were obtained through three methods: (i) manual contours by two additional human raters, (ii) realistic manual manipulations of the reference contours, and (iii) deep learning-based segmentation. On the reference structure set, a reference plan was generated and re-optimized for each corresponding alternative contour set. The correlation between segmentation metrics and dosimetric changes was obtained and analyzed for each OAR by means of the mean dose and the maximum dose to 1% of the volume (Dmax 1%). Furthermore, we conducted specific experiments to investigate the dosimetric effect of alternative OAR contours with respect to proximity to the target, size, particular shape, and relative location to the target. RESULTS We found a low correlation between the DSC of the alternative OAR contours and dosimetric changes. The Pearson correlation coefficient between the mean OAR dose effect and the Dice was -0.11; for Dmax 1%, the correlation was -0.13. Similarly low correlations were found for 22 other segmentation metrics. The organ-based analysis showed better correlation for the larger OARs (i.e., brainstem and eyes) than for the smaller OARs (i.e., optic nerves and chiasm). Furthermore, we found that proximity to the target does not make contour variations more susceptible to the dose effect; however, the direction of the contour variation with respect to the relative location of the target seems to be strongly correlated with the dose effect. CONCLUSIONS This study shows a low correlation between segmentation metrics and dosimetric changes for OARs in brain tumor patients. The results suggest that the current metrics for image segmentation in RT, as well as deep learning systems employing such metrics, need to be revisited in favor of clinically oriented metrics that better reflect how segmentation quality affects dose distribution and the related tumor control and toxicity.
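The two quantities correlated above are easy to compute; the sketch below derives a near-maximum dose (the study's Dmax 1%, approximated here as the 99th dose percentile within the OAR) and the Pearson correlation against per-case DSC values. The numeric arrays are hypothetical.

```python
# Sketch: near-maximum OAR dose (Dmax 1%, approximated as the 99th dose
# percentile inside the OAR) and its correlation with contour DSC.
# The per-case arrays below are hypothetical.
import numpy as np
from scipy.stats import pearsonr

def dmax_1pct(dose, oar_mask):
    """Minimum dose received by the hottest 1% of the OAR volume."""
    return float(np.percentile(dose[oar_mask.astype(bool)], 99))

dsc = np.array([0.91, 0.84, 0.77, 0.88, 0.69])          # contour quality
delta_dmax = np.array([0.4, -1.2, 2.1, -0.3, 0.8])       # dose change (Gy)
r, p = pearsonr(dsc, delta_dmax)
print(f"Pearson r = {r:.2f} (p = {p:.2f})")
```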
Affiliation(s)
- Robert Poel
- Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern, Bern, Switzerland; ARTORG Center for Biomedical Research, University of Bern, Bern, Switzerland
- Elias Rüfenacht
- ARTORG Center for Biomedical Research, University of Bern, Bern, Switzerland
- Evelyn Hermann
- Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern, Bern, Switzerland; Radiotherapy Department, Riviera-Chablais Hospital, Rennaz, Switzerland
- Stefan Scheib
- Varian Medical Systems Imaging Laboratory GmbH, Switzerland
- Peter Manser
- Division of Medical Radiation Physics and Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern, Bern, Switzerland
- Daniel M Aebersold
- Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern, Bern, Switzerland
- Mauricio Reyes
- ARTORG Center for Biomedical Research, University of Bern, Bern, Switzerland
14. Rahimzadeh M, Attar A, Sakhaei SM. A fully automated deep learning-based network for detecting COVID-19 from a new and large lung CT scan dataset. Biomed Signal Process Control 2021; 68:102588. PMID: 33821166; PMCID: PMC8011666; DOI: 10.1016/j.bspc.2021.102588.
Abstract
This paper proposes a high-speed, accurate, fully automated method to detect COVID-19 from a patient's chest CT scan images. We introduce a new dataset containing 48,260 CT scan images from 282 normal persons and 15,589 images from 95 patients with COVID-19 infections. In the first stage, the system runs our proposed image processing algorithm, which analyzes the view of the lung to discard CT images in which the inside of the lung is not properly visible. This step helps reduce processing time and false detections. In the next stage, we introduce a novel architecture for improving the classification accuracy of convolutional networks on images containing small important objects. Our architecture applies a new feature pyramid network, designed for classification problems, to the ResNet50V2 model so that the model can investigate different resolutions of the image without losing the data of small objects. As COVID-19 infections appear at various scales, and many of them are tiny, our method helps increase classification performance remarkably. After these two phases, the system determines the patient's condition using a selected threshold. We evaluate our system in two different ways on Xception, ResNet50V2, and our model. In the single-image classification stage, our model achieved 98.49% accuracy on more than 7,996 test images. In the patient condition identification phase, the system correctly identified 234 of 245 patients at high speed. Our dataset is accessible at https://github.com/mr7495/COVID-CTset.
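The final patient-level decision described above amounts to thresholding the fraction of slices classified as infected. A minimal sketch, with illustrative thresholds rather than the values selected in the paper:

```python
# Sketch: patient-level decision by thresholding the infected-slice
# fraction. Thresholds are illustrative, not the paper's selected values.
import numpy as np

def patient_is_positive(slice_probs, slice_thr=0.5, patient_thr=0.3):
    infected = np.asarray(slice_probs) >= slice_thr   # per-slice calls
    return infected.mean() >= patient_thr             # patient-level call
```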
Affiliation(s)
- Mohammad Rahimzadeh
- School of Computer Engineering, Iran University of Science and Technology, Iran
- Abolfazl Attar
- Department of Electrical Engineering, Sharif University of Technology, Iran
15. Menze B, Isensee F, Wiest R, Wiestler B, Maier-Hein K, Reyes M, Bakas S. Analyzing magnetic resonance imaging data from glioma patients using deep learning. Comput Med Imaging Graph 2021; 88:101828. PMID: 33571780; PMCID: PMC8040671; DOI: 10.1016/j.compmedimag.2020.101828.
Abstract
The quantitative analysis of images acquired in the diagnosis and treatment of patients with brain tumors has seen a significant rise in the clinical use of computational tools. The vast majority of these tools are built on machine learning methods and, in particular, deep learning algorithms. This review offers clinical background information on key diagnostic biomarkers in the diagnosis of glioma, the most common primary brain tumor. It provides an overview of publicly available resources and datasets for developing new computational tools and image biomarkers, with emphasis on those related to the Multimodal Brain Tumor Segmentation (BraTS) Challenge. It further offers an overview of state-of-the-art methods in glioma image segmentation, again emphasizing publicly available tools and deep learning algorithms that emerged in the context of the BraTS challenge.
Affiliation(s)
- Bjoern Menze
- Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
- Roland Wiest
- Support Center for Advanced Neuroimaging, Institute of Diagnostic and Interventional Neuroradiology, Inselspital, Bern, Switzerland
- Spyridon Bakas
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
16. Gao Y, Huang R, Yang Y, Zhang J, Shao K, Tao C, Chen Y, Metaxas DN, Li H, Chen M. FocusNetv2: Imbalanced large and small organ segmentation with adversarial shape constraint for head and neck CT images. Med Image Anal 2020; 67:101831. PMID: 33129144; DOI: 10.1016/j.media.2020.101831.
Abstract
Radiotherapy is a treatment in which radiation is used to eliminate cancer cells. The delineation of organs-at-risk (OARs) is a vital step in radiotherapy treatment planning to avoid damage to healthy organs. For nasopharyngeal cancer, more than 20 OARs need to be precisely segmented in advance. The challenge of this task lies in the complex anatomical structure, low-contrast organ contours, and the extremely imbalanced sizes of large and small organs. Common segmentation methods that treat all organs equally generally lead to inaccurate small-organ labeling. We propose a novel two-stage deep neural network, FocusNetv2, to solve this challenging problem by automatically locating, ROI-pooling, and segmenting small organs with specifically designed small-organ localization and segmentation sub-networks, while maintaining the accuracy of large-organ segmentation. In addition to our original FocusNet, we employ a novel adversarial shape constraint on small organs to ensure consistency between estimated small-organ shapes and organ shape prior knowledge. Our proposed framework is extensively tested on both a self-collected dataset of 1,164 CT scans and the MICCAI Head and Neck Auto Segmentation Challenge 2015 dataset, showing superior performance compared with state-of-the-art head and neck OAR segmentation methods.
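The locate-then-segment idea behind FocusNetv2 can be sketched as cropping a fixed-size ROI around a small organ's coarse centroid, segmenting inside the crop at full resolution, and pasting the result back. The sketch below shows only the cropping step, assuming the volume is at least the crop size; it is not the authors' implementation.

```python
# Sketch: the cropping step of a locate-then-segment pipeline -- a fixed
# ROI around a small organ's coarse centroid. Assumes the volume is at
# least the crop size; not the authors' implementation.
import numpy as np

def crop_roi(volume, coarse_mask, size=(64, 64, 64)):
    center = np.argwhere(coarse_mask).mean(axis=0).round().astype(int)
    starts = [int(np.clip(c - s // 2, 0, d - s))
              for c, s, d in zip(center, size, volume.shape)]
    roi = tuple(slice(st, st + s) for st, s in zip(starts, size))
    return volume[roi], roi  # roi lets the fine mask be pasted back
```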
Affiliation(s)
- Yunhe Gao
- Cancer Hospital of University of the Chinese Academy of Sciences (Zhejiang Cancer Hospital), China; Department of Computer Science, Rutgers University, Piscataway, NJ, USA; Department of Electronic Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
- Yiwei Yang
- Cancer Hospital of University of the Chinese Academy of Sciences (Zhejiang Cancer Hospital), China
- Jie Zhang
- Cancer Hospital of University of the Chinese Academy of Sciences (Zhejiang Cancer Hospital), China
- Kainan Shao
- Cancer Hospital of University of the Chinese Academy of Sciences (Zhejiang Cancer Hospital), China
- Changjuan Tao
- Cancer Hospital of University of the Chinese Academy of Sciences (Zhejiang Cancer Hospital), China
- Yuanyuan Chen
- Cancer Hospital of University of the Chinese Academy of Sciences (Zhejiang Cancer Hospital), China
- Hongsheng Li
- Department of Electronic Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
- Ming Chen
- Cancer Hospital of University of the Chinese Academy of Sciences (Zhejiang Cancer Hospital), China