1
Shahzadi I, Seidlitz A, Beuthien-Baumann B, Zwanenburg A, Platzek I, Kotzerke J, Baumann M, Krause M, Troost EGC, Löck S. Radiomics for residual tumour detection and prognosis in newly diagnosed glioblastoma based on postoperative [11C]methionine PET and T1c-w MRI. Sci Rep 2024;14:4576. PMID: 38403632; PMCID: PMC10894870; DOI: 10.1038/s41598-024-55092-8. Received 08/01/2023; accepted 02/20/2024.
Abstract
Personalized treatment strategies based on non-invasive biomarkers have the potential to improve the management of patients with newly diagnosed glioblastoma (GBM). The residual tumour burden after surgery is a prognostic imaging biomarker in GBM. In clinical patient management, however, its assessment is a manual, time-consuming process that is prone to inter-rater variability. Furthermore, predicting patient outcome prior to radiotherapy may identify patient subgroups that could benefit from escalated radiotherapy doses. In this study, we therefore investigate the capabilities of traditional radiomics and 3D convolutional neural networks for automatic detection of residual tumour status and for prognosis of time-to-recurrence (TTR) and overall survival (OS) in GBM, using postoperative [11C]methionine positron emission tomography (MET-PET) and gadolinium-enhanced T1-weighted (T1c-w) magnetic resonance imaging (MRI). On the independent test data, the 3D-DenseNet model based on MET-PET achieved the best performance for residual tumour detection, while the logistic regression model with conventional radiomics features performed best for T1c-w MRI (AUC: MET-PET 0.95, T1c-w MRI 0.78). For the prognosis of TTR and OS, the 3D-DenseNet model based on MET-PET, integrated with age and MGMT status, achieved the best performance (concordance index: TTR 0.68, OS 0.65). In conclusion, we showed that both deep learning and conventional radiomics have potential value for supporting image-based assessment and prognosis in GBM. After prospective validation, these models may be considered for treatment personalization.
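Both reported metrics are rank-based: the AUC scores the separation of residual-tumour classes, while the concordance index scores how well a predicted risk ranks patients by time-to-event. As a minimal illustration (not the study's code), Harrell's concordance index for right-censored survival data can be computed as follows; all patient values below are invented for the example:

```python
def concordance_index(times, events, risks):
    """Harrell's C-index: fraction of comparable patient pairs whose
    predicted risks are ordered consistently with their observed outcomes.
    times  - follow-up time per patient
    events - 1 if the event (e.g. recurrence/death) was observed, 0 if censored
    risks  - model-predicted risk score (higher = earlier expected event)
    """
    concordant, tied, comparable = 0, 0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable only if patient i had an observed event
            # strictly before patient j's follow-up ended.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    tied += 1
    return (concordant + 0.5 * tied) / comparable

# Toy data: shorter time-to-event paired with a higher risk score.
t = [5.0, 8.0, 12.0, 20.0]
e = [1, 1, 0, 1]            # third patient is censored
r = [0.9, 0.7, 0.4, 0.2]
print(concordance_index(t, e, r))  # perfectly concordant -> 1.0
```

A value of 0.5 corresponds to random ranking, so the reported 0.65-0.68 indicates a modest but real prognostic signal.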
Affiliation(s)
- Iram Shahzadi
  - OncoRay - National Center for Radiation Research in Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Helmholtz-Zentrum Dresden - Rossendorf, Dresden, Germany
  - German Cancer Consortium (DKTK) Partner Site Dresden, Germany, and German Cancer Research Center (DKFZ), Heidelberg, Germany
  - National Center for Tumor Diseases (NCT), Partner Site Dresden, Germany: German Cancer Research Center (DKFZ), Heidelberg, Germany, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany, and Helmholtz Association/Helmholtz-Zentrum Dresden - Rossendorf (HZDR), Dresden, Germany
- Annekatrin Seidlitz
  - OncoRay - National Center for Radiation Research in Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Helmholtz-Zentrum Dresden - Rossendorf, Dresden, Germany
  - German Cancer Consortium (DKTK) Partner Site Dresden, Germany, and German Cancer Research Center (DKFZ), Heidelberg, Germany
  - National Center for Tumor Diseases (NCT), Partner Site Dresden, Germany: German Cancer Research Center (DKFZ), Heidelberg, Germany, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany, and Helmholtz Association/Helmholtz-Zentrum Dresden - Rossendorf (HZDR), Dresden, Germany
  - Department of Radiotherapy and Radiation Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Bettina Beuthien-Baumann
  - Department of Nuclear Medicine, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
  - Department of Radiology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Alex Zwanenburg
  - OncoRay - National Center for Radiation Research in Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Helmholtz-Zentrum Dresden - Rossendorf, Dresden, Germany
  - German Cancer Consortium (DKTK) Partner Site Dresden, Germany, and German Cancer Research Center (DKFZ), Heidelberg, Germany
  - National Center for Tumor Diseases (NCT), Partner Site Dresden, Germany: German Cancer Research Center (DKFZ), Heidelberg, Germany, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany, and Helmholtz Association/Helmholtz-Zentrum Dresden - Rossendorf (HZDR), Dresden, Germany
- Ivan Platzek
  - Institute of Radiology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Jörg Kotzerke
  - National Center for Tumor Diseases (NCT), Partner Site Dresden, Germany: German Cancer Research Center (DKFZ), Heidelberg, Germany, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany, and Helmholtz Association/Helmholtz-Zentrum Dresden - Rossendorf (HZDR), Dresden, Germany
  - Department of Nuclear Medicine, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Michael Baumann
  - OncoRay - National Center for Radiation Research in Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Helmholtz-Zentrum Dresden - Rossendorf, Dresden, Germany
  - Division of Radiooncology/Radiobiology, German Cancer Research Center (DKFZ), Heidelberg, Germany
  - Department of Radiotherapy and Radiation Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Mechthild Krause
  - OncoRay - National Center for Radiation Research in Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Helmholtz-Zentrum Dresden - Rossendorf, Dresden, Germany
  - German Cancer Consortium (DKTK) Partner Site Dresden, Germany, and German Cancer Research Center (DKFZ), Heidelberg, Germany
  - National Center for Tumor Diseases (NCT), Partner Site Dresden, Germany: German Cancer Research Center (DKFZ), Heidelberg, Germany, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany, and Helmholtz Association/Helmholtz-Zentrum Dresden - Rossendorf (HZDR), Dresden, Germany
  - Department of Radiotherapy and Radiation Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
  - Helmholtz-Zentrum Dresden-Rossendorf, Institute of Radiooncology, Dresden, Germany
- Esther G C Troost
  - OncoRay - National Center for Radiation Research in Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Helmholtz-Zentrum Dresden - Rossendorf, Dresden, Germany
  - German Cancer Consortium (DKTK) Partner Site Dresden, Germany, and German Cancer Research Center (DKFZ), Heidelberg, Germany
  - National Center for Tumor Diseases (NCT), Partner Site Dresden, Germany: German Cancer Research Center (DKFZ), Heidelberg, Germany, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany, and Helmholtz Association/Helmholtz-Zentrum Dresden - Rossendorf (HZDR), Dresden, Germany
  - Department of Radiotherapy and Radiation Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
  - Helmholtz-Zentrum Dresden-Rossendorf, Institute of Radiooncology, Dresden, Germany
- Steffen Löck
  - OncoRay - National Center for Radiation Research in Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Helmholtz-Zentrum Dresden - Rossendorf, Dresden, Germany
  - German Cancer Consortium (DKTK) Partner Site Dresden, Germany, and German Cancer Research Center (DKFZ), Heidelberg, Germany
  - National Center for Tumor Diseases (NCT), Partner Site Dresden, Germany: German Cancer Research Center (DKFZ), Heidelberg, Germany, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany, and Helmholtz Association/Helmholtz-Zentrum Dresden - Rossendorf (HZDR), Dresden, Germany
  - Department of Radiotherapy and Radiation Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
2
Helland RH, Ferles A, Pedersen A, Kommers I, Ardon H, Barkhof F, Bello L, Berger MS, Dunås T, Nibali MC, Furtner J, Hervey-Jumper S, Idema AJS, Kiesel B, Tewari RN, Mandonnet E, Müller DMJ, Robe PA, Rossi M, Sagberg LM, Sciortino T, Aalders T, Wagemakers M, Widhalm G, Witte MG, Zwinderman AH, Majewska PL, Jakola AS, Solheim O, Hamer PCDW, Reinertsen I, Eijgelaar RS, Bouget D. Segmentation of glioblastomas in early post-operative multi-modal MRI with deep neural networks. Sci Rep 2023;13:18897. PMID: 37919325; PMCID: PMC10622432; DOI: 10.1038/s41598-023-45456-x. Received 05/16/2023; accepted 10/19/2023.
Abstract
Extent of resection after surgery is one of the main prognostic factors for patients diagnosed with glioblastoma. Estimating it requires accurate segmentation and classification of residual tumor in post-operative MR images. The current standard method for this estimation is subject to high inter- and intra-rater variability, and an automated method for segmenting residual tumor in early post-operative MRI could lead to more accurate estimates of the extent of resection. In this study, two state-of-the-art neural network architectures for pre-operative segmentation were trained for this task. The models were extensively validated on a multicenter dataset of nearly 1000 patients from 12 hospitals in Europe and the United States. The best segmentation performance was a 61% Dice score, and the best classification performance was about 80% balanced accuracy, with a demonstrated ability to generalize across hospitals. In addition, the segmentation performance of the best models was on par with that of human expert raters. The predicted segmentations can be used to accurately classify patients into those with residual tumor and those with gross total resection.
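The Dice score used above quantifies voxel-wise overlap between a predicted and a reference segmentation, and the residual-tumor classification can be derived from the predicted segmentation volume. A minimal sketch (not the paper's implementation; the volume threshold below is purely illustrative), using toy 3D masks:

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

def residual_tumour_present(pred_mask, voxel_volume_ml, threshold_ml=0.175):
    """Classify a patient as 'residual tumour' when the segmented volume
    exceeds a threshold (the threshold value here is a made-up example)."""
    return pred_mask.sum() * voxel_volume_ml > threshold_ml

truth = np.zeros((4, 4, 4), dtype=bool)
truth[1:3, 1:3, 1:3] = True   # 8 reference voxels
pred = np.zeros((4, 4, 4), dtype=bool)
pred[1:3, 1:3, 1:2] = True    # 4 predicted voxels, all inside the reference
print(dice_score(pred, truth))  # 2*4 / (4+8) = 0.666...
```

Aggregating such per-patient Dice scores across centers is one way the reported multicenter validation could be summarized.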
Affiliation(s)
- Ragnhild Holden Helland
  - Department of Health Research, SINTEF Digital, 7465, Trondheim, Norway
  - Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, NO-7491, Trondheim, Norway
- Alexandros Ferles
  - Cancer Center Amsterdam, Brain Tumor Center, Amsterdam University Medical Centers, 1081 HV, Amsterdam, The Netherlands
  - Department of Radiology and Nuclear Medicine, Amsterdam University Medical Centers, Vrije Universiteit, 1081 HV, Amsterdam, The Netherlands
- André Pedersen
  - Department of Health Research, SINTEF Digital, 7465, Trondheim, Norway
- Ivar Kommers
  - Cancer Center Amsterdam, Brain Tumor Center, Amsterdam University Medical Centers, 1081 HV, Amsterdam, The Netherlands
  - Department of Neurosurgery, Amsterdam University Medical Centers, Vrije Universiteit, 1081 HV, Amsterdam, The Netherlands
- Hilko Ardon
  - Department of Neurosurgery, Twee Steden Hospital, 5042 AD, Tilburg, The Netherlands
- Frederik Barkhof
  - Department of Radiology and Nuclear Medicine, Amsterdam University Medical Centers, Vrije Universiteit, 1081 HV, Amsterdam, The Netherlands
  - Institutes of Neurology and Healthcare Engineering, University College London, London, WC1E 6BT, UK
- Lorenzo Bello
  - Neurosurgical Oncology Unit, Department of Oncology and Hemato-oncology, Humanitas Research Hospital, Università Degli Studi di Milano, 20122, Milan, Italy
- Mitchel S Berger
  - Department of Neurological Surgery, University of California San Francisco, San Francisco, CA, 94143, USA
- Tora Dunås
  - Department of Clinical Neuroscience, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, 405 30, Gothenburg, Sweden
- Julia Furtner
  - Department of Biomedical Imaging and Image-guided Therapy, Medical University Vienna, 1090, Vienna, Austria
  - Research Center for Medical Image Analysis and Artificial Intelligence (MIAAI), Faculty of Medicine and Dentistry, Danube Private University, 3500, Krems, Austria
- Shawn Hervey-Jumper
  - Department of Neurological Surgery, University of California San Francisco, San Francisco, CA, 94143, USA
- Albert J S Idema
  - Department of Neurosurgery, Northwest Clinics, 1815 JD, Alkmaar, The Netherlands
- Barbara Kiesel
  - Department of Neurosurgery, Medical University Vienna, 1090, Vienna, Austria
- Rishi Nandoe Tewari
  - Department of Neurosurgery, Haaglanden Medical Center, 2512 VA, The Hague, The Netherlands
- Emmanuel Mandonnet
  - Department of Neurological Surgery, Hôpital Lariboisière, 75010, Paris, France
- Domenique M J Müller
  - Cancer Center Amsterdam, Brain Tumor Center, Amsterdam University Medical Centers, 1081 HV, Amsterdam, The Netherlands
  - Department of Neurosurgery, Amsterdam University Medical Centers, Vrije Universiteit, 1081 HV, Amsterdam, The Netherlands
- Pierre A Robe
  - Department of Neurology and Neurosurgery, University Medical Center Utrecht, 3584 CX, Utrecht, The Netherlands
- Marco Rossi
  - Department of Medical Biotechnology and Translational Medicine, Università Degli Studi di Milano, 20122, Milan, Italy
- Lisa M Sagberg
  - Department of Neurosurgery, St. Olavs hospital, Trondheim University Hospital, 7030, Trondheim, Norway
  - Department of Public Health and Nursing, Norwegian University of Science and Technology, 7491, Trondheim, Norway
- Tom Aalders
  - Department of Neurosurgery, Isala, 8025 AB, Zwolle, The Netherlands
- Michiel Wagemakers
  - Department of Neurosurgery, University Medical Center Groningen, University of Groningen, 9713 GZ, Groningen, The Netherlands
- Georg Widhalm
  - Department of Neurosurgery, Medical University Vienna, 1090, Vienna, Austria
- Marnix G Witte
  - Department of Radiation Oncology, The Netherlands Cancer Institute, 1066 CX, Amsterdam, The Netherlands
- Aeilko H Zwinderman
  - Department of Clinical Epidemiology and Biostatistics, Amsterdam University Medical Centers, University of Amsterdam, 1105 AZ, Amsterdam, The Netherlands
- Paulina L Majewska
  - Department of Neurology and Neurosurgery, University Medical Center Utrecht, 3584 CX, Utrecht, The Netherlands
  - Department of Neuromedicine and Movement Science, Norwegian University of Science and Technology, 7491, Trondheim, Norway
- Asgeir S Jakola
  - Department of Clinical Neuroscience, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, 405 30, Gothenburg, Sweden
  - Department of Neurosurgery, Sahlgrenska University Hospital, Gothenburg, Sweden
- Ole Solheim
  - Department of Neurology and Neurosurgery, University Medical Center Utrecht, 3584 CX, Utrecht, The Netherlands
  - Department of Neuromedicine and Movement Science, Norwegian University of Science and Technology, 7491, Trondheim, Norway
- Philip C De Witt Hamer
  - Cancer Center Amsterdam, Brain Tumor Center, Amsterdam University Medical Centers, 1081 HV, Amsterdam, The Netherlands
  - Department of Neurosurgery, Amsterdam University Medical Centers, Vrije Universiteit, 1081 HV, Amsterdam, The Netherlands
- Ingerid Reinertsen
  - Department of Health Research, SINTEF Digital, 7465, Trondheim, Norway
  - Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, NO-7491, Trondheim, Norway
- Roelant S Eijgelaar
  - Cancer Center Amsterdam, Brain Tumor Center, Amsterdam University Medical Centers, 1081 HV, Amsterdam, The Netherlands
  - Department of Neurosurgery, Amsterdam University Medical Centers, Vrije Universiteit, 1081 HV, Amsterdam, The Netherlands
- David Bouget
  - Department of Health Research, SINTEF Digital, 7465, Trondheim, Norway
3
Sayah A, Bencheqroun C, Bhuvaneshwar K, Belouali A, Bakas S, Sako C, Davatzikos C, Alaoui A, Madhavan S, Gusev Y. Enhancing the REMBRANDT MRI collection with expert segmentation labels and quantitative radiomic features. Sci Data 2022;9:338. PMID: 35701399; PMCID: PMC9198015; DOI: 10.1038/s41597-022-01415-1. Received 10/12/2021; accepted 05/24/2022.
Abstract
Malignancy of the brain and CNS is unfortunately a common diagnosis. A large subset of these lesions are high-grade tumors, which portend poor prognoses and low survival rates and are estimated to be the tenth leading cause of death worldwide. The complex nature of the brain tissue environment in which these lesions arise offers a rich opportunity for translational research. Magnetic Resonance Imaging (MRI) can provide a comprehensive view of the abnormal regions in the brain; its application in translational brain cancer research is therefore considered essential for the diagnosis and monitoring of disease. Recent years have seen rapid growth in the field of radiogenomics, especially in cancer, and scientists have successfully integrated quantitative data extracted from medical images (also known as radiomics) with genomics to answer new and clinically relevant questions. In this paper, we took raw MRI scans from the publicly available REMBRANDT data collection and performed volumetric segmentation to identify subregions of the brain. Radiomic features were then extracted to represent the MRIs in a quantitative yet summarized format. The resulting dataset enables further biomedical and integrative data analysis and is being made public via the NeuroImaging Tools & Resources Collaboratory (NITRC) repository ( https://www.nitrc.org/projects/rembrandt_brain/ ).
Affiliation(s)
- Anousheh Sayah
  - Medstar Georgetown University Hospital, Washington, DC, USA
- Camelia Bencheqroun
  - Innovation Center for Biomedical Informatics (ICBI), Georgetown University, Washington, DC, USA
- Krithika Bhuvaneshwar
  - Innovation Center for Biomedical Informatics (ICBI), Georgetown University, Washington, DC, USA
- Anas Belouali
  - Innovation Center for Biomedical Informatics (ICBI), Georgetown University, Washington, DC, USA
- Spyridon Bakas
  - Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Chiharu Sako
  - Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Christos Davatzikos
  - Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Adil Alaoui
  - Innovation Center for Biomedical Informatics (ICBI), Georgetown University, Washington, DC, USA
- Subha Madhavan
  - Innovation Center for Biomedical Informatics (ICBI), Georgetown University, Washington, DC, USA
- Yuriy Gusev
  - Innovation Center for Biomedical Informatics (ICBI), Georgetown University, Washington, DC, USA
4
Lotan E, Zhang B, Dogra S, Wang W, Carbone D, Fatterpekar G, Oermann E, Lui Y. Development and Practical Implementation of a Deep Learning-Based Pipeline for Automated Pre- and Postoperative Glioma Segmentation. AJNR Am J Neuroradiol 2022;43:24-32. PMID: 34857514; PMCID: PMC8757542; DOI: 10.3174/ajnr.a7363. Received 05/05/2021; accepted 09/22/2021.
Abstract
BACKGROUND AND PURPOSE Quantitative volumetric segmentation of gliomas has important implications for diagnosis, treatment, and prognosis. We present a deep-learning model that accommodates automated preoperative and postoperative glioma segmentation, with a pipeline for clinical implementation. Developed and engineered in concert, the work seeks to accelerate the clinical realization of such tools. MATERIALS AND METHODS A deep learning model, autoencoder regularization-cascaded anisotropic, was developed, trained, and tested by fusing key elements of autoencoder regularization with a cascaded anisotropic convolutional neural network. We constructed a dataset of 437 cases, with 40 cases reserved as a held-out test set and the remainder split 80:20 for training and validation. We performed data augmentation and hyperparameter optimization and used the mean Dice score to evaluate against baseline models. To facilitate clinical adoption, we developed the model with an end-to-end pipeline including routing, preprocessing, and end-user interaction. RESULTS The autoencoder regularization-cascaded anisotropic model achieved median/mean Dice scores of 0.88/0.83 (SD, 0.09), 0.89/0.84 (SD, 0.08), and 0.81/0.72 (SD, 0.1) for the whole-tumor, tumor core/resection cavity, and enhancing tumor subregions, respectively, including both preoperative and postoperative follow-up cases. The overall processing time per case was ∼10 minutes, including data routing (∼1 minute), preprocessing (∼6 minutes), segmentation (∼1-2 minutes), and postprocessing (∼1 minute). Implementation challenges are discussed. CONCLUSIONS We show the feasibility and advantages of building a coordinated model with a clinical pipeline for rapid and accurate deep learning segmentation of both preoperative and postoperative gliomas. The ability of the model to accommodate postoperative glioma cases is clinically important for follow-up. An end-to-end approach, such as the one used here, may lead toward successful clinical translation of tools for quantitative volume measures in glioma.
Affiliation(s)
- E. Lotan
  - From the Department of Radiology (E.L., B.Z., S.D., D.C., G.F., E.K.O., Y.W.L.)
- B. Zhang
  - From the Department of Radiology (E.L., B.Z., S.D., D.C., G.F., E.K.O., Y.W.L.)
- S. Dogra
  - From the Department of Radiology (E.L., B.Z., S.D., D.C., G.F., E.K.O., Y.W.L.)
- D. Carbone
  - From the Department of Radiology (E.L., B.Z., S.D., D.C., G.F., E.K.O., Y.W.L.)
- G. Fatterpekar
  - From the Department of Radiology (E.L., B.Z., S.D., D.C., G.F., E.K.O., Y.W.L.)
- E.K. Oermann
  - From the Department of Radiology (E.L., B.Z., S.D., D.C., G.F., E.K.O., Y.W.L.), Neurosurgery, School of Medicine (E.K.O.), NYU Langone Health, New York, New York
- Y.W. Lui
  - From the Department of Radiology (E.L., B.Z., S.D., D.C., G.F., E.K.O., Y.W.L.)
5
Bhalodiya JM, Lim Choi Keung SN, Arvanitis TN. Magnetic resonance image-based brain tumour segmentation methods: A systematic review. Digit Health 2022;8:20552076221074122. PMID: 35340900; PMCID: PMC8943308; DOI: 10.1177/20552076221074122. Received 09/21/2021; revised 11/20/2021; accepted 12/27/2021.
Abstract
Background Image segmentation is an essential step in the analysis and subsequent characterisation of brain tumours through magnetic resonance imaging. In the literature, segmentation methods are empowered by open-access magnetic resonance imaging datasets, such as the brain tumour segmentation dataset. Moreover, with the increased use of artificial intelligence methods in medical imaging, access to larger data repositories has become vital for method development. Purpose To determine which automated brain tumour segmentation techniques medical imaging specialists and clinicians can use to identify tumour components, compared to manual segmentation. Methods We conducted a systematic review of 572 brain tumour segmentation studies published during 2015-2020. We reviewed segmentation techniques using T1-weighted, T2-weighted, gadolinium-enhanced T1-weighted, fluid-attenuated inversion recovery, diffusion-weighted and perfusion-weighted magnetic resonance imaging sequences. Moreover, we assessed physics- or mathematics-based methods, deep learning methods, and software-based or semi-automatic methods, as applied to magnetic resonance imaging techniques. In particular, we synthesised each method according to the magnetic resonance imaging sequences utilised, study population, technical approach (such as deep learning) and performance score measures (such as Dice score). Statistical tests We compared the median Dice score in segmenting the whole tumour, tumour core and enhanced tumour. Results We found that T1-weighted, gadolinium-enhanced T1-weighted, T2-weighted and fluid-attenuated inversion recovery magnetic resonance imaging are used the most in various segmentation algorithms, whereas there is limited use of perfusion-weighted and diffusion-weighted magnetic resonance imaging. Moreover, we found that the U-Net deep learning architecture is cited the most and has high accuracy (Dice score 0.9) for magnetic resonance imaging-based brain tumour segmentation. Conclusion U-Net is a promising deep learning architecture for magnetic resonance imaging-based brain tumour segmentation. The community should be encouraged to contribute open-access datasets so that training, testing and validation of deep learning algorithms can be improved, particularly for diffusion- and perfusion-weighted magnetic resonance imaging, where limited datasets are available.
Affiliation(s)
- Jayendra M Bhalodiya
  - Institute of Digital Healthcare, Warwick Manufacturing Group, The University of Warwick, UK
- Sarah N Lim Choi Keung
  - Institute of Digital Healthcare, Warwick Manufacturing Group, The University of Warwick, UK
- Theodoros N Arvanitis
  - Institute of Digital Healthcare, Warwick Manufacturing Group, The University of Warwick, UK
6
Güley O, Pati S, Bakas S. Classification of Infection and Ischemia in Diabetic Foot Ulcers Using VGG Architectures. In: Diabetic Foot Ulcers Grand Challenge: Second Challenge, DFUC 2021, Held in Conjunction with MICCAI 2021, Strasbourg, France, September 27, 2021: Proceedings. 2022;13183:76-89. PMID: 35465060; PMCID: PMC9026672; DOI: 10.1007/978-3-030-94907-5_6.
Abstract
Diabetic foot ulceration (DFU) is a serious complication of diabetes and a major challenge for healthcare systems around the world. Further infection and ischemia in DFU can significantly prolong treatment and often result in limb amputation, with more severe cases resulting in terminal illness. Thus, early identification and regular monitoring are necessary to improve care and reduce the burden on healthcare systems. With that in mind, this study addresses the problem of infection and ischemia classification in diabetic foot ulcers, across four distinct classes. We evaluated a series of VGG architectures with different layers, following numerous training strategies, including k-fold cross-validation, data pre-processing options, augmentation techniques, and weighted loss calculations. In favor of transparency and reproducibility, we make all the implementations available through the Generally Nuanced Deep Learning Framework (GaNDLF; github.com/CBICA/GaNDLF). Our best model was evaluated during the DFU Challenge 2021, and was ranked 2nd, 5th, and 7th based on the macro-averaged AUC (area under the curve), macro-averaged F1 score, and macro-averaged recall metrics, respectively. Our findings support that current state-of-the-art architectures provide good results for the DFU image classification task, and that further experimentation is required to study the effects of pre-processing and augmentation strategies.
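The challenge ranking above relies on macro-averaged metrics, which compute a score per class and then take the unweighted mean so that rare classes count as much as common ones. A minimal sketch of macro-averaged recall (not the challenge's evaluation code; the class names and labels below are invented for illustration):

```python
def macro_recall(y_true, y_pred, classes):
    """Macro-averaged recall: per-class recall, then the unweighted mean,
    so each class contributes equally regardless of its frequency."""
    per_class = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        per_class.append(tp / (tp + fn) if tp + fn else 0.0)
    return sum(per_class) / len(per_class)

# Hypothetical labels for a four-class DFU task:
classes = ["none", "infection", "ischaemia", "both"]
y_true = ["none", "none", "infection", "ischaemia", "both", "both"]
y_pred = ["none", "infection", "infection", "ischaemia", "both", "none"]
print(macro_recall(y_true, y_pred, classes))  # (0.5 + 1.0 + 1.0 + 0.5) / 4 = 0.75
```

Macro-averaged F1 and AUC follow the same pattern: compute the metric per class, then average without class-frequency weights.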
Affiliation(s)
- Orhun Güley
  - Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
  - Department of Informatics, Technical University of Munich, Munich, Germany
- Sarthak Pati
  - Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
  - Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
  - Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
  - Department of Informatics, Technical University of Munich, Munich, Germany
- Spyridon Bakas
  - Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
  - Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
  - Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
7
van Kempen EJ, Post M, Mannil M, Witkam RL, Ter Laan M, Patel A, Meijer FJA, Henssen D. Performance of machine learning algorithms for glioma segmentation of brain MRI: a systematic literature review and meta-analysis. Eur Radiol 2021;31:9638-9653. PMID: 34019128; PMCID: PMC8589805; DOI: 10.1007/s00330-021-08035-0. Received 01/10/2021; revised 04/04/2021; accepted 05/03/2021.
Abstract
OBJECTIVES Different machine learning algorithms (MLAs) for automated segmentation of gliomas have been reported in the literature. Automated segmentation of different tumor characteristics can add value to the diagnostic work-up and treatment planning. The purpose of this study was to provide an overview and meta-analysis of different MLA methods. METHODS A systematic literature review and meta-analysis was performed on the eligible studies describing the segmentation of gliomas. Meta-analysis of performance was conducted on the reported Dice similarity coefficient (DSC) scores of both the aggregated results and two subgroups (i.e., high-grade and low-grade gliomas). This study was registered in PROSPERO prior to initiation (CRD42020191033). RESULTS After the literature search (n = 734), 42 studies were included in the systematic literature review. Ten studies were eligible for inclusion in the meta-analysis. Overall, the MLAs from the included studies showed a pooled DSC score of 0.84 (95% CI: 0.82-0.86). In addition, DSC scores of 0.83 (95% CI: 0.80-0.87) and 0.82 (95% CI: 0.78-0.87) were observed for the automated segmentation of high-grade and low-grade gliomas, respectively. However, heterogeneity between the included studies was considerably high, and publication bias was observed. CONCLUSION MLAs facilitating automated segmentation of gliomas show good accuracy, which is promising for future implementation in neuroradiology. However, a few hurdles are yet to be overcome before actual implementation. It is crucial that quality guidelines, including validation on an external test set, are followed when reporting on MLAs. KEY POINTS • MLAs from the included studies showed an overall DSC score of 0.84 (95% CI: 0.82-0.86), indicating good performance.
• MLA performance was comparable between the segmentation of high-grade gliomas and low-grade gliomas.
• For future studies using MLAs, it is crucial that quality guidelines, including validation on an external test set, are followed when reporting.
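The pooled scores above are Dice similarity coefficients (DSC). As an illustrative aside (a minimal sketch, not code from any of the reviewed studies), the DSC of two binary segmentation masks is twice the intersection over the sum of the mask sizes:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks.

    Accepts any equal-length iterables of 0/1 (or truthy) values;
    returns 2*|A ∩ B| / (|A| + |B|), defined as 1.0 for two empty masks.
    """
    a = [bool(v) for v in mask_a]
    b = [bool(v) for v in mask_b]
    intersection = sum(x and y for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 2.0 * intersection / total if total else 1.0

# Example: two flattened toy segmentation masks
pred = [1, 1, 0, 1, 0, 0]
truth = [1, 0, 0, 1, 1, 0]
print(round(dice(pred, truth), 2))  # 0.67
```

A DSC of 1.0 means perfect overlap and 0.0 means none, so the pooled score of 0.84 indicates substantial but imperfect agreement with the reference segmentations.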
Affiliation(s)
- Evi J van Kempen
  - Department of Medical Imaging, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 EZ, Nijmegen, The Netherlands
- Max Post
  - Department of Medical Imaging, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 EZ, Nijmegen, The Netherlands
- Manoj Mannil
  - Clinic of Radiology, University Hospital Münster, Münster, Germany
- Richard L Witkam
  - Department of Anaesthesiology, Pain and Palliative Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
  - Department of Neurosurgery, Radboud University Medical Center, Nijmegen, The Netherlands
- Mark Ter Laan
  - Department of Neurosurgery, Radboud University Medical Center, Nijmegen, The Netherlands
- Ajay Patel
  - Department of Medical Imaging, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 EZ, Nijmegen, The Netherlands
- Frederick J A Meijer
  - Department of Medical Imaging, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 EZ, Nijmegen, The Netherlands
- Dylan Henssen
  - Department of Medical Imaging, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 EZ, Nijmegen, The Netherlands
8
Interactive Machine Learning-Based Multi-Label Segmentation of Solid Tumors and Organs. Applied Sciences (Basel) 2021; 11. [PMID: 34621541 PMCID: PMC8494410 DOI: 10.3390/app11167488] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 12/29/2022]
Abstract
We seek the development and evaluation of a fast, accurate, and consistent method for general-purpose segmentation, based on interactive machine learning (IML). To validate our method, we identified retrospective cohorts of 20 brain, 50 breast, and 50 lung cancer patients, as well as 20 spleen scans, with corresponding ground-truth annotations. Utilizing very brief user training annotations and the adaptive geodesic distance transform, an ensemble of SVMs is trained, providing a patient-specific model that is applied to the whole image. Two experts segmented each cohort twice with our method and twice manually. The IML method was faster than manual annotation by 53.1% on average. We found significant (p < 0.001) overlap differences for spleen (Dice_IML/Dice_manual = 0.91/0.87), breast tumors (Dice_IML/Dice_manual = 0.84/0.82), and lung nodules (Dice_IML/Dice_manual = 0.78/0.83). For intra-rater consistency, a significant (p = 0.003) difference was found for spleen (Dice_IML/Dice_manual = 0.91/0.89). For inter-rater consistency, significant (p < 0.045) differences were found for spleen (Dice_IML/Dice_manual = 0.91/0.87), breast (Dice_IML/Dice_manual = 0.86/0.81), lung (Dice_IML/Dice_manual = 0.85/0.89), and the non-enhancing (Dice_IML/Dice_manual = 0.79/0.67) and enhancing (Dice_IML/Dice_manual = 0.79/0.84) brain tumor sub-regions, which, in aggregate, favored our method. Quantitative evaluation of speed, spatial overlap, and consistency reveals the benefits of the proposed method over manual annotation for several clinically relevant problems. We publicly release our implementation through CaPTk (Cancer Imaging Phenomics Toolkit) and as an MITK plugin.
9
Shusharina N, Söderberg J, Lidberg D, Niyazi M, Shih HA, Bortfeld T. Accounting for uncertainties in the position of anatomical barriers used to define the clinical target volume. Phys Med Biol 2021; 66. [PMID: 34171846 DOI: 10.1088/1361-6560/ac0ea3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2021] [Accepted: 06/25/2021] [Indexed: 11/11/2022]
Abstract
The definition of the clinical target volume (CTV) is becoming the weakest link in the radiotherapy chain. CTV definition consensus guidelines prescribe a geometric expansion beyond the visible gross tumor volume while avoiding anatomical barriers. In a previous publication, we described how to implement these consensus guidelines in a computerized CTV auto-delineation process using deep learning and graph-search techniques. In this paper, we address the remaining problem of how to deal with uncertainties in the positions of the anatomical barriers. The objective was to develop an algorithm that implements the consensus guidelines while accounting for barrier uncertainties. Our approach is to perform multiple expansions using the fast marching method, with barriers in place or removed at different stages of the expansion. We validate the algorithm in a computational phantom and compare manually generated with automated CTV contours, both taking barrier uncertainties into account.
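The staged-expansion idea can be illustrated with a toy sketch. This is an assumption-laden simplification: a uniform-grid breadth-first expansion stands in for the fast marching method, and the grid, seed, and barrier geometry are invented for illustration. The barrier is respected for the full expansion in one run, and removed partway through in the other, which models uncertainty in the barrier's position:

```python
from collections import deque

def expand(seeds, blocked, shape, steps):
    """Grid geodesic (4-neighbour) expansion from `seeds`, up to
    `steps` moves, never entering cells where `blocked` is True."""
    h, w = shape
    dist = {s: 0 for s in seeds}
    q = deque(seeds)
    while q:
        y, x = q.popleft()
        if dist[(y, x)] == steps:
            continue
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in dist \
                    and not blocked[ny][nx]:
                dist[(ny, nx)] = dist[(y, x)] + 1
                q.append((ny, nx))
    return set(dist)

# Toy 5x7 grid with a vertical anatomical barrier at column 3.
h, w = 5, 7
barrier = [[x == 3 for x in range(w)] for _ in range(h)]
no_barrier = [[False] * w for _ in range(h)]
gtv = [(2, 1)]  # gross tumor volume seed

# Barrier respected for the full 4-step expansion...
ctv_hard = expand(gtv, barrier, (h, w), 4)
# ...versus barrier removed after the first 2 steps
# (modelling uncertainty in the barrier position):
stage1 = expand(gtv, barrier, (h, w), 2)
ctv_soft = expand(stage1, no_barrier, (h, w), 2)

print(any(x > 3 for _, x in ctv_hard), any(x > 3 for _, x in ctv_soft))
# False True: only the staged expansion crosses the uncertain barrier
```

The hard expansion never crosses column 3, while the staged expansion leaks a limited distance past it, analogous to an expanded CTV that hedges against a mislocalized barrier.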
Affiliation(s)
- Nadya Shusharina
  - Division of Radiation Biophysics, Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, United States of America
- Maximilian Niyazi
  - Department of Radiation Oncology, University Hospital, LMU Munich, Munich, Germany
  - German Cancer Consortium (DKTK), Partner Site Munich, Munich, Germany
- Helen A Shih
  - Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, United States of America
- Thomas Bortfeld
  - Division of Radiation Biophysics, Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, United States of America
10
Menze B, Isensee F, Wiest R, Wiestler B, Maier-Hein K, Reyes M, Bakas S. Analyzing magnetic resonance imaging data from glioma patients using deep learning. Comput Med Imaging Graph 2021; 88:101828. [PMID: 33571780 PMCID: PMC8040671 DOI: 10.1016/j.compmedimag.2020.101828] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2020] [Revised: 10/29/2020] [Accepted: 11/18/2020] [Indexed: 12/21/2022]
Abstract
The quantitative analysis of images acquired in the diagnosis and treatment of patients with brain tumors has seen a significant rise in the clinical use of computational tools. The technology underlying the vast majority of these tools is machine learning methods and, in particular, deep learning algorithms. This review offers clinical background information on key diagnostic biomarkers in the diagnosis of glioma, the most common primary brain tumor. It offers an overview of publicly available resources and datasets for developing new computational tools and image biomarkers, with emphasis on those related to the Multimodal Brain Tumor Segmentation (BraTS) Challenge. We further offer an overview of the state-of-the-art methods in glioma image segmentation, again with an emphasis on publicly available tools and deep learning algorithms that emerged in the context of the BraTS challenge.
Affiliation(s)
- Bjoern Menze
  - Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
- Roland Wiest
  - Support Center for Advanced Neuroimaging, Institute of Diagnostic and Interventional Neuroradiology, Inselspital, Bern, Switzerland
- Spyridon Bakas
  - Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
11
Spiteri M, Guillemaut JY, Windridge D, Avula S, Kumar R, Lewis E. Fully-Automated Identification of Imaging Biomarkers for Post-Operative Cerebellar Mutism Syndrome Using Longitudinal Paediatric MRI. Neuroinformatics 2020; 18:151-162. [PMID: 31254271 PMCID: PMC6981105 DOI: 10.1007/s12021-019-09427-w] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Abstract
Post-operative cerebellar mutism syndrome (POPCMS) in children is a post-surgical complication which occurs following the resection of tumors within the brain stem and cerebellum. High-resolution brain magnetic resonance (MR) images acquired at multiple time points across a patient's treatment allow the quantification of localized changes caused by the progression of this syndrome. However, MR images are not necessarily acquired at regular intervals throughout treatment and are often not volumetric. This restricts the analysis to 2D space and causes difficulty in intra- and inter-subject comparison. To address these challenges, we have developed an automated image processing and analysis pipeline. Multi-slice 2D MR image slices are interpolated in space and time to produce a 4D volumetric MR image dataset providing a longitudinal representation of the cerebellum and brain stem at specific time points across treatment. The deformations within the brain over time are represented using the determinant of the Jacobian of the deformation. This metric, together with the changing grey-level intensity of areas within the brain over time, is analyzed using machine learning techniques in order to identify biomarkers that correspond with the development of POPCMS following tumor resection. This study makes use of a fully automated approach which is not hypothesis-driven. As a result, we were able to automatically detect six potential biomarkers related to the development of POPCMS following tumor resection in the posterior fossa.
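The deformation metric used above, the determinant of the Jacobian of the deformation, measures local volume change: values above 1 indicate expansion, below 1 contraction. A minimal 2D sketch (NumPy, with an invented illustrative displacement field, not the study's registration pipeline) might look like:

```python
import numpy as np

def jacobian_determinant_2d(dy, dx):
    """Determinant of the Jacobian of a 2D deformation field.

    dy, dx: displacement components on a regular (H, W) grid.
    The mapping is phi(y, x) = (y + dy, x + dx), so its Jacobian is
    I + grad(displacement); det > 1 marks local expansion,
    det < 1 local contraction.
    """
    dyy, dyx = np.gradient(dy)   # d(dy)/dy, d(dy)/dx
    dxy, dxx = np.gradient(dx)   # d(dx)/dy, d(dx)/dx
    return (1 + dyy) * (1 + dxx) - dyx * dxy

# Uniform 10% expansion: phi(p) = 1.1 * p, so det J = 1.1**2 = 1.21
ys, xs = np.mgrid[0:8, 0:8].astype(float)
det = jacobian_determinant_2d(0.1 * ys, 0.1 * xs)
print(np.allclose(det, 1.21))  # True
```

In a longitudinal pipeline the deformation field would come from non-rigid registration between time points, and the per-voxel determinant map would then feed the machine learning stage as a feature.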
Affiliation(s)
- Michaela Spiteri
  - Centre for Vision, Speech and Signal Processing (CVSSP), University of Surrey, Guildford, GU27XH, UK
- Jean-Yves Guillemaut
  - Centre for Vision, Speech and Signal Processing (CVSSP), University of Surrey, Guildford, GU27XH, UK
- David Windridge
  - Centre for Vision, Speech and Signal Processing (CVSSP), University of Surrey, Guildford, GU27XH, UK
- Shivaram Avula
  - Alder Hey Children's NHS Trust, E Prescot Rd, Liverpool, L14 5AB, UK
- Ram Kumar
  - Alder Hey Children's NHS Trust, E Prescot Rd, Liverpool, L14 5AB, UK
- Emma Lewis
  - Centre for Vision, Speech and Signal Processing (CVSSP), University of Surrey, Guildford, GU27XH, UK
12
Bakas S, Shukla G, Akbari H, Erus G, Sotiras A, Rathore S, Sako C, Min Ha S, Rozycki M, Shinohara RT, Bilello M, Davatzikos C. Overall survival prediction in glioblastoma patients using structural magnetic resonance imaging (MRI): advanced radiomic features may compensate for lack of advanced MRI modalities. J Med Imaging (Bellingham) 2020; 7:031505. [PMID: 32566694 DOI: 10.1117/1.jmi.7.3.031505] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2019] [Accepted: 05/20/2020] [Indexed: 12/22/2022] Open
Abstract
Purpose: Glioblastoma, the most common and aggressive adult brain tumor, is considered noncurative at diagnosis, with a median survival of 14 to 16 months following treatment. There is increasing evidence that noninvasive integrative analysis of radiomic features can predict overall and progression-free survival, using advanced multiparametric magnetic resonance imaging (Adv-mpMRI). If successfully applicable, such noninvasive markers can considerably influence patient management. However, prior to initiation of therapy, most patients typically undergo only basic structural mpMRI (Bas-mpMRI, i.e., T1, T1-Gd, T2, and T2-fluid-attenuated inversion recovery), rather than Adv-mpMRI, which provides additional vascularization (dynamic susceptibility contrast-MRI) and cell-density (diffusion tensor imaging) related information. Approach: We assess a retrospective cohort of 101 glioblastoma patients with available Adv-mpMRI from a previous study, which showed that an initial feature panel (IFP, i.e., intensity, volume, location, and growth model parameters) extracted from Adv-mpMRI can yield accurate overall survival stratification. We focus on demonstrating that equally accurate prediction models can be constructed using augmented radiomic feature panels (ARFPs, i.e., integrating morphology and textural descriptors) extracted solely from the widely available Bas-mpMRI, obviating the need for Adv-mpMRI. We extracted 1612 radiomic features from distinct tumor subregions to build multivariate models that stratified patients as long-, intermediate-, or short-survivors. Results: The classification accuracy of the model utilizing Adv-mpMRI protocols and the IFP was 72.77%, and it degraded to 60.89% when using only Bas-mpMRI. However, utilizing the ARFP on Bas-mpMRI improved the accuracy to 74.26%.
Furthermore, Kaplan-Meier analysis demonstrated superior classification of subjects into short-, intermediate-, and long-survivor classes when using the ARFP extracted from Bas-mpMRI. Conclusions: This quantitative evaluation indicates that accurate survival prediction in glioblastoma patients is feasible using solely Bas-mpMRI and integrative advanced radiomic features, which can compensate for the lack of Adv-mpMRI. Our finding holds promise for generalization across institutions that may not have access to Adv-mpMRI and for better informing clinical decision-making about aggressive interventions and clinical trials.
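The three-way survivor stratification evaluated above can be sketched in a few lines. The thresholds and the patient data here are hypothetical, purely for illustration; the study derives its classes from its own cohort, not from these cut-offs:

```python
def survival_class(months, short_max=10, long_min=15):
    """Assign a survivor class from overall survival in months.
    Thresholds are illustrative only, not the study's cut-offs."""
    if months <= short_max:
        return "short"
    if months >= long_min:
        return "long"
    return "intermediate"

def accuracy(predicted, actual):
    """Fraction of patients whose predicted class matches the actual one."""
    hits = sum(p == a for p, a in zip(predicted, actual))
    return hits / len(actual)

# Hypothetical observed survival times (months) for six patients
observed = [6, 9, 12, 14, 18, 30]
classes = [survival_class(m) for m in observed]
print(classes)
# A model that misclassifies only the first patient:
pred = ["intermediate"] + classes[1:]
print(round(accuracy(pred, classes), 2))  # 0.83
```

Classification accuracy of this kind is what the reported 72.77%, 60.89%, and 74.26% figures summarize across the three model variants.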
Affiliation(s)
- Spyridon Bakas
  - University of Pennsylvania, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics, Richards Medical Research Laboratories, Philadelphia, PA, United States
  - University of Pennsylvania, Perelman School of Medicine, Richards Medical Research Laboratories, Department of Radiology, Philadelphia, PA, United States
  - University of Pennsylvania, Perelman School of Medicine, Richards Medical Research Laboratories, Department of Pathology and Laboratory Medicine, Philadelphia, PA, United States
- Gaurav Shukla
  - University of Pennsylvania, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics, Richards Medical Research Laboratories, Philadelphia, PA, United States
  - Thomas Jefferson University, Sidney Kimmel Cancer Center, Department of Radiation Oncology, Philadelphia, PA, United States
- Hamed Akbari
  - University of Pennsylvania, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics, Richards Medical Research Laboratories, Philadelphia, PA, United States
  - University of Pennsylvania, Perelman School of Medicine, Richards Medical Research Laboratories, Department of Radiology, Philadelphia, PA, United States
- Guray Erus
  - University of Pennsylvania, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics, Richards Medical Research Laboratories, Philadelphia, PA, United States
  - University of Pennsylvania, Perelman School of Medicine, Richards Medical Research Laboratories, Department of Radiology, Philadelphia, PA, United States
- Aristeidis Sotiras
  - University of Pennsylvania, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics, Richards Medical Research Laboratories, Philadelphia, PA, United States
  - University of Pennsylvania, Perelman School of Medicine, Richards Medical Research Laboratories, Department of Radiology, Philadelphia, PA, United States
  - Washington University in St. Louis, School of Medicine, Institute for Informatics, Saint Louis, MO, United States
  - Washington University in St. Louis, Department of Radiology, Saint Louis, MO, United States
- Saima Rathore
  - University of Pennsylvania, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics, Richards Medical Research Laboratories, Philadelphia, PA, United States
  - University of Pennsylvania, Perelman School of Medicine, Richards Medical Research Laboratories, Department of Radiology, Philadelphia, PA, United States
- Chiharu Sako
  - University of Pennsylvania, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics, Richards Medical Research Laboratories, Philadelphia, PA, United States
  - University of Pennsylvania, Perelman School of Medicine, Richards Medical Research Laboratories, Department of Radiology, Philadelphia, PA, United States
- Sung Min Ha
  - University of Pennsylvania, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics, Richards Medical Research Laboratories, Philadelphia, PA, United States
  - University of Pennsylvania, Perelman School of Medicine, Richards Medical Research Laboratories, Department of Radiology, Philadelphia, PA, United States
- Martin Rozycki
  - University of Pennsylvania, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics, Richards Medical Research Laboratories, Philadelphia, PA, United States
  - University of Pennsylvania, Perelman School of Medicine, Richards Medical Research Laboratories, Department of Radiology, Philadelphia, PA, United States
- Russell T Shinohara
  - University of Pennsylvania, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics, Richards Medical Research Laboratories, Philadelphia, PA, United States
  - University of Pennsylvania, Perelman School of Medicine, Penn Statistics in Imaging and Visualization Center, Department of Biostatistics, Epidemiology, and Informatics, Philadelphia, PA, United States
- Michel Bilello
  - University of Pennsylvania, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics, Richards Medical Research Laboratories, Philadelphia, PA, United States
  - University of Pennsylvania, Perelman School of Medicine, Richards Medical Research Laboratories, Department of Radiology, Philadelphia, PA, United States
- Christos Davatzikos
  - University of Pennsylvania, Perelman School of Medicine, Center for Biomedical Image Computing and Analytics, Richards Medical Research Laboratories, Philadelphia, PA, United States
  - University of Pennsylvania, Perelman School of Medicine, Richards Medical Research Laboratories, Department of Radiology, Philadelphia, PA, United States
13
Mang A, Bakas S, Subramanian S, Davatzikos C, Biros G. Integrated Biophysical Modeling and Image Analysis: Application to Neuro-Oncology. Annu Rev Biomed Eng 2020; 22:309-341. [PMID: 32501772 PMCID: PMC7520881 DOI: 10.1146/annurev-bioeng-062117-121105] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/09/2023]
Abstract
Central nervous system (CNS) tumors come with vastly heterogeneous histologic, molecular, and radiographic landscapes, rendering their precise characterization challenging. The rapidly growing fields of biophysical modeling and radiomics have shown promise in better characterizing the molecular, spatial, and temporal heterogeneity of tumors. Integrative analysis of CNS tumors, including clinically acquired multi-parametric magnetic resonance imaging (mpMRI) and the inverse problem of calibrating biophysical models to mpMRI data, assists in identifying macroscopic quantifiable tumor patterns of invasion and proliferation, potentially leading to improved (a) detection/segmentation of tumor subregions and (b) computer-aided diagnostic/prognostic/predictive modeling. This article presents a summary of (a) biophysical growth modeling and simulation, (b) inverse problems for model calibration, (c) the integration of these models with imaging workflows, and (d) their application to clinically relevant studies. We anticipate that such quantitative integrative analysis may even be beneficial in a future revision of the World Health Organization (WHO) classification for CNS tumors, ultimately improving patient survival prospects.
Affiliation(s)
- Andreas Mang
  - Department of Mathematics, University of Houston, Houston, Texas 77204, USA
- Spyridon Bakas
  - Department of Mathematics, University of Houston, Houston, Texas 77204, USA
- Shashank Subramanian
  - Oden Institute of Computational Engineering and Sciences, The University of Texas at Austin, Austin, Texas 78712, USA
- Christos Davatzikos
  - Center for Biomedical Image Computing and Analytics (CBICA); Department of Radiology; and Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
- George Biros
  - Oden Institute of Computational Engineering and Sciences, The University of Texas at Austin, Austin, Texas 78712, USA
14
Pati S, Singh A, Rathore S, Gastounioti A, Bergman M, Ngo P, Ha SM, Bounias D, Minock J, Murphy G, Li H, Bhattarai A, Wolf A, Sridaran P, Kalarot R, Akbari H, Sotiras A, Thakur SP, Verma R, Shinohara RT, Yushkevich P, Fan Y, Kontos D, Davatzikos C, Bakas S. The Cancer Imaging Phenomics Toolkit (CaPTk): Technical Overview. Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries. BrainLes (Workshop) 2020; 11993:380-394. [PMID: 32754723 PMCID: PMC7402244 DOI: 10.1007/978-3-030-46643-5_38] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
The purpose of this manuscript is to provide an overview of the technical specifications and architecture of the Cancer Imaging Phenomics Toolkit (CaPTk, www.cbica.upenn.edu/captk), a cross-platform, open-source, easy-to-use, and extensible software platform for analyzing 2D and 3D images, currently focusing on radiographic scans of brain, breast, and lung cancer. The primary aim of this platform is to enable swift and efficient translation of cutting-edge academic research into clinically useful tools for clinical quantification, analysis, predictive modeling, decision-making, and reporting workflows. CaPTk builds upon established open-source software toolkits, such as the Insight Toolkit (ITK) and OpenCV, to bring together advanced computational functionality. This functionality comprises specialized, as well as general-purpose, image analysis algorithms developed during active multi-disciplinary collaborative research studies to address real clinical requirements. The target audience of CaPTk consists of both computational scientists and clinical experts. For the former, it provides (i) an efficient image viewer offering the ability to integrate new algorithms, and (ii) a library of readily available, clinically relevant algorithms, allowing batch-processing of multiple subjects. For the latter, it facilitates the use of complex algorithms for clinically relevant studies through a user-friendly interface, eliminating the prerequisite of a substantial computational background. CaPTk's long-term goal is to provide widely used technology that makes advanced quantitative imaging analytics accessible for cancer prediction, diagnosis, and prognosis, leading toward a better understanding of the biological mechanisms of cancer development.
Affiliation(s)
- Sarthak Pati
  - Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Ashish Singh
  - Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Saima Rathore
  - Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
  - Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Aimilia Gastounioti
  - Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
  - Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Mark Bergman
  - Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Phuc Ngo
  - Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
  - Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Sung Min Ha
  - Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
  - Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Dimitrios Bounias
  - Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- James Minock
  - Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Grayson Murphy
  - Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Hongming Li
  - Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
  - Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Amit Bhattarai
  - Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Adam Wolf
  - Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Patmaa Sridaran
  - Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Ratheesh Kalarot
  - Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Hamed Akbari
  - Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
  - Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Aristeidis Sotiras
  - Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
  - Department of Radiology and Institute for Informatics, School of Medicine, Washington University in St. Louis, Saint Louis, MO, USA
- Siddhesh P Thakur
  - Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Ragini Verma
  - Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
  - Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Russell T Shinohara
  - Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
  - Penn Statistics in Imaging and Visualization Endeavor (PennSIVE), University of Pennsylvania, Philadelphia, PA, USA
- Paul Yushkevich
  - Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
  - Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
  - Penn Image Computing and Science Lab. (PICSL), University of Pennsylvania, Philadelphia, PA, USA
- Yong Fan
  - Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
  - Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Despina Kontos
  - Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
  - Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Christos Davatzikos
  - Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
  - Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Spyridon Bakas
  - Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
  - Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
  - Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
15
Ermiş E, Jungo A, Poel R, Blatti-Moreno M, Meier R, Knecht U, Aebersold DM, Fix MK, Manser P, Reyes M, Herrmann E. Fully automated brain resection cavity delineation for radiation target volume definition in glioblastoma patients using deep learning. Radiat Oncol 2020; 15:100. [PMID: 32375839 PMCID: PMC7204033 DOI: 10.1186/s13014-020-01553-z] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2020] [Accepted: 04/27/2020] [Indexed: 11/23/2022] Open
Abstract
Background: Automated brain tumor segmentation methods are computational algorithms that yield tumor delineation from, in this case, multimodal magnetic resonance imaging (MRI). We present an automated segmentation method and its results for the resection cavity (RC) in glioblastoma multiforme (GBM) patients using deep learning (DL) technologies. Methods: Post-operative T1w (with and without contrast), T2w, and fluid-attenuated inversion recovery MRI studies of 30 GBM patients were included. Three radiation oncologists manually delineated the RC to obtain a reference segmentation. We developed a DL cavity segmentation method, which utilizes all four MRI sequences and the reference segmentation to learn to perform RC delineations. We evaluated the segmentation method in terms of the Dice coefficient (DC) and estimated volume measurements. Results: The median DCs of the three radiation oncologists were 0.85 (interquartile range [IQR]: 0.08), 0.84 (IQR: 0.07), and 0.86 (IQR: 0.07). The DCs of the automatic segmentation compared to the three raters were 0.83 (IQR: 0.14), 0.81 (IQR: 0.12), and 0.81 (IQR: 0.13), which was significantly lower than the agreement among raters (chi-square = 11.63, p = 0.04). We did not detect a statistically significant difference in the measured RC volumes between the raters and the automated method (Kruskal-Wallis test: chi-square = 1.46, p = 0.69). The main sources of error were signal inhomogeneity and similar intensity patterns between cavity and brain tissues. Conclusions: The proposed DL approach yields promising results for automated RC segmentation in this proof-of-concept study. Compared to human experts, however, the DC values remain subpar.
Affiliation(s)
- Ekin Ermiş
  - Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern, Freiburgstrasse, 3010, Bern, Switzerland
- Alain Jungo
  - Insel Data Science Center, Inselspital, Bern University Hospital, Bern, Switzerland
  - ARTORG Center for Biomedical Research, University of Bern, Bern, Switzerland
- Robert Poel
  - Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern, Freiburgstrasse, 3010, Bern, Switzerland
- Marcela Blatti-Moreno
  - Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern, Freiburgstrasse, 3010, Bern, Switzerland
- Raphael Meier
  - Institute for Diagnostic and Interventional Neuroradiology, Inselspital, Bern University Hospital, and University of Bern, Bern, Switzerland
- Urspeter Knecht
  - Institute for Diagnostic and Interventional Neuroradiology, Inselspital, Bern University Hospital, and University of Bern, Bern, Switzerland
- Daniel M Aebersold
  - Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern, Freiburgstrasse, 3010, Bern, Switzerland
- Michael K Fix
  - Division of Medical Radiation Physics and Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern, Bern, Switzerland
- Peter Manser
  - Division of Medical Radiation Physics and Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern, Bern, Switzerland
- Mauricio Reyes
  - Insel Data Science Center, Inselspital, Bern University Hospital, Bern, Switzerland
  - ARTORG Center for Biomedical Research, University of Bern, Bern, Switzerland
- Evelyn Herrmann
  - Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern, Freiburgstrasse, 3010, Bern, Switzerland
16
Longitudinal brain tumor segmentation prediction in MRI using feature and label fusion. Biomed Signal Process Control 2020; 55:101648. [PMID: 34354762 PMCID: PMC8336640 DOI: 10.1016/j.bspc.2019.101648] [Citation(s) in RCA: 31] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/25/2023]
Abstract
This work proposes a novel framework for brain tumor segmentation prediction in longitudinal multi-modal MRI scans, comprising two methods: feature fusion and joint label fusion (JLF). The first method fuses stochastic multi-resolution texture features with a tumor cell density feature to predict tumor segmentations at follow-up timepoints using data from the baseline pre-operative timepoint. The cell density feature is obtained by solving the 3D reaction-diffusion equation for biophysical tumor growth modelling using the Lattice-Boltzmann method. The second method uses JLF to combine segmentation labels obtained from (i) the stochastic texture feature-based and Random Forest (RF)-based tumor segmentation method, and (ii) another state-of-the-art tumor growth and segmentation method, known as boosted Glioma Image Segmentation and Registration (GLISTRboost, or GB). We quantitatively evaluate both proposed methods using the Dice Similarity Coefficient (DSC) in longitudinal scans of 9 patients from the public BraTS 2015 multi-institutional dataset. The evaluation results for the feature-based fusion method show improved tumor segmentation prediction for the whole tumor (DSC WT = 0.314, p = 0.1502), tumor core (DSC TC = 0.332, p = 0.0002), and enhancing tumor (DSC ET = 0.448, p = 0.0002) regions. The feature-based fusion offers some improvement in predicting longitudinal brain tumor evolution, whereas the JLF offers statistically significant improvement in the actual segmentation of WT and ET (DSC WT = 0.85 ± 0.055, DSC ET = 0.837 ± 0.074) and also improves the results of GB. The novelty of this work is two-fold: (a) exploiting tumor cell density as a feature to predict brain tumor segmentation, using a stochastic multi-resolution RF-based method, and (b) improving the performance of another successful tumor segmentation method, GB, by fusing it with the RF-based segmentation labels.
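The cell density feature above comes from a biophysical reaction-diffusion growth model, which the paper solves with the Lattice-Boltzmann method. A minimal sketch of the underlying model (a Fisher-Kolmogorov equation stepped with a simple explicit finite-difference scheme and periodic boundaries, deliberately not the authors' Lattice-Boltzmann solver; all parameter values are illustrative) might look like:

```python
import numpy as np

def fisher_kolmogorov_step(c, D=0.1, rho=0.05, dt=0.1):
    """One explicit Euler step of dc/dt = D * laplacian(c) + rho * c * (1 - c)
    on a 3D grid with unit spacing (periodic boundaries via np.roll)."""
    lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0)
           + np.roll(c, 1, 1) + np.roll(c, -1, 1)
           + np.roll(c, 1, 2) + np.roll(c, -1, 2)
           - 6.0 * c)
    # Clipping keeps the normalized cell density inside [0, 1].
    return np.clip(c + dt * (D * lap + rho * c * (1.0 - c)), 0.0, 1.0)

# Seed a normalized tumour-cell density at the grid centre and let it evolve:
# diffusion spreads the density outward while the logistic reaction term
# regrows it toward the carrying capacity of 1.
c = np.zeros((20, 20, 20))
c[10, 10, 10] = 1.0
for _ in range(100):
    c = fisher_kolmogorov_step(c)
```

The resulting density field `c` would then serve as one feature channel alongside the multi-resolution texture features.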
17
Davatzikos C, Sotiras A, Fan Y, Habes M, Erus G, Rathore S, Bakas S, Chitalia R, Gastounioti A, Kontos D. Precision diagnostics based on machine learning-derived imaging signatures. Magn Reson Imaging 2019; 64:49-61. [PMID: 31071473 PMCID: PMC6832825 DOI: 10.1016/j.mri.2019.04.012] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2018] [Revised: 04/24/2019] [Accepted: 04/29/2019] [Indexed: 01/08/2023]
Abstract
The complexity of modern multi-parametric MRI has increasingly challenged conventional interpretation of such images. Machine learning has emerged as a powerful approach for integrating diverse and complex imaging data into signatures of diagnostic and predictive value. It has also allowed the field to progress from group comparisons to imaging biomarkers that offer value on an individual basis. We review several directions of research around this topic, emphasizing the use of machine learning in personalized prediction of clinical outcome, in breaking down broad umbrella diagnostic categories into more detailed and precise subtypes, and in non-invasively estimating cancer molecular characteristics. These methods and studies contribute to the field of precision medicine by introducing more specific diagnostic and predictive biomarkers of clinical outcome, thereby pointing to better matching of treatments to patients.
Affiliation(s)
- Christos Davatzikos, Aristeidis Sotiras, Yong Fan, Mohamad Habes, Guray Erus, Saima Rathore, Spyridon Bakas, Rhea Chitalia, Aimilia Gastounioti, Despina Kontos: Center for Biomedical Image Computing and Analytics, University of Pennsylvania, United States of America
18
Davatzikos C, Rathore S, Bakas S, Pati S, Bergman M, Kalarot R, Sridharan P, Gastounioti A, Jahani N, Cohen E, Akbari H, Tunc B, Doshi J, Parker D, Hsieh M, Sotiras A, Li H, Ou Y, Doot RK, Bilello M, Fan Y, Shinohara RT, Yushkevich P, Verma R, Kontos D. Cancer imaging phenomics toolkit: quantitative imaging analytics for precision diagnostics and predictive modeling of clinical outcome. J Med Imaging (Bellingham) 2018; 5:011018. [PMID: 29340286 PMCID: PMC5764116 DOI: 10.1117/1.jmi.5.1.011018] [Citation(s) in RCA: 97] [Impact Index Per Article: 16.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2017] [Accepted: 12/05/2017] [Indexed: 12/26/2022] Open
Abstract
The growth of multiparametric imaging protocols has paved the way for quantitative imaging phenotypes that predict treatment response and clinical outcome, reflect underlying cancer molecular characteristics and spatiotemporal heterogeneity, and can guide personalized treatment planning. This growth has underlined the need for efficient quantitative analytics to derive high-dimensional imaging signatures of diagnostic and predictive value in this emerging era of integrated precision diagnostics. This paper presents the Cancer Imaging Phenomics Toolkit (CaPTk), a new and dynamically growing software platform for the analysis of radiographic cancer images, currently focusing on brain, breast, and lung cancer. CaPTk leverages quantitative imaging analytics along with machine learning to derive phenotypic imaging signatures, based on two-level functionality. First, image analysis algorithms extract comprehensive panels of diverse and complementary features, such as multiparametric intensity histogram distributions, texture, shape, kinetics, connectomics, and spatial patterns. At the second level, these quantitative imaging signatures are fed into multivariate machine learning models to produce diagnostic, prognostic, and predictive biomarkers. Results from clinical studies in three areas are shown: (i) computational neuro-oncology of brain gliomas for precision diagnostics, prediction of outcome, and treatment planning; (ii) prediction of treatment response for breast and lung cancer; and (iii) risk assessment for breast cancer.
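The two-level pattern described here (a panel of extracted features fed into a multivariate model) can be sketched in miniature. Everything below (the feature choices, the plain logistic-regression fit, the synthetic data) is illustrative of the pattern only, not CaPTk's actual implementation:

```python
import numpy as np

def intensity_features(img: np.ndarray) -> np.ndarray:
    """First level: a tiny panel of intensity-histogram features for one image."""
    return np.array([
        img.mean(),                 # mean intensity
        img.std(),                  # intensity spread
        np.percentile(img, 90),     # bright-tail intensity
        (img > img.mean()).mean(),  # fraction of above-mean pixels
    ])

def sigmoid(z: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

def fit_logistic(X, y, lr=0.1, steps=500):
    """Second level: a multivariate model (logistic regression by gradient descent)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        w -= lr * Xb.T @ (sigmoid(Xb @ w) - y) / len(y)
    return w

def predict(X, w):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (sigmoid(Xb @ w) > 0.5).astype(int)

# Synthetic "images": class 1 is drawn with a higher mean intensity than class 0.
rng = np.random.default_rng(0)
imgs = [rng.normal(loc=cls, size=(16, 16)) for cls in (0, 1) for _ in range(20)]
y = np.array([0] * 20 + [1] * 20)

X = np.vstack([intensity_features(im) for im in imgs])  # level 1: feature panel
w = fit_logistic(X, y)                                  # level 2: multivariate model
acc = (predict(X, w) == y).mean()                       # training accuracy, toy data
```

A real phenomics pipeline would add texture, shape, kinetic, and connectomic features and validate on held-out data rather than reporting training accuracy.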
Affiliation(s)
- Christos Davatzikos, Saima Rathore, Spyridon Bakas, Sarthak Pati, Mark Bergman, Ratheesh Kalarot, Patmaa Sridharan, Aimilia Gastounioti, Nariman Jahani, Eric Cohen, Hamed Akbari, Birkan Tunc, Jimit Doshi, Drew Parker, Michael Hsieh, Aristeidis Sotiras, Hongming Li, Robert K. Doot, Michel Bilello, Yong Fan, Paul Yushkevich, Ragini Verma, Despina Kontos: Center for Biomedical Image Computing and Analytics (CBICA), Philadelphia, Pennsylvania, United States
- Yangming Ou: Massachusetts General Hospital, Martinos Center for Biomedical Imaging, Boston, Massachusetts, United States
- Russell T. Shinohara: Center for Biomedical Image Computing and Analytics (CBICA), Philadelphia, Pennsylvania, United States; University of Pennsylvania, Perelman School of Medicine, Center for Clinical Epidemiology and Biostatistics (CCEB), Department of Biostatistics, Epidemiology, and Informatics, Philadelphia, Pennsylvania, United States
19
Li Y, Liu X, Wei F, Sima DM, Van Cauter S, Himmelreich U, Pi Y, Hu G, Yao Y, Van Huffel S. An advanced MRI and MRSI data fusion scheme for enhancing unsupervised brain tumor differentiation. Comput Biol Med 2017; 81:121-129. [DOI: 10.1016/j.compbiomed.2016.12.017] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2016] [Revised: 12/09/2016] [Accepted: 12/26/2016] [Indexed: 01/12/2023]
20
Song B, Chou CR, Chen X, Huang A, Liu MC. Anatomy-Guided Brain Tumor Segmentation and Classification. BRAINLESION: GLIOMA, MULTIPLE SCLEROSIS, STROKE AND TRAUMATIC BRAIN INJURIES 2016. [DOI: 10.1007/978-3-319-55524-9_16] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/13/2023]
21
Kamnitsas K, Ferrante E, Parisot S, Ledig C, Nori AV, Criminisi A, Rueckert D, Glocker B. DeepMedic for Brain Tumor Segmentation. BRAINLESION: GLIOMA, MULTIPLE SCLEROSIS, STROKE AND TRAUMATIC BRAIN INJURIES 2016. [DOI: 10.1007/978-3-319-55524-9_14] [Citation(s) in RCA: 127] [Impact Index Per Article: 15.9] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/02/2022]