1
Yalcinkaya DM, Youssef K, Heydari B, Wei J, Merz NB, Judd R, Dharmakumar R, Simonetti OP, Weinsaft JW, Raman SV, Sharif B. Improved Robustness for Deep Learning-based Segmentation of Multi-Center Myocardial Perfusion MRI Datasets Using Data Adaptive Uncertainty-guided Space-time Analysis. arXiv 2024; arXiv:2408.04805v1. PMID: 39148930; PMCID: PMC11326424.
Abstract
Background Fully automatic analysis of myocardial perfusion MRI datasets enables rapid and objective reporting of stress/rest studies in patients with suspected ischemic heart disease. Developing deep learning techniques that can analyze multi-center datasets despite limited training data and variations in software (pulse sequence) and hardware (scanner vendor) is an ongoing challenge. Methods Datasets from 3 medical centers acquired at 3T (n = 150 subjects; 21,150 first-pass images) were included: an internal dataset (inD; n = 95) and two external datasets (exDs; n = 55) used for evaluating the robustness of the trained deep neural network (DNN) models against differences in pulse sequence (exD-1) and scanner vendor (exD-2). A subset of inD (n = 85) was used for training/validation of a pool of DNNs for segmentation, all using the same spatiotemporal U-Net architecture and hyperparameters but with different parameter initializations. We employed a space-time sliding-patch analysis approach that automatically yields a pixel-wise "uncertainty map" as a byproduct of the segmentation process. In our approach, dubbed Data Adaptive Uncertainty-Guided Space-time (DAUGS) analysis, a given test case is segmented by all members of the DNN pool and the resulting uncertainty maps are leveraged to automatically select the "best" one among the pool of solutions. For comparison, we also trained a DNN using the established approach with the same settings (hyperparameters, data augmentation, etc.). Results The proposed DAUGS analysis approach performed similarly to the established approach on the internal dataset (Dice score for the testing subset of inD: 0.896 ± 0.050 vs. 0.890 ± 0.049; p = n.s.) whereas it significantly outperformed on the external datasets (Dice for exD-1: 0.885 ± 0.040 vs. 0.849 ± 0.065, p < 0.005; Dice for exD-2: 0.811 ± 0.070 vs. 0.728 ± 0.149, p < 0.005). 
Moreover, the number of image series with "failed" segmentation (defined as having myocardial contours that include blood pool or are non-contiguous in ≥1 segment) was significantly lower for the proposed vs. the established approach (4.3% vs. 17.1%, p < 0.0005). Conclusions The proposed DAUGS analysis approach has the potential to improve the robustness of deep learning methods for segmentation of multi-center stress perfusion datasets with variations in the choice of pulse sequence, site location, or scanner vendor.
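The pool-selection rule this abstract describes, segment with every DNN in the pool, then keep the solution whose uncertainty map looks best, can be illustrated with a toy sketch in which each model yields a foreground-probability map, pixel-wise binary entropy serves as the uncertainty map, and the candidate with the lowest mean uncertainty wins. This is a hypothetical illustration of the general idea only, not the authors' DAUGS implementation (which derives uncertainty from space-time sliding-patch analysis); all names are invented.

```python
import numpy as np

def uncertainty_map(prob):
    """Pixel-wise binary entropy of a foreground-probability map (higher = less certain)."""
    p = np.clip(prob, 1e-7, 1 - 1e-7)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def select_best_solution(prob_maps):
    """Pick the pool member whose segmentation has the lowest mean uncertainty.

    prob_maps: list of (H, W) foreground-probability arrays, one per DNN in the pool.
    Returns (index of the chosen model, its binary mask).
    """
    scores = [uncertainty_map(p).mean() for p in prob_maps]
    best = int(np.argmin(scores))
    return best, prob_maps[best] >= 0.5
```

A model whose probabilities sit near 0 or 1 everywhere is preferred over one hedging at 0.5, which is the intuition behind uncertainty-guided selection.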
Affiliation(s)
- Dilek M. Yalcinkaya
  - Laboratory for Translational Imaging of Microcirculation, Indiana University School of Medicine, Indianapolis, IN, USA
  - Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA
- Khalid Youssef
  - Laboratory for Translational Imaging of Microcirculation, Indiana University School of Medicine, Indianapolis, IN, USA
  - Krannert Cardiovascular Research Center, Dept. of Medicine, Indiana Univ. School of Medicine, Indianapolis, IN, USA
- Bobak Heydari
  - Stephenson Cardiac Imaging Centre, Department of Cardiac Sciences, University of Calgary, Alberta, Canada
- Janet Wei
  - Barbra Streisand Women’s Heart Center, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Noel Bairey Merz
  - Barbra Streisand Women’s Heart Center, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Robert Judd
  - Division of Cardiology, Department of Medicine, Duke University, Durham, NC, USA
- Rohan Dharmakumar
  - Krannert Cardiovascular Research Center, Dept. of Medicine, Indiana Univ. School of Medicine, Indianapolis, IN, USA
  - Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
- Orlando P. Simonetti
  - Department of Medicine, Davis Heart and Lung Research Institute, The Ohio State University, Columbus, OH, USA
- Jonathan W. Weinsaft
  - Division of Cardiology at NY Presbyterian Hospital, Weill Cornell Medical Center, New York, NY, USA
- Subha V. Raman
  - Krannert Cardiovascular Research Center, Dept. of Medicine, Indiana Univ. School of Medicine, Indianapolis, IN, USA
  - OhioHealth, Columbus, OH, USA
- Behzad Sharif
  - Laboratory for Translational Imaging of Microcirculation, Indiana University School of Medicine, Indianapolis, IN, USA
  - Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA
  - Krannert Cardiovascular Research Center, Dept. of Medicine, Indiana Univ. School of Medicine, Indianapolis, IN, USA
  - Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
2
Jiao C, Lao Y, Zhang W, Braunstein S, Salans M, Villanueva-Meyer JE, Hervey-Jumper SL, Yang B, Morin O, Valdes G, Fan Z, Shiroishi M, Zada G, Sheng K, Yang W. Multi-modal fusion and feature enhancement U-Net coupling with stem cell niches proximity estimation for voxel-wise GBM recurrence prediction. Phys Med Biol 2024; 69. PMID: 39019073; PMCID: PMC11308744; DOI: 10.1088/1361-6560/ad64b8.
Abstract
Objective. We aim to develop a Multi-modal Fusion and Feature Enhancement U-Net (MFFE U-Net) coupled with stem cell niche proximity estimation to improve voxel-wise glioblastoma (GBM) recurrence prediction. Approach. 57 patients with pre- and post-surgery magnetic resonance (MR) scans were retrospectively collected from 4 databases. Post-surgery MR scans included those acquired two months before the clinical diagnosis of recurrence and those acquired on the day of the radiologically confirmed recurrence. The recurrences were manually annotated on the contrast-enhanced T1-weighted (T1ce) images. The high-risk recurrence region was first determined. Then, a sparse multi-modal feature fusion U-Net was developed. The 50 patients from 3 databases were divided into 70% training, 10% validation, and 20% testing. 7 patients from the 4th institution were used as an external testing set with transfer learning. Model performance was evaluated by recall, precision, F1-score, and Hausdorff distance at the 95th percentile (HD95). The proposed MFFE U-Net was compared to a support vector machine (SVM) model and two state-of-the-art neural networks. An ablation study was performed. Main results. The MFFE U-Net achieved a precision of 0.79 ± 0.08, a recall of 0.85 ± 0.11, and an F1-score of 0.82 ± 0.09. Statistically significant improvement was observed when comparing MFFE U-Net with the proximity-estimation-coupled SVM (SVMPE), mU-Net, and Deeplabv3. The HD95 was 2.75 ± 0.44 mm and 3.91 ± 0.83 mm for the 10 patients used in model construction and the 7 patients used for external testing, respectively. The ablation test showed that all five MR sequences contributed to the performance of the final model, with T1ce contributing the most. Convergence analysis, time-efficiency analysis, and visualization of the intermediate results further revealed the characteristics of the proposed method. Significance. We present an advanced MFFE learning framework, MFFE U-Net, for effective voxel-wise GBM recurrence prediction.
MFFE U-Net performs significantly better than the state-of-the-art networks and can potentially guide early radiotherapy (RT) intervention for disease recurrence.
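The HD95 metric reported above can be computed from two binary masks with a Euclidean distance transform: take the surface voxels of each mask, measure every surface voxel's distance to the other mask's surface, and report the 95th percentile. This is a generic illustrative sketch (not the paper's evaluation code), assuming isotropic voxel spacing unless one is given:

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def surface_points(mask):
    """Boundary voxels of a binary mask (mask minus its erosion)."""
    mask = mask.astype(bool)
    return mask & ~binary_erosion(mask)

def hd95(pred, gt, spacing=1.0):
    """95th-percentile symmetric Hausdorff distance between two non-empty binary masks."""
    ps, gs = surface_points(pred), surface_points(gt)
    # Distance from every voxel to the nearest surface voxel of the other mask.
    d_to_g = distance_transform_edt(~gs, sampling=spacing)
    d_to_p = distance_transform_edt(~ps, sampling=spacing)
    dists = np.concatenate([d_to_g[ps], d_to_p[gs]])
    return float(np.percentile(dists, 95))
```

Taking the 95th percentile instead of the maximum makes the metric robust to a few stray outlier voxels, which is why HD95 is preferred over the plain Hausdorff distance in segmentation papers.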
Affiliation(s)
- Changzhe Jiao
  - Department of Radiation Oncology, UC San Francisco, San Francisco, CA 94143
- Yi Lao
  - Department of Radiation Oncology, UC Los Angeles, Los Angeles, CA 90095
- Wenwen Zhang
  - Department of Radiation Oncology, UC San Francisco, San Francisco, CA 94143
- Steve Braunstein
  - Department of Radiation Oncology, UC San Francisco, San Francisco, CA 94143
- Mia Salans
  - Department of Radiation Oncology, UC San Francisco, San Francisco, CA 94143
- Bo Yang
  - Department of Radiation Oncology, UC San Francisco, San Francisco, CA 94143
- Olivier Morin
  - Department of Radiation Oncology, UC San Francisco, San Francisco, CA 94143
- Gilmer Valdes
  - Department of Radiation Oncology, UC San Francisco, San Francisco, CA 94143
- Zhaoyang Fan
  - Department of Radiology, University of Southern California, Los Angeles, CA 90033
- Mark Shiroishi
  - Department of Radiology, University of Southern California, Los Angeles, CA 90033
- Gabriel Zada
  - Department of Neurosurgery, University of Southern California, Los Angeles, CA 90033
- Ke Sheng
  - Department of Radiation Oncology, UC San Francisco, San Francisco, CA 94143
- Wensha Yang
  - Department of Radiation Oncology, UC San Francisco, San Francisco, CA 94143
3
Ghaderi S, Mohammadi S, Ghaderi K, Kiasat F, Mohammadi M. Marker-controlled watershed algorithm and fuzzy C-means clustering machine learning: automated segmentation of glioblastoma from MRI images in a case series. Ann Med Surg (Lond) 2024; 86:1460-1475. PMID: 38463066; PMCID: PMC10923355; DOI: 10.1097/ms9.0000000000001756.
Abstract
Introduction and importance Automated segmentation of glioblastoma multiforme (GBM) from MRI images is crucial for accurate diagnosis and treatment planning. This paper presents a new approach for automating the segmentation of GBM from MRI images using the marker-controlled watershed segmentation (MCWS) algorithm. Case presentation and methods The technique involves several image processing steps, including adaptive thresholding, morphological filtering, gradient magnitude calculation, and regional maxima identification. The MCWS algorithm efficiently segments images based on local intensity structures using the watershed transform, and fuzzy c-means (FCM) clustering improves segmentation accuracy. The presented approach achieved improved accuracy in detecting and segmenting GBM tumours from axial T2-weighted (T2-w) MRI images, as demonstrated by the mean performance metrics for GBM segmentation (sensitivity: 0.9905, specificity: 0.9483, accuracy: 0.9508, precision: 0.5481, F-measure: 0.7052, and Jaccard index: 0.9340). Clinical discussion The results of this study underline the importance of reliable and accurate image segmentation for effective diagnosis and treatment planning of GBM tumours. Conclusion The MCWS technique provides an effective and efficient approach for the segmentation of challenging medical images.
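The FCM refinement step mentioned above can be sketched in a few lines: alternate between fuzzily weighted cluster-center updates and membership updates until convergence. This is a generic fuzzy c-means on a 1-D intensity vector (the watershed, thresholding, and morphological steps are omitted), not the authors' code; the function name and defaults are invented.

```python
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means on a 1-D intensity vector x.

    c: number of clusters; m: fuzzifier (m > 1, larger = softer memberships).
    Returns (cluster centers, membership matrix U of shape (len(x), c)).
    """
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)          # rows of U sum to 1
    for _ in range(n_iter):
        w = u ** m
        centers = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9  # avoid division by zero
        u = 1.0 / (d ** (2.0 / (m - 1.0)))     # standard FCM membership update
        u /= u.sum(axis=1, keepdims=True)
    return centers, u
```

In a segmentation pipeline, pixel intensities inside a candidate region would be clustered this way and each pixel assigned to the cluster with the highest membership.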
Affiliation(s)
- Sadegh Ghaderi
  - Department of Neuroscience and Addiction Studies, School of Advanced Technologies in Medicine, Tehran University of Medical Sciences, Tehran
- Sana Mohammadi
  - Department of Medical Sciences, School of Medicine, Iran University of Medical Sciences, Tehran
- Kayvan Ghaderi
  - Department of Information Technology and Computer Engineering, Faculty of Engineering, University of Kurdistan, Sanandaj
- Fereshteh Kiasat
  - Department of Information Technology and Computer Engineering, Faculty of Engineering, University of Kurdistan, Sanandaj
- Mahdi Mohammadi
  - Department of Medical Physics and Biomedical Engineering, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
4
Yang C, Zhou Q, Li M, Xu L, Zeng Y, Liu J, Wei Y, Shi F, Chen J, Li P, Shu Y, Yang L, Shu J. MRI-based automatic identification and segmentation of extrahepatic cholangiocarcinoma using deep learning network. BMC Cancer 2023; 23:1089. PMID: 37950207; PMCID: PMC10636947; DOI: 10.1186/s12885-023-11575-x.
Abstract
BACKGROUND Accurate identification of extrahepatic cholangiocarcinoma (ECC) from an image is challenging because of its small size and complex background structure. Given the limitations of manual delineation, it is necessary to develop automated identification and segmentation methods for ECC. The aim of this study was to develop a deep learning approach for automatic identification and segmentation of ECC using MRI. METHODS We recruited 137 ECC patients from our hospital as the main dataset (C1) and an additional 40 patients from other hospitals as the external validation set (C2). All patients underwent axial T1-weighted imaging (T1WI), T2-weighted imaging (T2WI), and diffusion-weighted imaging (DWI). Manual delineations were performed and served as the ground truth. Next, we used 3D VB-Net to establish single-modality automatic identification and segmentation models based on T1WI (model 1), T2WI (model 2), and DWI (model 3) in the training cohort (80% of C1), and compared them with a combined model (model 4). Subsequently, the generalization capability of the best models was evaluated using the testing set (20% of C1) and the external validation set (C2). Finally, the performance of the developed models was further evaluated. RESULTS Model 3 showed the best identification performance in the training, testing, and external validation cohorts, with success rates of 0.980, 0.786, and 0.725, respectively. Furthermore, model 3 yielded average Dice similarity coefficients (DSC) of 0.922, 0.495, and 0.466 for automatic ECC segmentation in the training, testing, and external validation cohorts, respectively. CONCLUSION The DWI-based model performed better in automatically identifying and segmenting ECC than the T1WI- and T2WI-based models, which may guide clinical decisions and help determine prognosis.
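The Dice similarity coefficient reported in several of these abstracts is twice the overlap between prediction and ground truth divided by the sum of the two mask sizes. A minimal sketch (illustrative only, not any of the studies' evaluation code):

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks (1.0 = perfect overlap)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    # Convention: two empty masks are treated as a perfect match.
    return 2.0 * inter / denom if denom else 1.0
```

For example, masks [1,1,0,0] and [1,0,1,0] share one voxel out of two apiece, giving a Dice of 2·1/(2+2) = 0.5.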
Affiliation(s)
- Chunmei Yang
  - Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China
- Qin Zhou
  - Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Mingdong Li
  - Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China
- Lulu Xu
  - Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China
- Yanyan Zeng
  - Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China
- Jiong Liu
  - Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China
- Ying Wei
  - Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Feng Shi
  - Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Jing Chen
  - Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China
- Pinxiong Li
  - Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China
- Yue Shu
  - Department of Oncology, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China
- Lu Yang
  - Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China
- Jian Shu
  - Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China
5
Pemberton HG, Wu J, Kommers I, Müller DMJ, Hu Y, Goodkin O, Vos SB, Bisdas S, Robe PA, Ardon H, Bello L, Rossi M, Sciortino T, Nibali MC, Berger MS, Hervey-Jumper SL, Bouwknegt W, Van den Brink WA, Furtner J, Han SJ, Idema AJS, Kiesel B, Widhalm G, Kloet A, Wagemakers M, Zwinderman AH, Krieg SM, Mandonnet E, Prados F, de Witt Hamer P, Barkhof F, Eijgelaar RS. Multi-class glioma segmentation on real-world data with missing MRI sequences: comparison of three deep learning algorithms. Sci Rep 2023; 13:18911. PMID: 37919354; PMCID: PMC10622563; DOI: 10.1038/s41598-023-44794-0.
Abstract
This study tests the generalisability of three Brain Tumor Segmentation (BraTS) challenge models using a multi-center dataset of varying image quality and incomplete MRI datasets. In this retrospective study, DeepMedic, no-new-Unet (nn-Unet), and NVIDIA-net (nv-Net) were trained and tested using manual segmentations from preoperative MRI of glioblastoma (GBM) and low-grade gliomas (LGG) from the BraTS 2021 dataset (1251 in total), in addition to 275 GBM and 205 LGG cases acquired clinically across 12 hospitals worldwide. Data were split into 80% training, 5% validation, and 15% internal test data. An additional external test set of 158 GBM and 69 LGG cases was used to assess generalisability to other hospitals' data. For both test sets, all models' median Dice similarity coefficients (DSC) were within, or higher than, previously reported human inter-rater agreement (range 0.74-0.85). For both test sets, nn-Unet achieved the highest DSC (internal = 0.86, external = 0.93) and the lowest Hausdorff distances (10.07 and 13.87 mm, respectively) for all tumor classes (p < 0.001). With sparsified training, missing MRI sequences did not significantly affect performance. nn-Unet achieves accurate segmentations in clinical settings even in the presence of incomplete MRI datasets. This facilitates future clinical adoption of automated glioma segmentation, which could help inform treatment planning and glioma monitoring.
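Sparsified training, which the study credits for robustness to missing sequences, is typically implemented by randomly zeroing whole input channels during training so the network learns to cope with absent modalities at inference time. A hedged toy version of such an augmentation step (names and defaults invented; the published pipeline may differ):

```python
import numpy as np

def drop_modalities(x, p_drop=0.3, rng=None):
    """Randomly zero out whole MRI channels, always keeping at least one.

    x: array of shape (C, H, W) with one channel per MRI sequence
       (e.g. T1, T1-Ce, T2, FLAIR).
    Returns (augmented array, boolean vector of kept channels).
    """
    if rng is None:
        rng = np.random.default_rng()
    c = x.shape[0]
    keep = rng.random(c) >= p_drop
    if not keep.any():                      # never drop every sequence
        keep[rng.integers(c)] = True
    return x * keep[:, None, None], keep
```

Applied per training sample, this exposes the network to every plausible pattern of missing sequences, so a case missing, say, FLAIR at test time no longer falls outside the training distribution.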
Affiliation(s)
- Hugh G Pemberton
  - Centre for Medical Image Computing (CMIC), University College London, London, UK
  - Neuroradiological Academic Unit, UCL Queen Square Institute of Neurology, University College London, London, UK
- Jiaming Wu
  - Centre for Medical Image Computing (CMIC), University College London, London, UK
- Ivar Kommers
  - Neurosurgical Center Amsterdam, Amsterdam UMC, Vrije Universiteit, Amsterdam, The Netherlands
- Domenique M J Müller
  - Neurosurgical Center Amsterdam, Amsterdam UMC, Vrije Universiteit, Amsterdam, The Netherlands
- Yipeng Hu
  - Centre for Medical Image Computing (CMIC), University College London, London, UK
- Olivia Goodkin
  - Centre for Medical Image Computing (CMIC), University College London, London, UK
  - Neuroradiological Academic Unit, UCL Queen Square Institute of Neurology, University College London, London, UK
- Sjoerd B Vos
  - Centre for Medical Image Computing (CMIC), University College London, London, UK
  - Neuroradiological Academic Unit, UCL Queen Square Institute of Neurology, University College London, London, UK
- Sotirios Bisdas
  - Neuroradiological Academic Unit, UCL Queen Square Institute of Neurology, University College London, London, UK
- Pierre A Robe
  - Department of Neurology & Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Hilko Ardon
  - Department of Neurosurgery, St. Elisabeth Hospital, Tilburg, The Netherlands
- Lorenzo Bello
  - Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Università degli Studi di Milano, Milan, Italy
- Marco Rossi
  - Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Università degli Studi di Milano, Milan, Italy
- Tommaso Sciortino
  - Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Università degli Studi di Milano, Milan, Italy
- Marco Conti Nibali
  - Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Università degli Studi di Milano, Milan, Italy
- Mitchel S Berger
  - Department of Neurological Surgery, University of California, San Francisco, CA, USA
- Shawn L Hervey-Jumper
  - Department of Neurological Surgery, University of California, San Francisco, CA, USA
- Wim Bouwknegt
  - Department of Neurosurgery, Medical Center Slotervaart, Amsterdam, The Netherlands
- Julia Furtner
  - Department of Biomedical Imaging and Image-Guided Therapy, Medical University Vienna, Vienna, Austria
- Seunggu J Han
  - Department of Neurological Surgery, Stanford University, Stanford, USA
- Albert J S Idema
  - Department of Neurosurgery, Northwest Clinics, Alkmaar, The Netherlands
- Barbara Kiesel
  - Department of Neurosurgery, Medical University Vienna, Vienna, Austria
- Georg Widhalm
  - Department of Neurosurgery, Medical University Vienna, Vienna, Austria
- Alfred Kloet
  - Department of Neurosurgery, Medical Center Haaglanden, The Hague, The Netherlands
- Michiel Wagemakers
  - Department of Neurosurgery, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
- Aeilko H Zwinderman
  - Department of Clinical Epidemiology and Biostatistics, Academic Medical Center, Amsterdam, The Netherlands
- Sandro M Krieg
  - TUM-Neuroimaging Center, Klinikum rechts der Isar, Technische Universität München, Munich, Germany
  - Department of Neurosurgery, Klinikum rechts der Isar, Technische Universität München, Munich, Germany
- Ferran Prados
  - Centre for Medical Image Computing (CMIC), University College London, London, UK
  - Department of Neuroinflammation, Faculty of Brain Sciences, Queen Square MS Centre, UCL Institute of Neurology, University College London, London, UK
  - e-Health Center, Universitat Oberta de Catalunya, Barcelona, Spain
- Philip de Witt Hamer
  - Neurosurgical Center Amsterdam, Amsterdam UMC, Vrije Universiteit, Amsterdam, The Netherlands
- Frederik Barkhof
  - Centre for Medical Image Computing (CMIC), University College London, London, UK
  - Neuroradiological Academic Unit, UCL Queen Square Institute of Neurology, University College London, London, UK
  - Radiology & Nuclear Medicine, VU University Medical Center, Amsterdam, The Netherlands
- Roelant S Eijgelaar
  - Neurosurgical Center Amsterdam, Amsterdam UMC, Vrije Universiteit, Amsterdam, The Netherlands
6
Bianconi A, Rossi LF, Bonada M, Zeppa P, Nico E, De Marco R, Lacroce P, Cofano F, Bruno F, Morana G, Melcarne A, Ruda R, Mainardi L, Fiaschi P, Garbossa D, Morra L. Deep learning-based algorithm for postoperative glioblastoma MRI segmentation: a promising new tool for tumor burden assessment. Brain Inform 2023; 10:26. PMID: 37801128; PMCID: PMC10558414; DOI: 10.1186/s40708-023-00207-6.
Abstract
OBJECTIVE Clinical and surgical decisions for glioblastoma patients depend on a tumor imaging-based evaluation. Artificial intelligence (AI) can be applied to magnetic resonance imaging (MRI) assessment to support clinical practice, surgery planning, and prognostic predictions. In a real-world context, the current obstacles for AI are low-quality imaging and postoperative reliability. The aim of this study is to train an automatic algorithm for glioblastoma segmentation on a clinical MRI dataset and to obtain reliable results both pre- and post-operatively. METHODS The dataset comprises 237 MRIs (71 preoperative and 166 postoperative) from 71 patients affected by a histologically confirmed grade IV glioma. The implemented U-Net architecture was trained by transfer learning to perform the segmentation task on postoperative MRIs; training was first carried out on the BraTS2021 dataset for preoperative segmentation. Performance is evaluated using the Dice score (DS) and the 95th-percentile Hausdorff distance (H95). RESULTS In the preoperative scenario, the overall DS is 91.09 (± 0.60) and H95 is 8.35 (± 1.12), considering tumor core, enhancing tumor (ET), and whole tumor (ET and edema). In the postoperative context, the overall DS is 72.31 (± 2.88) and H95 is 23.43 (± 7.24), considering resection cavity (RC), gross tumor volume (GTV), and whole tumor (WT). Remarkably, RC segmentation obtained a mean DS of 63.52 (± 8.90) in postoperative MRIs. CONCLUSIONS The performance achieved by the algorithm is consistent with previous literature for both preoperative and postoperative glioblastoma MRI evaluation. The proposed algorithm reduces the impact of low-quality images and missing sequences.
Affiliation(s)
- Andrea Bianconi
  - Neurosurgery, Department of Neuroscience, University of Turin, via Cherasco 15, 10126, Turin, Italy
- Marta Bonada
  - Neurosurgery, Department of Neuroscience, University of Turin, via Cherasco 15, 10126, Turin, Italy
- Pietro Zeppa
  - Neurosurgery, Department of Neuroscience, University of Turin, via Cherasco 15, 10126, Turin, Italy
- Elsa Nico
  - Department of Neurosurgery, Barrow Neurological Institute, St. Joseph's Hospital and Medical Center, Phoenix, AZ, USA
- Raffaele De Marco
  - Neurosurgery, Department of Neuroscience, University of Turin, via Cherasco 15, 10126, Turin, Italy
- Fabio Cofano
  - Neurosurgery, Department of Neuroscience, University of Turin, via Cherasco 15, 10126, Turin, Italy
- Francesco Bruno
  - Neurooncology, Department of Neuroscience, University of Turin, Turin, Italy
- Giovanni Morana
  - Neuroradiology, Department of Neuroscience, University of Turin, Turin, Italy
- Antonio Melcarne
  - Neurosurgery, Department of Neuroscience, University of Turin, via Cherasco 15, 10126, Turin, Italy
- Roberta Ruda
  - Neurooncology, Department of Neuroscience, University of Turin, Turin, Italy
- Luca Mainardi
  - Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milan, Italy
- Pietro Fiaschi
  - IRCCS Ospedale Policlinico S. Martino, Genoa, Italy
  - Dipartimento di Neuroscienze, Riabilitazione, Oftalmologia, Genetica e Scienze Materno-Infantili, University of Genoa, Genoa, Italy
- Diego Garbossa
  - Neurosurgery, Department of Neuroscience, University of Turin, via Cherasco 15, 10126, Turin, Italy
- Lia Morra
  - Dipartimento di Automatica e Informatica, Politecnico di Torino, Turin, Italy
7
Hagiwara A, Fujita S, Kurokawa R, Andica C, Kamagata K, Aoki S. Multiparametric MRI: From Simultaneous Rapid Acquisition Methods and Analysis Techniques Using Scoring, Machine Learning, Radiomics, and Deep Learning to the Generation of Novel Metrics. Invest Radiol 2023; 58:548-560. PMID: 36822661; PMCID: PMC10332659; DOI: 10.1097/rli.0000000000000962.
Abstract
ABSTRACT With recent advancements in rapid imaging methods, higher numbers of contrasts and quantitative parameters can be acquired in less and less time. Some acquisition models simultaneously obtain multiparametric images and quantitative maps to reduce scan times and avoid potential issues associated with the registration of different images. Multiparametric magnetic resonance imaging (MRI) has the potential to provide complementary information on a target lesion and thus overcome the limitations of individual techniques. In this review, we introduce methods to acquire multiparametric MRI data in a clinically feasible scan time, with a particular focus on simultaneous acquisition techniques, and we discuss how multiparametric MRI data can be analyzed as a whole rather than each parameter separately. Such data analysis approaches include clinical scoring systems, machine learning, radiomics, and deep learning. Other techniques combine multiple images to create new quantitative maps associated with meaningful aspects of human biology. These include the magnetic resonance g-ratio, the ratio of the inner to the outer diameter of a myelinated nerve fiber, and the aerobic glycolytic index, which captures the metabolic status of tumor tissues.
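As a concrete example of such combined metrics, the aggregate MR g-ratio is commonly estimated voxel-wise from a myelin volume fraction (MVF) map and an axon volume fraction (AVF) map as g = sqrt(AVF / (AVF + MVF)), since g² equals the axon cross-sectional area over the total fiber area. A sketch under that assumption (function name and epsilon guard are invented, not from the review):

```python
import numpy as np

def mr_g_ratio(mvf, avf, eps=1e-9):
    """Aggregate MR g-ratio map from myelin (MVF) and axon (AVF) volume-fraction maps.

    g = sqrt(AVF / (AVF + MVF)): the ratio of inner (axon) to outer
    (axon + myelin) fiber diameter, so 0 < g < 1 wherever myelin is present.
    """
    mvf = np.asarray(mvf, dtype=float)
    avf = np.asarray(avf, dtype=float)
    return np.sqrt(avf / (avf + mvf + eps))
```

More myelin for the same axon volume lowers g, which matches the anatomical picture of a thicker myelin sheath around the same axon.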
Affiliation(s)
- Akifumi Hagiwara
  - Department of Radiology, Juntendo University School of Medicine, Tokyo, Japan
- Shohei Fujita
  - Department of Radiology, Juntendo University School of Medicine, Tokyo, Japan
  - Department of Radiology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Ryo Kurokawa
  - Department of Radiology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
  - Division of Neuroradiology, Department of Radiology, University of Michigan, Ann Arbor, Michigan
- Christina Andica
  - Department of Radiology, Juntendo University School of Medicine, Tokyo, Japan
- Koji Kamagata
  - Department of Radiology, Juntendo University School of Medicine, Tokyo, Japan
- Shigeki Aoki
  - Department of Radiology, Juntendo University School of Medicine, Tokyo, Japan
8
Yang Z, Hu Z, Ji H, Lafata K, Vaios E, Floyd S, Yin FF, Wang C. A neural ordinary differential equation model for visualizing deep neural network behaviors in multi-parametric MRI-based glioma segmentation. Med Phys 2023; 50:4825-4838. PMID: 36840621; PMCID: PMC10440249; DOI: 10.1002/mp.16286.
Abstract
PURPOSE To develop a neural ordinary differential equation (ODE) model for visualizing deep neural network behavior during multi-parametric MRI-based glioma segmentation as a method to enhance deep learning explainability. METHODS By hypothesizing that deep feature extraction can be modeled as a spatiotemporally continuous process, we implemented a novel deep learning model, Neural ODE, in which deep feature extraction was governed by an ODE parameterized by a neural network. The dynamics of (1) MR images after interactions with the deep neural network and (2) segmentation formation can thus be visualized after solving the ODE. An accumulative contribution curve (ACC) was designed to quantitatively evaluate each MR image's utilization by the deep neural network toward the final segmentation results. The proposed Neural ODE model was demonstrated using 369 glioma patients with a 4-modality multi-parametric MRI protocol: T1, contrast-enhanced T1 (T1-Ce), T2, and FLAIR. Three Neural ODE models were trained to segment enhancing tumor (ET), tumor core (TC), and whole tumor (WT), respectively. The key MRI modalities with significant utilization by deep neural networks were identified based on ACC analysis. Segmentation results by deep neural networks using only the key MRI modalities were compared to those using all four MRI modalities in terms of Dice coefficient, accuracy, sensitivity, and specificity. RESULTS All Neural ODE models successfully illustrated image dynamics as expected. ACC analysis identified T1-Ce as the only key modality in ET and TC segmentations, while both FLAIR and T2 were key modalities in WT segmentation. Compared to the U-Net results using all four MRI modalities, the Dice coefficient of ET (0.784→0.775), TC (0.760→0.758), and WT (0.841→0.837) using the key modalities only had minimal differences without significance. Accuracy, sensitivity, and specificity results demonstrated the same patterns. 
CONCLUSION The Neural ODE model offers a new tool for optimizing the deep learning model inputs with enhanced explainability. The presented methodology can be generalized to other medical image-related deep-learning applications.
Affiliation(s)
- Zhenyu Yang: Department of Radiation Oncology, Duke University, Durham, NC 27710
- Zongsheng Hu: Medical Physics Graduate Program, Duke Kunshan University, Kunshan, Jiangsu 215316, China
- Hangjie Ji: Department of Mathematics, North Carolina State University, Raleigh, NC 27695
- Kyle Lafata: Departments of Radiation Oncology, Radiology, and Electrical and Computer Engineering, Duke University, Durham, NC 27710
- Eugene Vaios: Department of Radiation Oncology, Duke University, Durham, NC 27710
- Scott Floyd: Department of Radiation Oncology, Duke University, Durham, NC 27710
- Fang-Fang Yin: Department of Radiation Oncology, Duke University, Durham, NC 27710; Medical Physics Graduate Program, Duke Kunshan University, Kunshan, Jiangsu 215316, China
- Chunhao Wang: Department of Radiation Oncology, Duke University, Durham, NC 27710
|
9
|
Eidex Z, Ding Y, Wang J, Abouei E, Qiu RL, Liu T, Wang T, Yang X. Deep Learning in MRI-guided Radiation Therapy: A Systematic Review. ARXIV 2023:arXiv:2303.11378v2. [PMID: 36994167] [PMCID: PMC10055493]
Abstract
MRI-guided radiation therapy (MRgRT) offers a precise and adaptive approach to treatment planning. Deep learning applications that augment the capabilities of MRgRT are systematically reviewed, with emphasis placed on the underlying methods. Studies are further categorized into the areas of segmentation, synthesis, radiomics, and real-time MRI. Finally, clinical implications, current challenges, and future directions are discussed.
Affiliation(s)
- Zach Eidex: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA; School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA
- Yifu Ding: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Jing Wang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Elham Abouei: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Richard L.J. Qiu: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Tian Liu: Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, NY
- Tonghe Wang: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY
- Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA; School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA
|
10
|
Lee J, Liu C, Kim J, Chen Z, Sun Y, Rogers JR, Chung WK, Weng C. Deep learning for rare disease: A scoping review. J Biomed Inform 2022; 135:104227. [DOI: 10.1016/j.jbi.2022.104227]
|
11
|
Savjani RR, Lauria M, Bose S, Deng J, Yuan Y, Andrearczyk V. Automated Tumor Segmentation in Radiotherapy. Semin Radiat Oncol 2022; 32:319-329. [DOI: 10.1016/j.semradonc.2022.06.002]
|
12
|
Nadkarni P, Merchant SA. Enhancing medical-imaging artificial intelligence through holistic use of time-tested key imaging and clinical parameters: Future insights. Artif Intell Med Imaging 2022; 3:55-69. [DOI: 10.35711/aimi.v3.i3.55]
Abstract
Much of the published literature in Radiology-related Artificial Intelligence (AI) focuses on single tasks, such as identifying the presence, absence, or severity of specific lesions. Progress comparable to that achieved for general-purpose computer vision has been hampered by the unavailability of large and diverse radiology datasets containing different types of lesions, possibly with multiple kinds of abnormalities in the same image. Also, since a diagnosis is rarely achieved through an image alone, radiology AI must be able to employ diverse strategies that consider all available evidence, not just imaging information; using key imaging and clinical signs will improve the accuracy and utility of these systems tremendously. Employing strategies that consider all available evidence will be a formidable task; we believe that the combination of human and computer intelligence will be superior to either one alone. Further, unless an AI application is explainable, radiologists will not trust it to be either reliable or bias-free; we discuss some approaches aimed at providing better explanations, as well as regulatory concerns regarding explainability (“transparency”). Finally, we look at federated learning, which allows pooling data from multiple locales while maintaining data privacy to create more generalizable and reliable models, and quantum computing, still prototypical but potentially revolutionary in its computing impact.
Affiliation(s)
- Prakash Nadkarni: College of Nursing, University of Iowa, Iowa City, IA 52242, United States
- Suleman Adam Merchant: Department of Radiology, LTM Medical College & LTM General Hospital, Mumbai 400022, Maharashtra, India
|
13
|
Guo P, Unberath M, Heo HY, Eberhart CG, Lim M, Blakeley JO, Jiang S. Learning-based analysis of amide proton transfer-weighted MRI to identify true progression in glioma patients. Neuroimage Clin 2022; 35:103121. [PMID: 35905666] [PMCID: PMC9421489] [DOI: 10.1016/j.nicl.2022.103121]
Affiliation(s)
- Pengfei Guo: Department of Radiology, Johns Hopkins University, Baltimore, MD, USA; Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Mathias Unberath: Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Hye-Young Heo: Department of Radiology, Johns Hopkins University, Baltimore, MD, USA
- Michael Lim: Department of Neurosurgery, Johns Hopkins University, Baltimore, MD, USA
- Shanshan Jiang: Department of Radiology, Johns Hopkins University, Baltimore, MD, USA
|
14
|
Lai YM, Boer C, Eijgelaar RS, van den Brom CE, de Witt Hamer P, Schober P. Predictors for time to awake in patients undergoing awake craniotomies. J Neurosurg 2021:1-7. [PMID: 34678766] [DOI: 10.3171/2021.6.jns21320]
Abstract
OBJECTIVE Awake craniotomies are often characterized by alternating asleep-awake-asleep periods. Preceding the awake phase, patients are weaned from anesthesia and mechanical ventilation. Although clinicians aim to minimize the time to awake for patient safety and operating room efficiency, in some patients, the time to awake exceeds 20 minutes. The goal of this study was to determine the average time to awake and the factors associated with prolonged time to awake (> 20 minutes) in patients undergoing awake craniotomy. METHODS Records of patients who underwent awake craniotomy between 2003 and 2020 were evaluated. Time to awake was defined as the time between discontinuation of propofol and remifentanil infusion and the time of extubation. Patient and perioperative characteristics were explored as predictors for time to awake using logistic regression analyses. RESULTS Data of 307 patients were analyzed. The median (IQR) time to awake was 13 (10-20) minutes and exceeded 20 minutes in 17% (95% CI 13%-21%) of the patients. In both univariate and multivariable analyses, increased age, nonsmoker status, and American Society of Anesthesiologists (ASA) class III versus II were associated with a time to awake exceeding 20 minutes. BMI, as well as the use of alcohol, drugs, dexamethasone, or antiepileptic agents, was not significantly associated with the time to awake. CONCLUSIONS While most patients undergoing awake craniotomy are awake within a reasonable time frame after discontinuation of propofol and remifentanil infusion, time to awake exceeded 20 minutes in 17% of the patients. Increasing age, nonsmoker status, and higher ASA classification were found to be associated with a prolonged time to awake.
Affiliation(s)
- Roelant S Eijgelaar: Neurosurgical Center Amsterdam, Amsterdam University Medical Centers, Vrije Universiteit Amsterdam, The Netherlands
- Philip de Witt Hamer: Neurosurgery, Amsterdam University Medical Centers, VU University Medical Center, Amsterdam, The Netherlands
|
15
|
Kommers I, Bouget D, Pedersen A, Eijgelaar RS, Ardon H, Barkhof F, Bello L, Berger MS, Conti Nibali M, Furtner J, Fyllingen EH, Hervey-Jumper S, Idema AJS, Kiesel B, Kloet A, Mandonnet E, Müller DMJ, Robe PA, Rossi M, Sagberg LM, Sciortino T, van den Brink WA, Wagemakers M, Widhalm G, Witte MG, Zwinderman AH, Reinertsen I, Solheim O, De Witt Hamer PC. Glioblastoma Surgery Imaging-Reporting and Data System: Standardized Reporting of Tumor Volume, Location, and Resectability Based on Automated Segmentations. Cancers (Basel) 2021; 13:2854. [PMID: 34201021] [PMCID: PMC8229389] [DOI: 10.3390/cancers13122854]
Abstract
Treatment decisions for patients with presumed glioblastoma are based on tumor characteristics available from a preoperative MR scan. Tumor characteristics, including volume, location, and resectability, are often estimated or manually delineated. This process is time-consuming and subjective; hence, comparisons across cohorts, trials, or registries are subject to assessment bias. In this study, we propose a standardized Glioblastoma Surgery Imaging Reporting and Data System (GSI-RADS) based on an automated method of tumor segmentation that provides standard reports on tumor features that are potentially relevant for glioblastoma surgery. As clinical validation, we determine the agreement in extracted tumor features between the automated method and the current standard of manual segmentations from routine clinical MR scans before treatment. In an observational consecutive cohort of 1596 adult patients with a first-time surgery of a glioblastoma from 13 institutions, we segmented gadolinium-enhanced tumor parts both by a human rater and by an automated algorithm. Tumor features were extracted from the segmentations of both methods and compared to assess differences, concordance, and equivalence. The laterality, contralateral infiltration, and laterality indices were in excellent agreement. The native and normalized tumor volumes had excellent agreement, consistency, and equivalence. Multifocality, but not the number of foci, had good agreement and equivalence. The location profiles of cortical and subcortical structures were in excellent agreement. The expected residual tumor volumes and resectability indices had excellent agreement, consistency, and equivalence. Tumor probability maps were in good agreement. In conclusion, automated segmentations are in excellent agreement with manual segmentations and practically equivalent regarding tumor features that are potentially relevant for neurosurgical purposes. Standard GSI-RADS reports can be generated by open-access software.
Affiliation(s)
- Ivar Kommers: Department of Neurosurgery, Amsterdam University Medical Centers, Vrije Universiteit, 1081 HV Amsterdam, The Netherlands; Cancer Center Amsterdam, Brain Tumor Center, Amsterdam University Medical Centers, 1081 HV Amsterdam, The Netherlands
- David Bouget: Department of Health Research, SINTEF Digital, NO-7465 Trondheim, Norway
- André Pedersen: Department of Health Research, SINTEF Digital, NO-7465 Trondheim, Norway
- Roelant S. Eijgelaar: Department of Neurosurgery, Amsterdam University Medical Centers, Vrije Universiteit, 1081 HV Amsterdam, The Netherlands; Cancer Center Amsterdam, Brain Tumor Center, Amsterdam University Medical Centers, 1081 HV Amsterdam, The Netherlands
- Hilko Ardon: Department of Neurosurgery, Twee Steden Hospital, 5042 AD Tilburg, The Netherlands
- Frederik Barkhof: Department of Radiology and Nuclear Medicine, Amsterdam University Medical Centers, Vrije Universiteit, 1081 HV Amsterdam, The Netherlands; Institutes of Neurology and Healthcare Engineering, University College London, London WC1E 6BT, UK
- Lorenzo Bello: Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Humanitas Research Hospital, Università Degli Studi di Milano, 20122 Milano, Italy
- Mitchel S. Berger: Department of Neurological Surgery, University of California San Francisco, San Francisco, CA 94143, USA
- Marco Conti Nibali: Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Humanitas Research Hospital, Università Degli Studi di Milano, 20122 Milano, Italy
- Julia Furtner: Department of Biomedical Imaging and Image-Guided Therapy, Medical University Vienna, 1090 Wien, Austria
- Even H. Fyllingen: Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, NO-7491 Trondheim, Norway; Department of Radiology and Nuclear Medicine, St. Olav’s Hospital, Trondheim University Hospital, NO-7030 Trondheim, Norway
- Shawn Hervey-Jumper: Department of Neurological Surgery, University of California San Francisco, San Francisco, CA 94143, USA
- Albert J. S. Idema: Department of Neurosurgery, Northwest Clinics, 1815 JD Alkmaar, The Netherlands
- Barbara Kiesel: Department of Neurosurgery, Medical University Vienna, 1090 Wien, Austria
- Alfred Kloet: Department of Neurosurgery, Haaglanden Medical Center, 2512 VA The Hague, The Netherlands
- Emmanuel Mandonnet: Department of Neurological Surgery, Hôpital Lariboisière, 75010 Paris, France
- Domenique M. J. Müller: Department of Neurosurgery, Amsterdam University Medical Centers, Vrije Universiteit, 1081 HV Amsterdam, The Netherlands; Cancer Center Amsterdam, Brain Tumor Center, Amsterdam University Medical Centers, 1081 HV Amsterdam, The Netherlands
- Pierre A. Robe: Department of Neurology and Neurosurgery, University Medical Center Utrecht, 3584 CX Utrecht, The Netherlands
- Marco Rossi: Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Humanitas Research Hospital, Università Degli Studi di Milano, 20122 Milano, Italy
- Lisa M. Sagberg: Department of Neurosurgery, St. Olav’s Hospital, Trondheim University Hospital, NO-7030 Trondheim, Norway
- Tommaso Sciortino: Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Humanitas Research Hospital, Università Degli Studi di Milano, 20122 Milano, Italy
- Michiel Wagemakers: Department of Neurosurgery, University Medical Center Groningen, University of Groningen, 9713 GZ Groningen, The Netherlands
- Georg Widhalm: Department of Neurosurgery, Medical University Vienna, 1090 Wien, Austria
- Marnix G. Witte: Department of Radiation Oncology, The Netherlands Cancer Institute, 1066 CX Amsterdam, The Netherlands
- Aeilko H. Zwinderman: Department of Clinical Epidemiology and Biostatistics, Amsterdam University Medical Centers, University of Amsterdam, 1105 AZ Amsterdam, The Netherlands
- Ingerid Reinertsen: Department of Health Research, SINTEF Digital, NO-7465 Trondheim, Norway; Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, NO-7491 Trondheim, Norway
- Ole Solheim: Department of Neurosurgery, St. Olav’s Hospital, Trondheim University Hospital, NO-7030 Trondheim, Norway; Department of Neuromedicine and Movement Science, Norwegian University of Science and Technology, NO-7491 Trondheim, Norway
- Philip C. De Witt Hamer: Department of Neurosurgery, Amsterdam University Medical Centers, Vrije Universiteit, 1081 HV Amsterdam, The Netherlands; Cancer Center Amsterdam, Brain Tumor Center, Amsterdam University Medical Centers, 1081 HV Amsterdam, The Netherlands
|
16
|
Ankenbrand MJ, Shainberg L, Hock M, Lohr D, Schreiber LM. Sensitivity analysis for interpretation of machine learning based segmentation models in cardiac MRI. BMC Med Imaging 2021; 21:27. [PMID: 33588786] [PMCID: PMC7885570] [DOI: 10.1186/s12880-021-00551-1]
Abstract
BACKGROUND Image segmentation is a common task in medical imaging, e.g., for volumetry analysis in cardiac MRI. Artificial neural networks are used to automate this task with performance similar to that of manual operators. However, this performance is only achieved on the narrow tasks the networks are trained on, and it drops dramatically when data characteristics differ from the training set properties. Moreover, neural networks are commonly considered black boxes, because it is hard to understand how they make decisions and why they fail; it is therefore also hard to predict whether they will generalize and work well with new data. Here we present a generic method for segmentation model interpretation. Sensitivity analysis is an approach in which the model input is modified in a controlled manner and the effect of these modifications on the model output is evaluated. This method yields insights into the sensitivity of the model to these alterations and therefore into the importance of certain features for segmentation performance. RESULTS We present an open-source Python library (misas) that facilitates the use of sensitivity analysis with arbitrary data and models. We show that this method is a suitable approach to answer practical questions regarding the use and functionality of segmentation models, and we demonstrate this in two case studies on cardiac magnetic resonance imaging. The first case study explores the suitability of a published network for use on a public dataset the network has not been trained on. The second case study demonstrates how sensitivity analysis can be used to evaluate the robustness of a newly trained model. CONCLUSIONS Sensitivity analysis is a useful tool for deep learning developers as well as users such as clinicians. It extends their toolbox, enabling and improving interpretability of segmentation models, and enhancing our understanding of neural networks through sensitivity analysis also assists in decision making. Although demonstrated only on cardiac magnetic resonance images, this approach and software are much more broadly applicable.
Affiliation(s)
- Markus J Ankenbrand: Chair of Cellular and Molecular Imaging, Comprehensive Heart Failure Center (CHFC), University Hospital Würzburg, Am Schwarzenberg 15, 97078 Würzburg, Germany
- Liliia Shainberg: Chair of Cellular and Molecular Imaging, Comprehensive Heart Failure Center (CHFC), University Hospital Würzburg, Am Schwarzenberg 15, 97078 Würzburg, Germany
- Michael Hock: Chair of Cellular and Molecular Imaging, Comprehensive Heart Failure Center (CHFC), University Hospital Würzburg, Am Schwarzenberg 15, 97078 Würzburg, Germany
- David Lohr: Chair of Cellular and Molecular Imaging, Comprehensive Heart Failure Center (CHFC), University Hospital Würzburg, Am Schwarzenberg 15, 97078 Würzburg, Germany
- Laura M Schreiber: Chair of Cellular and Molecular Imaging, Comprehensive Heart Failure Center (CHFC), University Hospital Würzburg, Am Schwarzenberg 15, 97078 Würzburg, Germany
|
17
|
Ebrahimi Zade A, Shahabi Haghighi S, Soltani M. A neuro evolutionary algorithm for patient calibrated prediction of survival in Glioblastoma patients. J Biomed Inform 2021; 115:103694. [PMID: 33545332] [DOI: 10.1016/j.jbi.2021.103694]
Abstract
BACKGROUND AND OBJECTIVES Glioblastoma multiforme (GBM) is the most common and malignant type of primary brain tumor. Radiation therapy (RT) plus concomitant and adjuvant temozolomide (TMZ) constitutes the standard treatment of GBM. Existing models for GBM growth do not consider the effect of different schedules on tumor growth and patient survival. However, clinical trials show that treatment schedule and drug dosage significantly affect patient survival. The goal is to provide a patient-calibrated model for predicting survival according to the treatment schedule. METHODS We propose a top-down method based on artificial neural networks (ANN) and a genetic algorithm (GA) to predict survival of GBM patients. A feed-forward undercomplete autoencoder network is integrated with the neuro-evolutionary (NE) algorithm in order to extract a compressed representation of the input clinical data. The proposed NE algorithm uses GA to obtain the optimal architecture of a multi-layer perceptron (MLP). A Taguchi L16 orthogonal design of experiments is used to tune parameters of the proposed NE algorithm. Finally, the optimal MLP is used to predict survival of GBM patients. RESULTS Data from 8 related clinical trials have been collected and integrated to train the model. Of 847 evaluable cases, 719 were used for training and validation and the remaining 128 cases were used to test the model. The mean absolute error of the predictions on the test data is 0.087 months, which shows excellent performance of the proposed model in predicting survival of the patients. The results also show that the proposed NE algorithm is superior to other existing models in both the mean and variability of the prediction error.
Affiliation(s)
- Amir Ebrahimi Zade: Faculty of Industrial Engineering and Systems Management, Amirkabir University of Technology, Tehran, Iran
- M Soltani: Faculty of Mechanical Engineering, K.N. Toosi University of Technology, Tehran, Iran; Advanced Bioengineering Initiative Center, Computational Medicine Center, K. N. Toosi University of Technology, Tehran, Iran; Centre for Biotechnology and Bioengineering (CBB), University of Waterloo, Waterloo, ON, Canada; Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON, Canada
|